MacOS Catalina: Slow by Design? (sigpipe.macromates.com)

It seems like there is a lot of confusion here as to whether this is real or not. I've been able to confirm the behavior in the post by:

- Using a new, random executable. Even echo $rand_int will work. Edit: What I mean here is generate your rand int beforehand and statically include it in your script.

- Using a fresh filename too. Just throw a rand int at the end there. e.g. /tmp/test4329.sh

I MITM'd myself while recording the network traffic and, sure enough, there is a request to ocsp.apple.com with a hash in the URL path and a bunch of binary data in the response body. I'm not sure what it is yet, but the URL suggests it is checking the revocation status of a certificate associated with the binary. See: https://en.wikipedia.org/wiki/Online_Certificate_Status_Prot...

Here's the URL I saw:

http://ocsp.apple.com/ocsp-devid01/ME4wTKADAgEAMEUwQzBBMAkGB...

Edit2: Anyone know what this hash format is? It's not quite base64, nor is it multiple base64 strings separated with '+'s but it seems similar...

Edit3: Here is the exact filename and file I used: https://gist.github.com/UsmannK/abb4b239c98ee45bdfcc5b284bf0...

Edit4 (final one probably...): On subsequent attempts I'm only seeing a request to https://api.apple-cloudkit.com and not the OCSP one anymore. Curiously, there are no headers at all. It is just checking for connectivity.
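If you want to check this yourself, a minimal sketch of watching for the phone-home traffic while re-running a fresh script (the hostnames are just the two observed above; adjust -i en0 to your active interface):

  sudo tcpdump -i en0 -n 'host ocsp.apple.com or host api.apple-cloudkit.com'

The payload is TLS, so this only confirms that the connections happen; for the contents you still need a MITM proxy as above.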


Here's a shell script that uses a random file name and has friendlier output.

  RAND_FILE="/tmp/test-$RANDOM.sh";
  time_helper() { /usr/bin/time $RAND_FILE 2>&1 | tail -1 | awk '{print $1}'; }  # this just returns the real run time
  echo $'#!/bin/sh\necho Hello' $RANDOM > $RAND_FILE && chmod a+x  $RAND_FILE;
  echo "Testing $RAND_FILE";
  echo "execution time #1: $(time_helper) seconds";
  echo "execution time #2: $(time_helper) seconds";
Introducing a network delay makes the effect much more obvious. Normally I see a delay of about 0.1 seconds, but after using the Xcode Network Link Conditioner (pf rules) to add 500ms latency to everything, the delay shoots way up to ~2 seconds.

example output:

  Testing /tmp/test-24411.sh
  execution time #1: 2.32 seconds
  execution time #2: 0.00 seconds
with developer tools checked both executions report "0.0 seconds".

I tried just blocking "api.apple-cloudkit.com" with /etc/hosts. This reduces the delay but doesn't eliminate it. A connection attempt is still made every time. (I don't recommend making this change permanent. Just give your terminal app the "Developer Tools" permission instead.)

After blocking that domain I can see that tccd and syspolicyd are logging some error messages to the console related to the failed connection. I don't recommend blocking because my guess is that'll put syspolicyd/tccd in some unexpected state and they'll repeatedly keep trying to make requests.

Try this for watching security related console log messages:

  sudo log stream --debug --info --predicate "processImagePath contains 'tccd' OR processImagePath contains 'syspolicyd' OR processImagePath Contains[c] 'taskgated' OR processImagePath contains 'trustd' OR eventMessage Contains[c] 'malware' OR senderImagePath Contains[c] 'security' "
syspolicyd explicitly logs when it makes the network request.

   syspolicyd: cloudkit record fetch: https://api.apple-cloudkit.com/database/1/com.apple.gk.ticket-delivery/production/public/records/lookup, 2/2/23de35......
(you need to enable private logging to see that url)

Enabling private logging is fairly annoying these days, unfortunately. (Interestingly, if macOS thinks you're AppleInternal, it will make it just as annoying to disable private logging…)

Wait a sec... I recognize that name. I only know how to enable private logging thanks to your detailed and informative blog post! Seriously, it's one of my favorite macOS things I've read in a while. I loved the step-by-step walkthrough using gdb you showed.

Though just today I saw that apparently an enterprise policy config can enable private logging in 10.15.3+ without having to disable SIP. https://georgegarside.com/blog/macos/sierra-console-private/

For reference for others: this is the blog post by OP on enabling private logging in Catalina. check it out! https://saagarjha.com/blog/2019/09/29/making-os-log-public-o...


I’m glad you appreciated it, but I think it also happened to be some of the fastest-to-deteriorate advice I’ve given :) I should go back and revisit this, as on my system I have it currently stuck in a state where it unconditionally enables private data logging at boot (which means my crash logs have personal information in them unless I remember to turn it off with the workaround I’ve been using until now…)

Huh, this is crazy. 2 seconds is way too slow, and this shouldn't involve any network activity. Seems like a real problem.

They added artificial network latency to the config, just as they describe. That is the reason for the delay; it is made artificially long on purpose.

It’s not an unreasonable delay on a slow 3G hotspot. It’s problematic to have performance tied to network speed and to suffer overall slowness because your network happens to be slow.

Have I written anything that contradicts that? I simply pointed out that in the example the delay was artificial, and that it was definitely due to the network, not something other than the network, as the comment suggested.

It's called lockdown for a reason. Apple was just the first to implement centralized binary blacklisting and revocation. They call it notarization.

The problem is that they did it unannounced. There must be some really weird stuff going on in those managers' heads. How can they possibly think they'll get away with that?


There were announcements about notarization around WWDC last year. They didn't get a lot of media traction, but there were specific pages detailing what's required of a developer and some basic details of how it would work.

From April 10, 2019: https://developer.apple.com/news/?id=04102019a

https://developer.apple.com/documentation/xcode/notarizing_m...


For each and every shell or perl script that I create and use privately? No, certainly not.

Command line apps aren't affected by Notarization.

If you're compiling something yourself, the compiler won't put a quarantine bit on it and it will execute fine. Same with homebrew/friends.

Scripts don't need to be signed. There is something else going on here.


It seems that, in fact, even though scripts aren't signed, if you DON'T have Developer Tools enabled for a given terminal, scripts are hashed and checked against known-bad digests.

not a big deal, assuming no data is kept.

Also I wonder what it looks like if a script is deemed bad...


There was nothing "unannounced" about it. Notarization was introduced at WWDC 2018 and announced as required at WWDC 2019. Every macOS developer should have been aware of this requirement. It was a special project for my apps.

I believe the concern here is that this is affecting not just macOS developers, but all developers who use macOS. That's an important distinction.

Developers who use macOS as shiny GNU/Linux replacement are only getting what they deserve, they should have supported Linux OEMs to start with.

Those that show up at FOSDEM, carrying their beloved macBooks and iPads while pretending to be into FOSS.

I use Apple devices knowing what they are for, not as a replacement for something else.


Sadly it's not the "shiny"... it's the fact that Mac OS has a GUI that works.

I've been using Linux since the days when you installed Slackware from floppies and recompiled your kernel to get drivers. The command line has always been bliss, but no one has managed to come up with a usable and consistent GUI yet.

Btw, does sleep work on Linux laptops these days? How's HiDPI support?


I've partially switched from macOS to Linux now that Wayland and PipeWire are reaching a mostly functional state, and I'm quite happy with it.

It took me maybe 150 hours to do the switch though during quarantine, and I still haven't managed to be able to properly connect to SMB at work...


What if using macOS enables me to be a more effective FOSS contributor? What if I think that FOSDEM actually has many participants who aren't really into free software?

A developer who uses macOS != a macOS developer. I couldn't care less about what is announced at WWDC.

First? Windows SmartScreen has checked for malicious binaries since Windows 8.

> Problem is, that they did it unannounced.

No, the entire thing is the problem. Windows 10 can still open applications that were compiled in 1994, and it doesn't make it less secure.


Once you start something, it's hard to stop it.

Every software place I've worked gives a special urgency to security stuff.

And even if features don't come out regularly, security updates do. This is more of that.


Isn't this what bloom filters are for?

>Apple was just the very first to implement centralized binary blacklisting

No, AV vendors did it for decades. In a more efficient way though.


Not sure it’s more efficient given how sluggish most AV software used to make my machine...

Not as bad as Catalina

OCSP is Online Certificate Status Protocol, generally used for checking the revocation status of certificates. You used to be able to turn it off in keychain access, but that ability went away in recent macOS releases.

Ah, Apple. When you can no longer innovate, just start removing features and call it simplicity...

Another way to look at it is that Apple is making it harder to run the system in an insecure fashion. You may not agree with that decision, but I certainly appreciate how Apple is looking out for the safety and security of the user.

Tangent: as much as some developers hate that the only way to distribute apps for the iPhone is through the App Store, as a user I consider that walled garden of apps to be a real security benefit. When John Gruber says “If you must use Zoom or simply want to use it, I highly recommend using it on your iPad and iPhone only. The iOS version is sandboxed and reviewed by the App Store.” There’s a reason why he can say things like that and it’s because Apple draws a hard line in the sand that not everyone will be happy with.


> Another way to look at it is that Apple is making it harder to run the system in an insecure fashion. You may not agree with that decision, but I certainly appreciate how Apple is looking out for the safety and security of the user.

"Those who give up freedom for security deserve neither."

(Yes, I know the original intent was slightly different, but that old saying has gotten a lot more vivid recently, as companies are increasingly using the excuse of security to further their own interests and control over their users.)

The ability to control exactly what millions of people can or cannot run on "their" computers is an authoritarian wet dream. People may think Apple's interests align with theirs --- but that is not a certainty. How many times have you been stopped from doing what you wanted to because of Apple? It might not be a lot so far, but can you break free from that relationship when/if it does turn against you?


The quote isn't at all relevant to technical decisions though. Eg, there is enforcement that a program can't arbitrarily access any RAM it likes on the same machine. That is trading freedom for security and it is a good trade. And there isn't really an argument against gatekeeping software - users as a body don't have time to verify that the software they use is secure. I'd be shocked if the median web developer even reads up on all the CVEs for their preferred libraries. Gatekeepers are an overwhelmingly good idea for typical don't-care everyday users.

The issue is if it becomes practically impossible to move away from Apple to an alternative. Given that they have a pretty typical market share in absolute terms that doesn't seem like a risk right now. They don't even hold an absolute majority in what I assume is their strongest market, the US, let alone globally.


So keep a Linux box if you want. Don't shit on people for using a mac.

I can use macOS, Windows 10, and any Linux distribution I want without having to pick just one. That's freedom. I have choices. I choose all of the above in my personal setup. I'll fight to keep my free software but, at the same time, you can pry Logic on the Mac from my cold dead hands. I've been using it for 15 years and I am not going to stop now. Use the best/preferred tool for the job you have to do.


The original quote from Franklin was about liberty, not freedom. A subtle but vitally important distinction, as freedom requires security where liberty does not. If you sacrifice freedom for security you still at least have security, as in a despotism, but if you sacrifice security for freedom you have neither. Conversely, if you sacrifice liberty for security you have less liberty without any increase in security, resulting in a net loss.

That’s not close to the original quote. And it was just Ben Franklin politicking, not the word of god.

Another way to look at it is that Apple is moving towards a future where all software for the mac must be purchased from the app store.

Bubye Apple, my next machine will likely be a Dell Ubuntu.


Yeah, this is the future I've been foreseeing for years. Every new OS update just ever so slightly decreases your ability to control what software is on your device, and how you can use it.

For example, you used to be able to back up your purchased iOS apps to your computer, and restore them from your computer. In one iOS update (9 IIRC?), they removed the ability to back up the apps from your phone. In a later iOS/iTunes update, they removed the ability to restore backed up apps from your computer, making your existing backed-up apps useless, if you still had them.

Now, the only way to keep your software on your iPhone indefinitely is to never delete it, and never reformat your phone. Oh, and never update iOS, because they will break backwards compatibility with apps you already have. For any app that is no longer supported by the developer, you're just out of luck (and I have purchased MANY such apps, having been an iPhone user since 2009).


Mine is already about to be a Linux workstation since, in addition to all the developer hostility of the past few years, Catalina essentially killed off Mac gaming (something like 75% of Mac games are 32-bit?). Prior to that it was merely a joke, but it was nice to have an occasional game to play. Now? Nope, App Store and recently updated game code or GTFO.

Dell Ubuntu is not a good choice; they don't provide proper drivers, and their support has zero knowledge about Linux.

Ubuntu phones home a lot too.

motd-news, apport, snaps, whoopsie, kerneloops, ubuntu-report, unattended-upgrades, ...


> Dell Ubuntu

A casual Manjaro or Arch rolling distro with the AUR is a better choice.


The problem is that there is more than one market here. There is a general market where people love the vendor looking after their security and doing things for them, and there is a pro/hacker market where people want to control things themselves and don't want a lot of this stuff.

This. Yes the option of a walled garden is a great thing and I wouldn't recommend anything but an Apple device to my non-technical relatives. But if Apple also wants to make the $$ that comes from selling "pro" gear, they need to stop relentlessly consumerizing and turning OS X into iOS. I don't think they realize the level of ill will they are engendering in the developer/pro market.

Perhaps it's time for a "Pro" and "Home" Mac OS.


Why can’t they have their walled garden App Store and also allow me to install other app stores?

It’s an authoritarian usurpation of the spirit of property rights. I should be able to decide for myself what software to run on my hardware, Apple HQ’s opinion should be irrelevant.


Why would any developer even want to release their app in walled garden when they can do whatever they want by releasing elsewhere?

On macOS, they do. On a phone, if you want to side load, there’s the option of Android.

Wouldn't a sandboxed Zoom downloaded directly from them be equally secure?

> Wouldn't a sandboxed Zoom downloaded directly from them be equally secure?

More relevantly, wouldn't a sandboxed Zoom downloaded from Apple's store be equally secure even if you could install different apps from developers you trust more outside of the store?


Apple’s rejected a huge number of App updates for security reasons. It’s not a huge benefit, but it does exist.

Yes, but would a typical user know or care if the app they downloaded from a web site was sandboxed and would otherwise have been approved by the App Store if it was submitted there? And if not, how could someone like John Gruber make that claim of safety on anything other than iPhone and iPad? Taking the Zoom example on a parent thread above, look at what happens when you’re installing a Zoom client on the Mac without the strict enforcements of the iOS App Store: https://news.ycombinator.com/item?id=22736608

I don’t really understand this argument. Apple has long been heralded for its safety and security. It’s why in three decades of owning macs we’ve never installed antivirus software.

What is the point of all this security these days? What are they protecting us from?


Who is this Gruber person you quote and why is he relevant here?

He's the person who made the markdown format, which you've used as your username.

Other than that, he's mostly known for writing and talking about Apple.


If Gruber wants to dictate what I run on my computer, maybe he can pay for my computer instead of me.

Honestly I'm trying to think of a reason you would WANT to disable OCSP, I'm having enough problems thinking of more than 2 developers I know who can actually articulate how it works enough to evaluate this. Not that it's complicated—it's just mostly invisible.

Even when OCSP is a problem, generally you're more worried about issuing a new certificate than an immediate workaround. What are you going to do, ask all your customers to go into keychain access to work around your problem?

This slowdown appears to be because Apple is making HTTPS connections synchronously (probably unnecessarily), and you'd only be potentially harming yourself by disabling OCSP.

Though, I am often frustrated FLOSS desktops and Windows don't allow the behavior I want—maybe this is just cultural.


How about it's totally ineffective? OCSP is pointless if you "soft fail" when the OCSP server can't be reached. [1]

This is why Chrome disabled OCSP by default all the way back in the 2012-2013 era. Not to mention the performance cost of making all HTTPS connections wait for an OCSP lookup. [2]

[1]: https://www.imperialviolet.org/2012/02/05/crlsets.html

[2]: https://arstechnica.com/information-technology/2012/02/googl...


That's why there's OCSP stapling and OCSP Must-Staple. Ever seen an nginx server fail an HTTPS connection exactly once after rotating the certificate? That's nginx lazily fetching the OCSP response from upstream for stapling purposes.
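For reference, a rough sketch of what turning on stapling looks like in an nginx TLS server block (standard nginx directives; the chain path and resolver are placeholders):

  ssl_stapling on;
  ssl_stapling_verify on;
  ssl_trusted_certificate /etc/nginx/ssl/chain.pem;  # issuer chain used to verify the stapled response
  resolver 1.1.1.1;                                  # nginx needs a resolver to reach the OCSP responder

With this, nginx fetches and caches the OCSP response itself and sends it during the handshake, so clients don't have to make their own lookup.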

Well, security starts from the user. If you're not mindful of what websites you visit, or what files/apps you download and run, there's no OCSP or anything else there to save you.

OCSP enabled or not, you're still one website click away from being pwned to oblivion, giving full control to the hacker – which, of course, is inevitable to an extent, since bugs always find their way into software.

So why not make it easy to disable?


Well, are you going to manually look up certificate revocations yourself? This necessarily requires a network lookup—you can't just glance at the certificate. What's the benefit of disabling this functionality that actively alerts you to revocations?

> Well, security starts from the user. If you're not mindful of what websites you visit, or what files/apps you download and run, there's no OCSP or anything else there to save you.

Sure, but we're discussing good-faith security here. Presumably if people complain about a missing feature they can envision using it. The scenario here is not visiting a shady website and doing something stupid; the scenario here is something like a man-in-the-middle attack using a revoked certificate, which would by definition be difficult for the end-user to detect.

> So why not make it easy to disable?

Because then people would disable it for no discernable good effect.

I mean let me be clear, if you're a security researcher you can just modify your own HTTP stack, run a VM, control the hardware, whatever. This isn't a blocker to investigating HTTPS reactions sans OCSP—this is about denying secure connections when they've publicly revoked the cert used to sign the connection. The only reason this is even considered a discrete feature is that most people have never written an OCSP request in order to then trust an HTTPS server—you're just opening yourself up to be misled without even realizing this (and this goes for most of my very network-stack-aware coworkers).

If you're in a browser, you want the browser to be using best practice security, which necessarily includes OCSP. If you know what you're doing this is trivial to bypass.


Feature-removal has been the most aggravating part of my Mac life for the past several years. Admittedly I tend to use unusual features, but it's just another PITA when they go away.

Not sure they have removed anything here; rather, they added something.

What happens if you edit /private/etc/hosts to point ocsp.apple.com to 0.0.0.0 and flush the DNS cache?

This seems like an interesting line of inquiry.

AIUI doing what you said would permit the network request to proceed, and it would fail because nothing is listening on port 80. [1] We already know that the phone-home bails out when there's no network connection, so perhaps that code also bails out on connection failure?

Alternatively, is there some way to make DNS lookup itself fail for ocsp.apple.com?

Last resort, if we know how to fake the response, running a dummy server listening on localhost would be faster than allowing the request to go over the internet.

[1] Empirically, `curl http://0.0.0.0` yields a connection failure. I think I know that 0.0.0.0 is used in a listening context to mean "listen on all interfaces" but tbh I don't really know what it means in a sending context. Maybe someone can educate me?
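For anyone who wants to try it, a sketch of that experiment (standard macOS commands; whether the check then soft-fails is exactly the open question):

  echo "0.0.0.0 ocsp.apple.com" | sudo tee -a /etc/hosts
  sudo dscacheutil -flushcache
  sudo killall -HUP mDNSResponder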


Sending to 0.0.0.0 will fail immediately. This differs from sending to 127.0.0.0/8, which may connect to a server on the local machine.

> Sending to 0.0.0.0 will fail immediately.

Right, and as far as we know that exception might be caught in the same way as "your computer doesn't have any network connection at all" is caught. Or would those be likely to generate the same exception? Either way, there's a chance that it would result in exec gracefully and quickly not doing the blocking phone-home, isn't there?


0.0.0.0 is non-routable and generally only valid as a source, not a destination.

I think it is fairly likely that your system would not work at all.

I believe it's just Base64 encoded DER information, based on the code that seems to be similar: https://github.com/apple-open-source-mirror/Security/blob/70...

Yes, that base64 decodes to:

  OCSP Request Data:
    Version: 1 (0x0)
    Requestor List:
        Certificate ID:
          Hash Algorithm: sha1
          Issuer Name Hash: 3381D1EFDB68B085214D2EEFAF8C4A69643C2A6C
          Issuer Key Hash: 5717EDA2CFDC7C98A110E0FCBE872D2CF2E31754
          Serial Number: 7D86ED91E10A66C2
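A sketch of reproducing that decode straight from the URL path component (translate the URL-safe characters back, base64-decode, then dump the DER structure; the asn1parse output shows the same issuer hashes and serial number as above):

  echo 'ME4wTKADAgEAMEUwQzBBMAkGBSsOAwIaBQAEFDOB0e_baLCFIU0u76+MSmlkPCpsBBRXF+2iz9x8mKEQ4Py+hy0s8uMXVAIIfYbtkeEKZsI=' \
    | tr '_-' '/+' \
    | base64 -D \
    | openssl asn1parse -inform DER -i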

I can't edit anymore, but it seems like the OCSP request could potentially be a red herring, just checking the cert for the next request to https://api.apple-cloudkit.com/. It's worth looking into further!

I'm surprised nobody mentioned that Windows Defender does something very similar (checking for never-seen-before binaries at runtime, uploading them to Microsoft servers, then running them there) : https://news.ycombinator.com/item?id=21180019

God, this shit makes me laugh. Why are they doing this.

But regarding Edit2: your hash is some sort of base64.

     let str = "ME4wTKADAgEAMEUwQzBBMAkGBSsOAwIaBQAEFDOB0e_baLCFIU0u76+MSmlkPCpsBBRXF+2iz9x8mKEQ4Py+hy0s8uMXVAIIfYbtkeEKZsI="

Then we see seemingly random gaps in the alphabet used, which is not so weird, because not every character will be used in every string:

     Prelude Data.List> map head $  group $ sort $ str
     "+0246789=ABCDEFGIKLMOPQRSTUVXYZ_abefghiklmpstuwxyz"
If we fill these in:

      Prelude Data.List> let xs = "+0123456789=ABCDEFGHIJKLMNOPQRSTUVWXYZ_abcdefghijklmnopqrstuvwxyz"
      Prelude Data.List> length xs
      65
So, base64 with some non-standard symbols. I don't know what standard base64 is supposed to look like, to be honest, so perhaps it is standard base64. The = is definitely padding.

It decodes cleanly as base64.

Does this mean you can't run a custom shell script without an internet connection?

If the connection fails it goes ahead and grants permission.

This isn't specific to the article, but another interesting place to look at system activity on macOS is the Console.

https://support.apple.com/en-ca/guide/console/cnslbf30b61a/m...


Were you able to MITM the api.apple-cloudkit.com connection? I tried with MITMProxy but ran into a client error, which made me think they were doing cert pinning.

If you did get it to work could you paste the logs somewhere?


Yes but it looks like there is no actual session, at least for shell scripts that don't have an app bundle ID. There is just an HTTP CONNECT, TLS negotiation, then nothing.

> a degraded user experience, as the first time a user runs a new executable, Apple delays execution while waiting for a reply from their server.

The way to avoid this behavior is to staple the notarization ticket to your bundle (or dmg/pkg), i.e. "/usr/bin/stapler staple <path>." Otherwise, Gatekeeper will fetch the ticket and staple it for the user on the first run.

(I'm the author of xcnotary [1], a tool to make notarization way less painful, including uploading to Apple/polling for completion/stapling/troubleshooting various code signing issues.)

[1] https://github.com/akeru-inc/xcnotary
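As a usage example (MyApp.dmg is a placeholder, and the item must already have been notarized for stapling to succeed), the built-in stapler tool can both attach and verify the ticket:

  xcrun stapler staple MyApp.dmg     # attach the notarization ticket to the disk image
  xcrun stapler validate MyApp.dmg   # confirm a valid ticket is attached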


Xcode (the UI) is able to bypass GateKeeper checks for things it builds.

The "Developer Tool" pane in System Prefs, Security, Privacy is the same power. Drag anything into that list you'd like to grant the same privilege (such as xcodebuild). This is inherited by child processes as well.

The point of this is to avoid malware packing bits of Xcode with itself and silently compiling itself on the target machine, thus bypassing system security policy.


Reminds me of the AV exception folder our corporate IT created for developers. Soon absolutely everything developers needed or created was installed into that folder. Applications, IDEs, you name it.

Guilty as charged. I try to keep it to an absolute minimum, like the Docker data dir and my IDE. With that I can at least use my machine.

Otherwise this macOS notarization, along with possible CPU heating issues from left-side Thunderbolt usage and corporate AV scanning, makes my machine next to useless.


Putting Terminal (and your favorite text editor) in this category and in "Full Disk Access" will change your life.

How does "Full Disk Access" help?

You can browse Time Machine backup directory trees from the CLI again.

Yes, falling victim to ransomware is definitely lifechanging if you don’t have good backups.

That is a non-sequitur.

It's not; they are stating that if you bypass these security checks, you open the machine up to ransomware.

So since these permissions apply to process trees, what happens if you put launchd in there?

The computer will probably hang while it tries to solve the chicken-egg problem.

Isn't launchd Mac's ‘init’? I.e. run before anything else.


Yes, and that's the point — everything you run will theoretically inherit the permission from it.

Can you advise on how to make the "Developer Tool" panel in "System Prefs, Security, Privacy" appear if it is not present? I can't find a way: https://stackoverflow.com/questions/60176405/macos-catalina-...


Thanks for the link. Tried it, but that did not work

GateKeeper only triggers the check for things downloaded from the internet. IOW, it checks if your binary has a quarantine flag attached via an extended attribute.

That is not correct starting with Catalina.

How do I get a "Developer Tool" pane in System Prefs? Do I have to install Xcode? I would really rather not.


This is life-changing. Thank you!

What did you notice?

> The way to avoid this behavior is to staple the notarization ticket to your bundle (or dmg/pkg)

Maybe in some cases, but the article says "even if you write a one line shell script and run it in a terminal, you will get a delay!"

Shell scripts don't come in bundles. I don't think this kind of stapling is possible for them? I don't think it'd be reasonable to expect users to do this anyway.


The Gatekeeper behavior is specific to running things from Finder (not Terminal), and only if you downloaded it via a browser that sets the com.apple.quarantine xattr.

Two posts from Apple dev support (Cmd+F "eskimo") describe this in more detail.

https://forums.developer.apple.com/thread/127709

https://forums.developer.apple.com/thread/127694
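To check whether a given file is even subject to this, a small sketch of inspecting and clearing the quarantine attribute (the .dmg path is just a stand-in for something a browser downloaded):

  xattr -p com.apple.quarantine ~/Downloads/SomeApp.dmg   # print the quarantine flag if it is set
  xattr -d com.apple.quarantine ~/Downloads/SomeApp.dmg   # remove it, so Gatekeeper no longer treats it as downloaded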


I recently learned that `xattr -cr path/to/my.app` fixes the “this App is damaged, would you like to move it to the trash” error you get when you copy an app from one Mac to another.

That might be the Windows-iest feature of OSX I've ever heard of.

It seems macOS is going downhill fast these days.

What would that mean?

> The Gatekeeper behavior is specific to running things from Finder (not Terminal), and only if you downloaded it via a browser that sets the com.apple.quarantine xattr.

The article says the described problem isn't limited in this way:

> This is not just for files downloaded from the internet, nor is it only when you launch them via Finder, this is everything. So even if you write a one line shell script and run it in a terminal, you will get a delay!


If you read the comments of the article and do your own testing, you will find that reality appears to be more complicated than the article suggests. Users have shown using both timing and wireshark that the shell scripts do not appear to be triggering notarization checks.

Quinn The Eskimo at Apple's forums is a 10x support engineer, his posts have helped me fix dozens of problems.

Unless somebody took over his name he’s been at Apple for almost 25 years, and was already being interviewed as such 20 years ago (http://preserve.mactech.com/articles/mactech/Vol.16/16.06/Ju...)

His site (http://www.quinn.echidna.id.au/Quinn/WWW/) supports its claim “I'm not a great believer in web” :-)


It's interesting to see a time when Apple seemed to allow employees to have side projects…

He needs to be, because Apple Developer Technical Support is chronically understaffed.

This is the way things worked prior to Catalina but is no longer the case.

I mean, when I’m developing in a compiled language with the workflow edit code -> compile -> run (with forced stapling), changing it to edit code -> compile -> staple -> run doesn’t make it any less slow...

An update: flat out denying network access to syspolicyd using Little Snitch could cut down on the delay. (Yes, syspolicyd does send a network request to apple-cloudkit.com for every single new executable. Denying its access to apple-cloudkit.com only isn't sufficient either since it falls back to IP address directly.) Note that this might not be a great idea, and it still has nonzero cost — a network request has to be made and denied by Little Snitch.

Here's my benchmarking script:

  #!/bin/zsh
  tmpfile=$(mktemp)
  cat >$tmpfile <<EOF
  #!/bin/sh
  echo $RANDOM  # Use a different script each time in case it makes a difference.
  EOF
  chmod +x $tmpfile
  setopt xtrace
  time ( $tmpfile )
  time ( $tmpfile )
  unsetopt xtrace
  rm -f $tmpfile
If your local terminal emulator is immune with "Developer Tools" access (interestingly, toggling it off doesn't bring back the delay for some reason), you should be able to reproduce the delay over ssh.

I can repro this locally as well. It's interesting that it seems inconsistent with Apple's docs on when Gatekeeper should fire, as running stuff locally without distributing/downloading is somewhat out of scope for notarization.

Reached out about this to Apple dev support, hope to get more insight.


> interestingly, toggling it off doesn't bring back the delay for some reason

Noticed the same; it should come back if you disable it and reboot.


Notarization/stapling/etc. is for distribution only, not generally part of your dev workflow.

But TFA and my personal experience do point to a noticeable delay after each recompile in dev workflows, and TFA claims this is due to notarization checks... So I guess I’m confused and you’re talking about something else?

How does the Mac distinguish a dev workflow from a normal workflow?

When you use Xcode you have different compilation options.

I'm confused. Does the Mac send the executable to Apple servers, or just the hash?

Just the hash.

The way to avoid this behavior is to not buy a machine from a company that actively hates its users.

In our company many of us have similar issues. I have always loved OS X, but this time it is driving me crazy. I thought the issue was some sort of company antivirus/firewall, or it could even be a combination of that and this issue (maybe my VPN + the path through the company firewall is what magnifies the issue in this post). The thing is that some commands take 1 second, while others take 2 minutes or even more. Actually, some commands slow down the computer until they are finished (more likely, until they just decide to start).

For example, I can run "terraform apply" and it could take up to 5 minutes to start, leaving my computer almost unusable until it runs. The weird thing is that this only happens sometimes. In some cases, I restart the laptop and it starts working a little bit faster, but the issue comes back after some time.

For a few months now I've been running every command from a VM in a remote location, since I am tired of waiting for my commands to start.

I have a macbook air from 2013 which never had this issue.

Any easy fix that I could test? Disconnecting from the internet is not an option. Disabling SIP could be tried, but I think I already did and didn't seem to fix it, plus it is not a good idea for a company laptop.

Don't we have some sort of hosts file or firewall that we can use to block or fake connectivity to Apple's servers?


IIRC the big thing that changed with 10.15 for CLI applications is that BSD-userland processes (i.e. ones that don't go through all the macOS Frameworks, but just call libc syscall wrappers like fopen(2)) now also deal with sandboxing, since the BSD syscall ABI is now reimplemented in terms of macOS security capabilities.

Certain BSD-syscall-ABI operations like fopen(2) and readdir(2) are now not-so-fast by default, because the OS has to do a synchronous check of the individual process binary's capabilities before letting the syscall through. But POSIX utilities were written to assume that these operations were fast-ish, and therefore they do tons of them, rather than doing any sort of batching.

That means that any CLI process that "walks" the filesystem is going to generate huge amounts of security-subsystem request traffic; which seemingly bottlenecks the security subsystem (OS-wide!); and so slows down the caller process and any other concurrent processes/threads that need capabilities-grants of their own.

To find a fix, it's important to understand the problem in fine detail. So: the CLI process has a set of process-local capabilities (kernel tokens/handles); and whenever it tries to do something, it first tries to use these. If it turns out none of those existing capabilities let it perform the operation, then it has to request the kernel look at it, build a firewall-like "capabilities-rules program" from the collected information, and run it, to determine whether it should grant the process that capability. (This means that anything that already has capabilities granted from its code-signed capabilities manifest doesn't need to sit around waiting for this capabilities-ruleset program to be built and run. Unless the app's capabilities manifest didn't grant the specific capability it's trying to use.)

Unlike macOS app-bundles, regular (i.e. freshly-compiled) BSD-userland executable binaries don't have a capabilities manifest of their own, so they don't start with any process-local capabilities. (You can embed one into them, but the process has to be "capabilities-aware" to actually make use of it, so e.g. GNU coreutils from Homebrew isn't gonna be helped by this. Oh, and it won't kick in if the program isn't also code-signed, IIRC.)

But all processes inherit their capabilities from their runtime ancestors, so there's a simple fix for the case of running CLI software interactively: grant your terminal emulator the capabilities you need through Preferences. In this case, the "Full Disk Access" capability. Then, since all your CLI processes have your terminal emulator as a runtime ancestor-process, all your CLI processes will inherit that capability, and thus not need to spend time requesting it from the security subsystem.

Note that this doesn't apply to BSD-userland executable binaries which run as LaunchDaemons, since those aren't being spawned by your terminal emulator. Those either need to learn to use capabilities for real; or, at least, they need to get exec(2)ed by a shim binary that knows how.

-----

tl;dr: I had this problem (slowness in numerous CLI apps, most obvious as `brew upgrade` suddenly taking forever) after upgrading to 10.15 as well. Granting "Full Disk Access" to iTerm fixed it for me.
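A quick way to see whether the grant really is inherited, as a sketch (~/Library/Mail is just one example of a TCC-protected location):

  # From a terminal *with* Full Disk Access this lists silently;
  # without it you get "Operation not permitted" (or a consent prompt).
  ls ~/Library/Mail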


> IIRC the big thing that changed with 10.15 for CLI applications is that BSD-userland processes (i.e. ones that don't go through all the macOS Frameworks, but just call libc syscall wrappers like fopen(2)) now also deal with sandboxing, since the BSD syscall ABI is now reimplemented in terms of macOS security capabilities.

Is this actually new in macOS 10.15? I seem to recall this being a thing ever since sandboxing was a thing, even all the way back to when it was called Seatbelt.

> That means that any CLI process that "walks" the filesystem is going to generate huge amounts of sandboxd traffic, which bottlenecks sandboxd and so slows down the caller process.

Is this not implemented in the kernel as an extension? I thought the checks went through MAC framework hooks. Doesn't sandboxd just log access violations when told to do so by the Sandbox kernel extension?

> Unlike macOS app-bundles, regular BSD-userland executable binaries don't have a capabilities manifest of their own, so they don't start with any process-local capabilities (with some interesting exceptions, that I think involve the binary being embedded in the directory-structure of a system framework, where the binary inherits its capabilities from the enclosing framework.)

I am fairly sure you can just embed a profile in a section of your app's binary and call the sandboxing Mach call with that…


> I seem to recall this being a thing ever since sandboxing was a thing, even all the way back to when it was called Seatbelt.

Maybe you're right; I'm not sure when they actually put the Seatbelt/TrustedBSD interpreter inline in the BSD syscall code-path. What I do know is that, until 10.15, Apple tried to ensure that the BSD-userland libc-syscall codepath retained mostly the same behavioral guarantees as it did before they updated it, in terms of worst-case time-complexities of syscalls. Not sure whether that was using a short-circuit path that went around Seatbelt or used a "mini-Seatbelt" fast path; or whether it was by hard-coding a pre-compiled MAC ruleset for libc calls that only relied upon the filesystem flag-bits, and so never had to do anything blocking during evaluation.

Certainly, even as of 10.12, BSD-userland processes weren't immune to being exec(2)-blocked by the quarantine xattr. But that may have been a partial implementation (e.g. exec(2) going through the MAC system while other syscalls don't.) It's kind of opaque from the outside. It was at least "more than nothing", though I'm not sure if it was "everything."

One thing that is clear is that, until 10.15, BSD processes with no capabilities manifest, still had the pretty much exactly the same default set of privileges that they had before capabilities, which means "almost everything" (and therefore they almost never needed to actually hit up the security system for more grants.) I guess all Apple really needed to have done in 10.15 to "break BSD", was to introduce some more capabilities, and then not put them in the default/implicit manifest.

I suppose what actually happened in 10.15 can be determined easily-enough from the OSS code that's been released. :)

> Is this not implemented in the kernel as an extension? // I am fairly sure you can just embed a profile in a section of your app's binary and call the sandboxing Mach call with that…

Yeah, sorry, you're right; updated my assertions above. I'm not a kernel dev; I've just picked up my understanding of this stuff from running head-first into it while trying to do other things!


It's a new behavior that doing 'find ~' will trigger a MacOS (GUI) permissions warning dialog when `find` tries to access your photos directory, contacts file, etc.

That is new, but I believe the groundwork for that was mostly laid in 10.14 and is also mostly in the kernel.

Why would sandboxing be slower?

They are definitely doing something way too slow.


Apple replaced the very simple (i.e. function fits in a cache line; inputs fit in a single dword) BSD user/group/other filesystem privileges system, with a Lisp interpreter (or maybe compiler? not sure) executing some security DSL[1][2].

[1] https://wiki.mozilla.org/Sandbox/OS_X_Rule_Set

[2] https://reverse.put.as/wp-content/uploads/2011/09/Apple-Sand...

This capabilities-ruleset interpreter is what Apple uses the term "Gatekeeper" to refer to, mostly. It had already been put in charge of authorizing most Cocoa-land system interactions as of 10.12. But the capabilities-ruleset interpreter wasn't in the code-path for any BSD-land code until 10.15.

A capabilities-ruleset "program" for this interpreter can be very simple (and thus quick to execute), or arbitrarily complex. In terms of how complex a ruleset can get—i.e. what the interpreter's runtime allows it to take into consideration in a single grant evaluation—it knows about all the filesystem bitflags BSD used to, plus Gatekeeper-level grants (e.g. the things you do in Preferences; the "com.apple.quarantine" xattr), plus external system-level capabilities "hotfixes" (i.e. the same sort of "rewrite the deployed code after the fact" fixes that GPU makers deploy to make games run better, but for security instead of performance), plus some stuff (that I don't honestly know too much about) that can require it to contact Apple's servers during the ruleset execution. Much of this stuff can be cached between grant requests, but some of it will inevitably have to hit the disk (or the network!) for a lookup—in the middle of a blocking syscall.

I'm not sure whether it's the implementation (an in-kernel VM doesn't imply slowness; see eBPF) or the particular checks that need to be done, but either way, it adds up to a bit of synchronous slowness per call.

The real killer that makes you notice the problem, though, isn't the per-call overhead, but rather that the whole security subsystem seems to now have an OS-wide concurrency bottleneck in it for some reason. I'm not sure where it is, exactly; the "happy path" for capabilities-grants shouldn't make any Mach IPC calls at all. But it's bottlenecked anyway. (Maybe there's Mach IPC for audit logging?)

The security framework was pretty obviously structured to expect that applications would only send it O(1) capability-grant requests, since the idiomatic thing to do when writing a macOS Cocoa-userland application, if you want to work with a directory's contents, is to get a capability on a whole directory-tree from a folder-picker, and then use that capability to interact with the files.

Under such an approach, the sandbox system would never be asked too many questions at a time, and so you'd never really end up in a situation where the security system is going to be bottlenecked for very long. You'd mostly notice it as increased post-reboot startup latency, not as latency under regular steady-state use.

Under an approach where you've got many concurrent BSD "filesystem walker" processes, each spamming individual fopen(2)-triggered capability requests into the security system, though, a failure-to-scale becomes very apparent. Individual capabilities-grant requests go from taking 0.1s to resolve, to sometimes over 30s. (It's very much like the kind of process-inbox bottlenecks you see in Erlang, that are solved by using process pools or ETS tables.)

Either Apple should have rethought the IPC architecture of sandboxing in 10.15, but forgot/deprioritized this; or they should have made their BSD libc transparently handle "push down" of capabilities to descendant requests, but forgot/deprioritized that.


The Scheme interpreter only runs when compiling a sandbox. It's compiled into a simple non-Turing-complete bytecode, and that's what's consulted on every syscall. This has been the case since… 10.5 or something. It's always been on the path for BSD code. And Cocoa operations lower to BSD syscalls anyway. There's no system for them to get a "capability" for a directory tree; on the contrary, file descriptors ought to be able to serve as capabilities, but the Sandbox kext stupidly computes the full path for every file that's accessed before matching it against a bunch of regexes. This too has been the case as long as Sandbox has existed.

There is a bunch of new stuff in 10.15, mostly involving binary execs (and I don't understand all of it), but I'm pretty sure it doesn't match what you're describing.


> Lisp interpreter (or maybe compiler? not sure)

I believe it is actually a Scheme dialect, and I would be very surprised if it is not compiled to some internal representation upon load.

> This capabilities-ruleset interpreter is what Apple uses the term "Gatekeeper" to refer to, mostly.

I am fairly sure Gatekeeper is mostly just Quarantine and other bits that prevent the execution of random things you download from the internet.


In the Apple Sandbox Guide v1.0 [1], it mentions Dionysus Blazakis' paper [2] presented at Blackhat DC 2011.

In the latter, Apple's sandbox rule set (custom profiles) is called SBPL - Sandbox Profile Language - and is described as a "Scheme embedded domain specific language".

It's evaluated by libSandbox, which contains TinyScheme! [3]

From what I could understand, the Scheme interpreter generates a blob suitable for passing to the kernel.

---

[1] https://reverse.put.as/wp-content/uploads/2011/09/Apple-Sand...

[2] https://media.blackhat.com/bh-dc-11/Blazakis/BlackHat_DC_201...

[3] http://tinyscheme.sourceforge.net/home.html
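To give a feel for the language, here is a tiny illustrative profile using the documented SBPL forms (not one of Apple's real profiles, just a sketch):

  (version 1)
  (deny default)
  ; allow reading system libraries and writing only under /private/tmp
  (allow file-read* (subpath "/usr/lib"))
  (allow file-write* (subpath "/private/tmp"))

Profiles like this can still be applied to a process with the long-deprecated sandbox-exec -f profile.sb <command>.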


> Much of this stuff can be cached between grant requests, but some of it will inevitably have to hit the disk (or the network!) for a lookup—in the middle of a blocking syscall.

Running any kind of I/O during a capability check is a broken design.

There is no reason to hit the disk (it should be preloaded), much less the network (such a design will never work if offline).


A command like `terraform` shouldn't trigger the check because the quarantine system is bypassed altogether when you download and extract an archive. Maybe this is a red herring and your initial gut inkling is correct.

Try sampling the process as it starts; I doubt your issue is the one shown here.

> For example, I can run "terraform apply" and it could take up to 5 minutes to start, leaving my computer almost unusable until it runs.

On a clean Catalina install this does not happen. Does “terraform version” have the same delay? If not, check your remote configuration - maybe run with TF_LOG=trace. Terraform Cloud will definitely highlight the inherent performance problems of using a VPN.


It is worth noting that `terraform version` connects to HashiCorp’s own checkpoint service by default so this may not be the best test.

Maybe try `docker run -i -t -v "$(pwd)":/project hashicorp/terraform:light apply /project/thing.tf` (if your project's Terraform version is the latest)?

Adding network calls to syscalls like exec() is utterly insane. This road can lead to bricked laptops where you can't run anything to fix it (imagine an unexpected network error that the code doesn't handle properly). And crackers will just use ways to overwrite running instruction text to avoid the exec().

The comments on the article are annoying: it's good that there's a minimal way to reproduce, but please, use some further debugging like tcpdump (it still exists on macOS, right?). The last time I summarized macOS debugging was https://www.slideshare.net/brendangregg/analyzing-os-x-syste...

I'd also stress test it: generate scripts in a loop that include random numbers and execute them.
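Something along those lines, as a sketch (a fresh temp file and a fresh random payload each iteration, timed with /usr/bin/time to avoid shell builtins):

  for i in $(seq 1 20); do
    f=$(mktemp /tmp/stress-XXXXXX)
    printf '#!/bin/sh\necho %s\n' "$RANDOM" > "$f"   # new script with a random payload
    chmod +x "$f"
    /usr/bin/time "$f"                               # time the first (and only) execution
    rm -f "$f"
  done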


There is no excuse for this except for sheer, utter incompetence. Everyone involved in writing and shipping this should be ashamed of themselves.

This is what I scrolled all the way down this thread for: to see if anyone thinks this is a good design/security decision on Apple's part. I'm trying to understand what the reasoning is for this particular decision and whether it actually makes the OS more secure in any meaningful way. Or does it actually just degrade performance with very limited benefits? Are there any real benefits to this vs. the current security design in popular desktop Linux distros at this point?

Couldn't this have been a business decision? Not about security? (just what they say?)

To make non-App-store apps annoyingly unusable, so the App store will sell more apps, instead of people downloading in other ways?

Just like Apple cripples the Safari browser and PWA apps.

Long term, maybe Apple wants to be able to remote-forbid apps if Apple is developing their own competing app?

Meanwhile, most developers working at Apple probably understand this and don't like it? Maybe the developers even feel happy about people here on HN being disappointed, and think "now the business people at Apple will notice that this causes disappointment"?


Most of the apps that sell well originate from a developer solving a need they had, on the system they were using.

If this drives developers from OSX to other OSes, chances are they will develop apps for those OSes first.

Apple is too big to fail at this point, but driving developers away from your platform isn't a very clever strategy. You never know when you are going to hit a tipping point, and by the time you notice and people have stopped using macOS for development, it's already too late.

It took me ~150 hours to migrate to Linux, but my user and developer experience on Linux is much better than on macOS (emacs daemon "just works"!!!), so after all that work I wouldn't consider switching back to macOS in the next 5 years at least. I had a MacBook Air 2012, and because Apple still hasn't released a laptop that isn't a downgrade from that in some sense (keyboard, MagSafe, ...), I went with a ThinkPad instead. Tiny details, like having a webcam that doesn't suck, now prevent me from going back to macOS.


I don't think the people at Apple are actively trying to make non-App Store apps unusable because they want to make more money from the App Store or anything. It's just that they want code to pass through them, and as a by-product, code that has been vetted less, or that does things that could potentially be abused, is made more annoying to run. Such a change is divisive, as you may have guessed.

That vetting will come at the cost of 30% of money paid for your software and any money earned within the software.

It checks that executables have been notarized by Apple? I can't say I really think notarization is great, but I think it's clear from their perspective how it would be beneficial?

Sure. But as Brendan Gregg pointed out in his comment - doing this at the level of exec() on a UNIX-like OS is ... a questionable technical choice to say the least.

What’s the Linux equivalent of “notarization”? I’m not sure. Of course there’s probably more than one answer to that - let’s just taking signing packages as an example.

In theory Apple could put their weight behind vetting some of the popular open source packages perhaps? Or delegate that to the maintainers of those repositories and make them trusted? Like homebrew, for example (maybe a poor example, but you see how I’m trying to compare this with Linux...)

This is after all, what actually makes macOS useful to people on the command line 99% of the time, anyway.

So anyway, I agree on the surface it seems like this might be beneficial to Apple, but it doesn’t appear to be well considered.

They could invest more time in better sandbox and/or container-type features that let people define some of their own more granular security boundaries. But they aren't, I guess? What are they doing here?


Apple OSes were never about the CLI; pre-OS X you didn't have a CLI as a standard OS feature.

Selling the UNIX underpinnings was just a marketing move for those willing to betray GNU/Linux and BSD in the name of a better laptop experience, instead of helping OEMs sell their stuff.

Something that NeXT also did against the Sun workstation market.

On the Linux side, this kind of security measure never works, because the moment someone introduces something like this, the distribution gets forked.

It works on ChromeOS and Android because it hardly matters to userspace that Linux is the actual kernel; Google could embark (and actually is embarking) on a kernel replacement project and most stuff would just work.


Watching the notarization video from WWDC last year they explicitly said it wouldn’t affect command line apps.

Hey, malevolence can also play into this. Don't chalk everything up automatically to incompetence. /s

There’s going to be a big exodus of open source developers to Linux-powered platforms instead of the standard Mac laptop because of this ridiculousness.

> the standard Mac laptop

There is nothing standard about a Mac laptop, both technically and in market share.


Well, I'd say 90% of the computers I've seen at the last 10 confs I've attended were Macbook Pros

https://hackernoon.com/why-do-developers-run-macs-9ad81d58d1...


Look outside the US.

At Silicon Valley technology companies? A Mac is generally the computer that you're likely to get.

Silicon Valley is a very small dot in the global scale.

This is happening at my company already because docker performance on Macs is terrible.

On the one hand, of course it is, because Macs are slow at running Linux stuff in the same way that Linux is slow at running non-Linux stuff.

On the other hand, Apple should decide if they care about Docker performance. The answer seems to be "a little" (Hypervisor.framework) but much less than, say, Microsoft.

Apple doesn't talk about their future plans. Today we see stagnation, YET with spikes of exotic ideas (e.g. L4, which would permit efficient L4 Linux).

Per Apple's style, a big kernel change on the Mac side would absolutely be tied to a hardware change, to break things once and not twice. Build a new Mac with a Linux-friendly kernel (perhaps Linux, perhaps modified L4, or something new), put it on their beastly ARM CPUs, and I'm drooling.

Then again I don't work at Apple.


Is that slowness possibly related to the OP's issue? And possibly might benefit from the same workarounds posted here?

> And crackers will just use ways to overwrite running instruction text to avoid the exec().

This would require breaking your code signature and as such requires extra entitlements in the hardened runtime.


That's not quite correct. If network access is unavailable or fails then the exec is allowed. The behavior has been improved over time, putting stricter limits on how long the check is allowed to take before giving up.

The Mac remains a Mac: if you turn off SIP it also disables this behavior. You are free to choose less security for more convenience if that is your preference.



…with everything to do with the sandbox left out.

Fair point. These tarballs may be, err, editorialized.

If exec is blocking in the kernel on IPC to some daemon, that should be observable (e.g. Instruments with kernel traces enabled).


Yeah, I'm sure a good spindump would be able to find what the code is blocked on. Sadly I run with SIP disabled so that I can attach to things, which means I probably can't reproduce the issue…

Most of the important parts are left out.

at this point opensource and apple are sort of on life support.


Well NFS and SMB exist, you can exec() on such mounts.

> This is not just for files downloaded from the internet, nor is it only when you launch them via Finder, this is everything. So even if you write a one line shell script and run it in a terminal, you will get a delay!

> Apple’s most recent OS where it appears that low-level system API such as exec and getxattr now do synchronous network activity before returning to the caller.

Can anyone confirm this? Because honestly this is just terrifying. I don't think even Windows authorises every process from a server. This doesn't sound good for either privacy or speed.


There are two new Security/Privacy Settings that I just noticed last night.

"Full Disk Access" to allow a program to access any place on your computer without a warning. A few programs requested this, so it looks like it's been around for a while.

The other one is "Developer Tools" and it looks pretty new. The only application requesting it is "Terminal". This "allows app to run software locally that do not meet the system's security policy". So, my reading of this is that in Terminal, you could run scripts that are unsigned and not be penalized speed-wise.


I don't see it on macOS 10.15.4 (19E287). The full list of categories on my Privacy tab:

  - Location Services
  - Contacts
  - Calendars
  - Reminders
  - Photos
  - Camera
  - Microphone
  - Speech Recognition
  - Accessibility
  - Input Monitoring
  - Full Disk Access
  - Files and Folders
  - Screen Recording
  - Automation
  - Advertising
  - Analytics & Improvements
Granted I don't typically use Terminal.app (iTerm 2 user), so I launched terminal and did some privileged stuff. Had to grant Full Disk Access to, say, `ls ~/Library/Mail`, but "Developer Tools" never popped up.

Are you running a beta build or something?

---

Update: Okay, I checked on my other machine and that one does have it (Terminal is listed but disabled by default). What in the actual fuck?!?


You can make the category appear and put Terminal in it with this command:

  sudo spctl developer-mode enable-terminal


It'd be nice if this was documented somewhere :/

I was going to be that guy and say "man spctl", but that usage isn't listed there. If you run spctl with no arguments, it will tell you, however. The man pages on macOS really do leave something to be desired.

I don't see it on my machine. Do you happen to have System Integrity Protection disabled?

No, SIP is fully enabled on both the machine with the Developer Tools category and the one without.

Interestingly, I rebooted the machine without the category after some benchmarking and experimentation with syspolicyd (see https://news.ycombinator.com/item?id=23274903), and after the reboot the category mysteriously surfaced... Not sure what triggered it. Launching Xcode? Xcode and the CLT were both installed on the machine, but I'm not sure when I last launched Xcode on it. Another possible difference I can think of: the machine without the category was an in-place upgrade, while the other one IIRC was a clean install of 10.15.

In the worst case scenario, you can probably insert into the TCC database (just a SQLite3 database, located at ~/Library/Application Support/com.apple.TCC/TCC.db) directly:

  INSERT INTO access VALUES('kTCCServiceDeveloperTool','com.apple.Terminal',0,1,1,NULL,NULL,NULL,'UNUSED',NULL,0,1590165238);
  INSERT INTO access VALUES('kTCCServiceDeveloperTool','com.googlecode.iterm2',0,1,1,NULL,NULL,NULL,'UNUSED',NULL,0,1590168367);
(Should be pretty self-explanatory. The first entry is for Terminal.app, the second entry is for iTerm 2.)

Back up, obviously. I'm not on the hook for any data loss or system bricking.
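For reference, a sketch of doing that from the shell with the stock sqlite3 tool. The epoch timestamp is just an example value and the column layout depends on your macOS build's TCC schema, so treat it as illustrative; writing to TCC.db may itself require Full Disk Access for your terminal (or SIP disabled, as the reply below asks):

  TCC_DB="$HOME/Library/Application Support/com.apple.TCC/TCC.db"
  cp "$TCC_DB" "$TCC_DB.bak"   # back up first
  sqlite3 "$TCC_DB" "INSERT INTO access VALUES('kTCCServiceDeveloperTool','com.apple.Terminal',0,1,1,NULL,NULL,NULL,'UNUSED',NULL,0,1590165238);"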


> In the worst case scenario, you can probably insert into the TCC database

Does this not require disabling SIP?


Maybe you need Xcode, try running "mkdir /Applications/Xcode.app"

As mentioned in a reply to a sibling, Xcode has been installed (for like five years) on this machine, and launching it doesn't help. The next step would be to compile and run an application with it, which I haven't bothered.

I would expect checks for Xcode to go through xcselect rather than a simple directory check. Installing the command line tools (sudo xcode-select --install) might actually be a better idea to test this.

I thought the same, but actually this method worked for me when I wanted the Spotlight "Developer" option to show up (the CLT were already installed). I have the Developer panel under "Privacy" as well, even though I never installed Xcode on my machine.

Maybe if you ran Terminal.app once it would work?

(I'm also on 10.15.4 (19E287))


No, I played around with Terminal.app for quite a while already. Actually the category does show up on another machine of mine (see edit)... I suspected that maybe I never ran Xcode on the first machine since I upgraded to Catalina, so I launched Xcode, but again, no luck. I'm at a complete loss now.

Terminal actually gives an error if you poke into the top-level Library folder with Full Disk Access disabled, with no prompt to change it; I had to look on Stack Overflow for the solution.

I wonder what "Developer Tools" grants in practice. Clicking the (?) for viewing built-in help does not mention this particular setting, it skips right over it going from "Automation" above it to "Advertising" below it.

I believe it means the process will no longer check for the Quarantine xattr.
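For what it's worth, you can see whether a given file even carries that xattr with the stock xattr tool (the paths here are just examples):

  xattr -l /tmp/test.sh                                # list all extended attributes on the file
  xattr -p com.apple.quarantine ~/Downloads/some.dmg   # print the quarantine value, if set
  xattr -d com.apple.quarantine ~/Downloads/some.dmg   # remove it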

But the quarantine xattr has nothing to do with checking notarization?

via https://lapcatsoftware.com/articles/catalina-executables.htm..., I've added an entry in my /etc/hosts to block requests to api.apple-cloudkit.com:

    127.0.0.1 api.apple-cloudkit.com
    127.0.0.1 *.api.apple-cloudkit.com
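Note that /etc/hosts doesn't do wildcard matching, so only the exact-hostname line actually takes effect. To make the edit apply immediately, the usual macOS cache flush is:

  sudo dscacheutil -flushcache
  sudo killall -HUP mDNSResponder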

Full Disk Access was added in 10.14 (2018), so it's relatively new.

I'm using the Kitty terminal, and observed the script launch delay described in the blog post. After adding Kitty to "Developer Tools", the delay disappeared. Thanks!

Making this about speed is burying the lede. From a privacy and user-freedom perspective, it's horrifying.

Don't think so? Apple now theoretically has a centralized database of every Mac user who's ever used youtube-dl. Or Tor. Or TrueCrypt.


Richard Stallman's ideals have become a bit less crazy for me now...

Either you have the ability to control the software, or it controls you


I think coming to this realisation about Stallman's ideas (not the man, mind) is something that most rational computer users are bound to do. It happens at different times for different people, but I think people very rarely go back after that "Hang on a second ....??" moment.

I remember once he said "proprietary software subjugates people" and I just sort of blinked a bit. It seemed sort of over the top. And over time I started to understand that the way things end up working out, it is very true.

I always wonder why people usually choose to neglect privacy issues about Apple.

First, there was Apple scanning photos to check for child abuse[0] (that obviously got no attention on this site), then there was this one - Apple uploading hashes of all unsigned executables you run.

Do people really accept that company's "privacy" selling point?

[0] https://news.ycombinator.com/item?id=21180019, https://news.ycombinator.com/item?id=22008855


Is it even legal that Apple is retrieving this information?

Apple already has every iPhone user's photos, messages, browsing history, keychains etc.

Not sure how a list of installed apps is going to be worse than that.


Not if you choose to not sync them.

Yup, you can choose to not use iCloud backup and back up offline in an encrypted way (even over wifi) if you’d like.

How could this possibly not be absolutely awful on projects that run hundreds of executables during their execution (e.g. some shell wrappers like oh-my-zsh call out to a large number of different scripts every time they run)?

It looks like it is done once per executable's lifetime. Changing the content doesn't cause it to rerun.
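A minimal sketch for checking that claim yourself, assuming a fresh path triggers the check while an edited file at the same path does not:

  F=/tmp/once-test-$RANDOM.sh
  echo $'#!/bin/sh\necho one' > $F && chmod a+x $F
  time $F                            # fresh path: first run includes the check
  time $F                            # same file: fast
  echo $'#!/bin/sh\necho two' > $F   # change the content in place
  time $F                            # per the parent comment, still fast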

If you don’t trust Apple, don’t run a multi Gigabyte closed source OS they provide.

I can confirm that executing a trivial script takes 20-200ms longer on the first run. Using 10.15.

not sure if I'm lucky or somehow I disabled something but the trivial script problem isn't affecting me on any of my machines. I am using Homebrew for a large % of command line/scripting so maybe that's why?

Privacy-wise it may be a plus, since in theory notarization provides some protection.

Speed, definitely not, this is going to make things slowwwww


> provides some protection.

That's security, not privacy...


Although insecurity leads to less privacy as well.

Insecurity leads to loss of privacy, but security does not lead to privacy. Things can be secure and non-private by design.

Sometimes, but sometimes security measures lead to less privacy. Say, if executing local programs sends information to a remote server.

If that information can’t be used to identify anyone then it retains privacy while being secure. Being slow would still be an issue.

I experienced this one day while tethering in the train. I was coding and running `go build` multiple times.

I could not for the life of me understand why go build would take upwards of 30 seconds to run and sometimes 100 ms. I finally realized it was related to my internet connection being extremely spotty. I went online and searched if anybody had the same experience with `go build` but couldn't find anything.

I finally know what happened. This is a pretty intolerable "feature".


Does it work at all when unconnected?

There seems to be a delay of about 5 seconds, then it "gives up" trying to notarize your program .

I don't remember if it did or not, but I'm fairly certain it did. (otherwise I'd probably remember it, I think...)

As someone living in China, this is my result when I connected to my VPN (this is my normal life, thus I can visit sites like HN):

> Hello

> /tmp/test.sh 0.00s user 0.00s system 0% cpu 5.746 total

> Hello

> /tmp/test.sh 0.00s user 0.00s system 79% cpu 0.006 total

And even if I didn't connect to my VPN:

> Hello

> /tmp/test2.sh 0.00s user 0.00s system 0% cpu 1.936 total

> Hello

> /tmp/test2.sh 0.00s user 0.00s system 78% cpu 0.005 total

That's just ridiculous and unbearable.

Apple should provide a way to disable this notarization thing, and the user should still be able to enable SIP while disabling it.

additional information:

- macOS version: 10.15.4

- terminal: iTerm2 3.3.9

- didn't install any "security" software


Is HN blocked in China?

HN has been blocked in China since about 9 months ago.

https://news.ycombinator.com/item?id=20676573


I'm curious what your results would be with the stock Terminal. Do you have the settings that others have talked about under "Security > Privacy > Developer Tools" with Terminal.app listed? If so, and the results are better with Terminal, then it'd be interesting to see if the issue is fixed when you add iTerm2 to the list of exempted apps as well.

I have tried what you suggested. Granting "Developer Tools" access definitely FIXED THIS ISSUE for the specific application.

Here is the new result (I only run once for each case):

    ╒══════════╤═════════════╤═══════════════════════════╕
    │          │             │ +"Developer Tools" access │
    ╞══════════╪═════════════╪═══════════════════════════╡
    │ terminal │ 1.448/0.004 │ 0.016/0.004               │
    ├──────────┼─────────────┼───────────────────────────┤
    │ iTerm2   │ 1.240/0.006 │ 0.024/0.007               │
    ╘══════════╧═════════════╧═══════════════════════════╛
`1.448/0.004` means the first time it is `1.448 total`, and the second time it is `0.004 total`.

(It seems I have "good" VPN/internet connection condition at this time)


Upvoted for ASCII table alone

The check doesn't happen when there's no network connection. I wonder whether it would be possible to filter out and automatically block notarization traffic, or if it's all encrypted with cert pinning to prevent this type of MITM + filtering.

Dropping packets when there is an otherwise working connection could potentially make the delay even worse depending on timeout or retry strategy used by Apple code. I assume that in the fast case without network connection it checks the network status flag and doesn't try to do any network connection at all.

I'm still on 10.14, but I guess it will show up on Little Snitch. Unless they bundle it with some other more essential traffic.

Okay, I've tried this test on my MacBook Air 2020 several times, first by saving the "echo Hello" shell script in an editor and then, because I wasn't getting the results the author experienced, trying again exactly as he wrote it. Essentially the same result:

    airyote% echo $'#!/bin/sh\necho Hello' > /tmp/test.sh
    airyote% chmod a+x /tmp/test.sh
    airyote% time /tmp/test.sh && time /tmp/test.sh
    Hello
    /tmp/test.sh  0.00s user 0.00s system 74% cpu 0.009 total
    Hello
    /tmp/test.sh  0.00s user 0.00s system 75% cpu 0.007 total
Is it possible that Allan Odgaard, as good a programmer as he unquestionably is, has something configured suboptimally on his end? Because it just strikes me as super unlikely that Apple has modified all the Unix shells on macOS to send shell scripts off to be notarized. (From what I've read, while shell scripts can be signed, they can't be notarized, and Gatekeeper is not invoked when you run a shell script in Terminal -- although it is invoked if you launch a "quarantined" shell script from Finder on the first run, but it treats the shell script as an "executable document." This is the way this has worked for years, as I can find references to it in books from 2014.)

I have my complaints with macOS Catalina, and I know that Apple's "tighten all the screws" approach to security is anathema to a lot of developers (and if there was a big switch that I could click to disable it all, I probably would), but I'm using Macs running Catalina every day and I gotta admit, they just don't seem to be the dystopian, unlivable hellscape HN keeps telling me they are. At least off the top of my head, I can't think of anything I was doing on my Macs ten years ago that I can't do on my Macs today. ("Yes, but doing it today requires an extra step on the first run that it didn't used to" may be inconvenient, but that's not the same thing as an inability to perform a function -- and an awful lot of complaints about modern Macs seem to be "the security makes this less convenient." There's an argument to be had about whether Catalina's security model strikes the right balance, of course.)


I don't experience a delay in Terminal.app either, but I've tried running the script with a fresh install of iTerm2 while capturing with Wireshark and it does look like the script triggers a connection to an Apple server

I initially saw the delay in Terminal.app, but then it went away! I've made sure Terminal doesn't have the "Developers Tools" permission but the network request delay is still missing.

However, I was able to reproduce this by downloading a whole new terminal app, Alacritty. With the random script and file path I can always reproduce the delay in Alacritty. My guess is Terminal.app might have some special case behavior?

See my comment above on some shell script that does the random file name stuff for you.


I just ran the same script on iTerm2 and had no delay.

I had no delay either until I reinstalled iTerm2; I have no idea why.

Obviously I can't say that's impossible, it would just be... very weird, and would seem to contradict what Apple Developer Relations was saying on Apple's devrel forums as recently as this year.

So it's an actual documented fact that it happens. I agree that overall macOS still has a very nice UX and I'll never go back to Windows... but it's very clear Apple is turning their OS into a locked-down platform to the degree they have with iOS. It's not weird that it's happening, it's real life...

> and if there was a big switch that I could click to disable it all, I probably would

First, disable SIP to allow yourself to modify the system. Then, disable AMFI, the component responsible for code signature checking, entitlement enforcement and all that very useful stuff, with a kernel argument:

    nvram boot-args="amfi_get_out_of_my_way=0x1"
Then you should be done.
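Spelled out as a sketch, assuming you accept the security trade-off (csrutil has to be run from Recovery):

  # 1. Reboot into Recovery (hold Cmd-R), open Terminal there, and run:
  csrutil disable
  # 2. Reboot back into macOS, then set the boot argument:
  sudo nvram boot-args="amfi_get_out_of_my_way=0x1"
  # 3. Reboot once more so the kernel picks it up.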

That argument reads to me like the implementer knew this stuff was obtrusive.

I might be wrong about this but if you're running a shebang'd script directly as an executable, they wouldn't need to modify the behavior of the shell itself but rather the executable loader. It would be interesting to see whether, e.g., `bash test.sh` doesn't phone home where "./test.sh" does.
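A quick way to compare the two cases, reusing the article's /tmp/test.sh (a comment further down the thread runs exactly this test):

  time bash /tmp/test.sh   # exec() is on /bin/bash; the script is only an argument
  time /tmp/test.sh        # exec() is on the script itself, via its shebang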

10 to one says this is because you've run something calling /bin/sh before.

If he switched /bin/sh out for /bin/zsh or /bin/bash, whichever his default shell was, he wouldn't have seen the first delay.


That's plausible -- but I'd be (mildly?) surprised if Apple hadn't pre-okayed binaries they supply with the OS. Even if you flip the Super Paranoia switches in privacy settings, you don't need to give macOS explicit permission to launch Apple-supplied binaries from the Finder.

Most vendors have separate engines for detecting malicious scripts. I'd assume notarizing is more about executables, in which case it would be checking the signatures around the shell binary.

Also worth noting that "echo" doesn't spawn a process but is a routine in the shell itself. If you replaced echo with something that does spawn a process (like scp) it would be interesting to see the results. And if that doesn't introduce latency, then I'd try it with some hello-world programs with a UUIDv4 in the binary to ensure they haven't seen the hash before.
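A minimal sketch of that last experiment, embedding a fresh UUID so the compiled binary's hash is new every time (assumes cc from the command line tools; the file names are arbitrary):

  U=$(uuidgen)
  printf '#include <stdio.h>\nint main(void){ puts("hello %s"); return 0; }\n' "$U" > /tmp/hello-$U.c
  cc -o /tmp/hello-$U /tmp/hello-$U.c
  time /tmp/hello-$U   # first run of a never-before-seen binary
  time /tmp/hello-$U   # second run, for comparison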


> Also worth noting "echo" doesn't spawn a process but is a routine in the shell itself.

In Bash echo is a builtin but /bin/echo also exists if you do actually want to spawn a process.


Maybe OP edited a few times but it doesn't look like they are doing that to me

I'm not sure I understand?

try again with a randomized filename

There was a thread on the almost-forgotten Cocoa-dev list about this: https://lists.apple.com/archives/cocoa-dev/2020/Apr/msg00008...

Catalina has a huge number of things that synchronously block application launch, and if any of them fail you get nothing but a hung app. A friend and I have a running discussion of the many ways an application can just hang; we send samples and spindumps to each other, trying to figure out the right daemon or agent to kill to get the process to start responding again. It's madness.


I tested whether running a script you just wrote really contacts Apple to “notarize” it. It does.

I first used the author’s timing method. First runs are consistently about 300 ms, subsequent runs consistently about 3 ms. Something is happening at first run.

Some in the comments are saying it’s “local stuff”, so I tested timing again with internet off. First runs go to about 30 ms, subsequent remain the same. So there is “local stuff”, but it doesn’t explain the delay.

Just to be entirely sure, I installed Little Snitch and got clear confirmation: running a script you just wrote results in syspolicyd connecting to api.apple-cloudkit.com. syspolicyd is the Gatekeeper daemon.

I don’t know what exactly is being sent. Maybe somebody else can do a proper packet analysis.
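For anyone who wants to reproduce the same A/B comparison, a rough sketch (assumes Wi-Fi is the en0 interface; adjust for your machine, and use a fresh file for each run since the check is per-file):

  F1=/tmp/online-$RANDOM.sh;  echo $'#!/bin/sh\necho hi' > $F1 && chmod a+x $F1
  F2=/tmp/offline-$RANDOM.sh; echo $'#!/bin/sh\necho hi' > $F2 && chmod a+x $F2
  time $F1                                # online: includes the network round trip
  networksetup -setairportpower en0 off   # go offline
  time $F2                                # offline: only the local checks remain
  networksetup -setairportpower en0 on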


I still love macOS, a lot. Since moving over after the disaster that was Windows 8 (and by then I was already using MacBook hardware), I've become a loving power user e.g. with AppleScript and setting up hotkeys or other ways to do absolutely anything I want on the screen. It really is still as powerfully customisable as Linux. Turn off SIP if need be.

My only problem in moving to Linux software is that I prefer Apple's hardware. I'm on the 2019 16-inch MBP. Linux's compatibility with all the T2 and SSD hardware isn't there yet, but apparently it almost is.

If Linux on the T2 MBP becomes solid and stable in the next 1-2 years, after extensive testing I may move over permanently. I already use Linux on secondary computers, and I love and value its privacy. Same with my phone. I just love my privacy.

My needs are a high bar though. Productivity must be held back by nothing. I use macOS Notes extensively and it syncs with my iPhone, which is an extremely useful tool for me to note things down, both in audio and in text. It needs to be reliable and - heh - 'just work'. I just discovered the cross-platform 'Standard Notes' app; with a bit more money paid out to Linux-compatible services like that, maybe it can all work. Casual Photoshop can be taken care of via a VM.

Surprisingly, macOS Catalina is itself a disrupter to my productivity. It seems buggy as hell - glitchy, and weirdly slow for many extremely basic things - all since Catalina. I just don't get it. Is it caused by this article's observation? Something's definitely going on.

Maybe Apple will fix this in the next release? Like how they fixed the keyboard?

Either way, I still want to move to Linux on this fabulous (fixed) hardware that is the 16-inch MBP. (T2 issues aside.)


I have a 2019 Macbook Pro 16in and I hate it. It runs exceptionally hot (leading to massive performance problems), doesn't get enough power from the adapter to start with no battery, doesn't play nicely with my display, needs restarting every couple of days so Chrome doesn't crash and takes forever to boot.

That's just the technical problems. I'm willing to give the UI a break, since it's probably as much me adjusting as it being bad.

This is my first Apple anything, and if this is what "just works" looks like, I don't want it. I could be more productive on an Android tablet at this point.


Actually, I do agree with you with some of those observations. Apple's been trying to fix their terrible T2 issue and I suspect some of the problems lately have been them trying to prevent the T2 reboot crash, while ruining other parts of the experience in the process as a necessary compromise. It may get worse (or better) as they move to all-Arm architecture.

I also am sick of the touch bar now - after 2 years living with it. I have to press it twice to actually pause my media, because it's an LCD screen and it has to auto turn off to prevent burn-in. That's a regression from the old hard media button in the Fn row which was both instant and far easier to press. At least we got 'Esc' back.

But man, their trackpad...nothing beats it. Still.


> it's an LCD screen

OLED.


I hear OLED can be just as bad if not worse. So same diff.

Much worse. Just explaining why that would be a problem.

Mine starts spinning up the fan (there's kind of a pattern as to when), heating up the entire computer. The computer had previously been fine.

I usually have to restart and reset the "SMC" to stop the fan from nuking the computer.

I can let the computer drop to 5% battery life and the fan will turn off and the computer will cool down. Which is the opposite of what you want if it was actually overheating.


Counterpoint, I also have the 16 inch 2020 MBP as my first Mac work laptop and absolutely love it. No issues, it works perfectly, and I’m 2x as productive on it as I was on my previous Ubuntu setup.

Do you write anywhere online about your workflow setup using AppleScript? It sounds interesting. I’d like to configure my macOS experience more.

Oh it's not like I have a Cmd+<X> for every single possible task you can imagine, it's a very tailored and customised set of sometimes complicated scripts for my weird personal needs that I've built up over the years.

Each time I want to do something, I goddamn will spend 8 hours figuring it out if I have to. E.g. this: https://apple.stackexchange.com/a/381441/163629 - one hotkey to change macOS Notes text into a specific hex colour (and/or bold, etc.). It took me a day but I worked it out. Where there's a will there's, 99 times out of 100, a way.

You can seemingly do almost anything with AppleScript. Emphasis on almost.

Here's another example: right after I plug in my iPhone via USB, I have one hotkey to automate a little-known feature of macOS where you can turn your Mac into a speaker dock for the iPhone. Awesome thing when you have the dramatically improved 16-inch MBP speakers. Here's my AppleScript for that; just customise according to your iPhone name near the bottom and try it out: https://pastebin.com/raw/9BY710Y6

YMMV, if you have additional audio devices in sound prefs so may need to change the code a bit.

AppleScript also has the ability to run Unix shell scripts and commands, so with Homebrew able to install most common Linux packages, you can go wild if you want.
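A tiny sketch of that bridging in both directions (osascript and `do shell script` are the stock mechanisms; the commands themselves are just examples):

  osascript -e 'do shell script "uname -a"'              # AppleScript running a shell command
  osascript -e 'display notification "build finished"'   # shell running a one-line AppleScript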

I'm definitely not 'advanced' applescript level, I'm intermediate. Hundreds of HN readers would know more than me. I just google and think until I find a way. I'm not a programmer.

I have other shortcuts e.g. to control the MPV media player even if it's not the currently active window. Again, weird personal needs, but awesome. AppleScript to the rescue.

FastScripts is how I assign universal hotkeys to any of my applescripts.


Would be great if you could write about the scripts you hack to optimize your workflow

I hope Apple currently has a team focused on macOS perf.

I worked on the team in charge of improving iOS (13) perf at Apple and IIRC there was no dedicated macOS “task force” like the one on iOS.

Luckily some iOS changes permeated into macOS thanks to some shared codebases.


I agree. This kind of behavior certainly smells like teams doing their development work on high-capacity low-latency networks without much performance oversight.

> I hope Apple currently has a team focused on macOS perf.

Apple hasn't given a fuck about macOS since 2015.


I wonder what % of their users are developers only begrudgingly sticking around for iOS builds.

> IIRC there was no dedicated macOS “task force” like the one on iOS

It's not surprising. Macs are less than 10% of Apple's revenue.

https://www.macrumors.com/2020/04/30/apple-2q-2020-earnings/


Except all of Apple's other devices are built on macOS. Apple's clear de-prioritization of macOS based on revenue numbers is so insane I can barely believe it's happening. If developers, who use Macs in large numbers today, go to another platform, there's a very real risk that their entire empire starts to come apart at the seams. And, this may just be me being naive, but it doesn't seem like that much work to keep macOS going: all they have to do is stop trying to turn it into iOS. They are literally doing a tremendous amount of active engineering work that drives developers away from their platforms.

They are risking their entire empire because (apparently) someone at Apple has an axe to grind with macOS's Unix underpinnings. And until they start getting real consequences (developers leaving in huge numbers), it doesn't seem like it's going to stop. The tragedy is, if they ever do reach that point, where developers are leaving in huge numbers, it'll be too late. Platforms are a momentum game: you're either going up or you're going down. And once you're going down, you're as good as dead.


Agree. That's probably also one reason why more and more people want to use cross-platform app frameworks instead of developing for iOS natively. That way, you can do most of the dev work on Windows and Android, and you'll only need to use Mac & XCode for compiling the iOS binary.

And I'd wager that some iOS games are released without the developer ever touching XCode: https://docs.unity3d.com/Manual/UnityCloudBuildiOS.html


Signing and submitting apps to Apple is fairly annoying to do without Xcode.

Unity has a service where they do it for you.

100% agree! If more people understood this, I hope this narrative would gain some traction and eventually reach Apple management.

To me, the idea that an OS is mostly finished is completely bananas. There's so much room for improvement and hardly any of that potential was tapped into in what's starting to feel like a decade.

And if Apple had invested into a successor for Cocoa, there might be a larger gap between native apps and (Electron) web apps, leading to some lock-in. Instead most new stuff is not native and for good reasons (and I do dislike the way they don't adhere to Mac conventions, but still).

I think ultimately the problem is Tim Cook. He's too attached to Apple's stock price. I think that's the one metric that he believes rates his performance. But inertia is a bitch. Like in politics, the effects might hit hard only once he's out and it could be too late to fix by then.

If I think about how much this impacts the economy overall (i.e. make millions of knowledge workers a little bit less efficient) then I can only hope that I'll see more sophisticated organizational structures in my lifetime that prevent such erosion.


Tim Cook is Apple's Ballmer; who is their Nadella?

I was thinking exactly this, 8 years ago. I moved from an imac + mbpro to linux only.

It took longer than expected. I even intended to buy put options, but someone I trust told me otherwise and to invest in equity instead, which I did, because I know that most buy decisions are not made rationally.

But it looks like the time has come now? On the other hand, I have been off by several years before. People are crazier than you think, especially when it comes to status and association with brands and self-confirmation of past decisions. They might well put up with Apple's moves for a few more years.


But at Apple scale: 9% of $58 billion = $5.2 billion Mac revenue last quarter.

Yes, that is what drives me crazy whenever people say the Mac is only 9% of revenue and they don't care about it.

If Mac revenue were separated out on its own, it would be roughly a Fortune 120 company, higher than Kraft Heinz, with plenty more room for growth. Apple only has 100M active Mac users; there are 1.4B Windows PCs.


OTOH when Apple was a much smaller company the mac was much more important to them and it showed.

Maybe it's not related to revenue per se, but clearly since iOS became their main thing the Mac has suffered tremendously.


Apple's Macintosh division is the most profitable PC company in the world and has been for at least a decade. In fact, Macintosh is likely more profitable than all other PC companies combined.

Less than 10% is no excuse.


Like I said in another comment, it's not about the revenue per se, but it's undeniable that the more popular iOS is, the less Apple cares about the Mac.

Do you have a source for that claim?

It's not surprising. Macs are less than 10% of Apple's revenue.

Without Macs for developers and other content creators that other 90% doesn’t exist.


Exactly. Especially given the Xcode lock-in nonsense.

It's surprising that they don't improve the developer experience for their own developers using their own tools, including hardware.

Apple uses the same tools you do. They just might not be using them like you are; you can find a lot of features that clearly have no reason to exist outside of Apple nonetheless shipping with their software.

> Apple uses the same tools you do.

No. A special directory can be created at the root of the file system called /AppleInternal. Then, if you work at Apple, you can put some special files there that do stuff. I've read somewhere that they are able to easily disable all of this privacy protection crap and other annoying stuff.


There's nothing really special about /AppleInternal, it's just a fairly normal directory that a couple of tools change in order to do things like offer more detailed diagnostics or the option to create a Radar. On a normal internal install there are some internal utilities, many of which are listed here: https://www.theiphonewiki.com/wiki/Category:Apple_Internal_A.... But their code is all Xcode projects and stuff, it's not like they're really using special tools for themselves except in certain cases. There are a couple of internal tools that possess entitlements to bypass security, but more often than not engineers just run with the security features disabled, which you can do yourself.

That's kind of my point - it's surprising to me that they're shipping slow hardware and software, when they're used to develop that same hardware and software. Developer time is expensive.

I would actually be quite happy if the engineers were forced to work on four-year-old MacBook Pros and develop against Display Zoomed iPhone 7 and the second generation Apple Watch, using the toolchain and software they push to their developers.

Is there a list somewhere of Apple's in house dev environments or workflows? I wonder what cool tricks they use internally that could be pretty useful generally.

Nothing special that can really be talked without internal context. You can get a hint at how they use their own tools though (which are available externally) if you pay careful attention to their public appearances and presentations.

Very messy internally, every team has their own.

I wouldn't be surprised if they've determined that developers will generally put up with a bad experience in order to have access to the massive iOS market.

There isn't much incentive to improve because they know that people will buy their hardware regardless.

Not to mention people defend and market their products for free.


Maybe internally they are using a different version of macOS?

It’s basically the same ones you’re running, possibly a couple builds ahead and with all the security features turned off.

Nope

I find it funny how people are downvoting your innocent comment pointing out a fact... out of anger and hate for the actual fact :D

What changes permeated into macOS? What did your team do to improve iOS perf?

So many of the frameworks have shared code between macOS and iOS (e.g. MapKit, Foundation, Contacts etc..), so a perf fix in iOS pays dividends on macOS too.

Perf changes are too numerous to mention, I’d recommend watching last year’s WWDC keynote describing the iOS 12 v/s 13 perf advancements.


They set "fast = true" as a global constant variable.

I would give anything to have my Mac be fast again. I have no idea what changed but even 10.14 feels a whole lot slower than it was earlier. Haven't upgraded to 10.15 seeing all the negative reviews it is getting when it comes to perf. Apple needs to seriously give perf a priority for Mac. Do they really expect developers to use a Mac to develop Apps when it is slow as molasses? I shudder to think what will happen to the Apple ecosystem if developers migrate to another OS for development. Apple will come crashing down. I don't wish for that to happen but looks like there is absolutely no one at Apple focused on making it better.

Remember, people don’t write blog posts saying nothing changes. The negative reviews tend to be one of two things: spotlight reindexing shortly afterwards, or attribution error where every new thing is blamed on the OS upgrade and similar old behavior is mentally discounted. App development didn’t suddenly get “slow as molasses” and for most users the install was a reboot and back to work.

This is completely insane. I am so glad I decided years ago to leave closed operating systems behind.

This design seems to cement the trend at Apple to position their products as consumer appliances, not platforms useful for development.


> I am so glad I decided years ago to leave closed operating systems behind.

The problem is, there's nothing else out there. Everything is going to shit in one way or another. Windows is now a disaster, Linux was always a disaster in terms of user experience and isn't improving.

Mac OS was the last bastion of somewhat good, thoughtful design, user experience and attention to detail and now they've gone to shit too.


If you add "unfixable" to "disaster" the problem becomes more clear.

Windows is an unfixable disaster; you can't fix it, sorry.

Mac OS is now an unfixable disaster; you also can't fix it, sorry.

Linux may be a UX disaster, but you can, uniquely, modify it. You can change your UI. You can attempt to fix the problem, and have a real shot at doing so.

Linux is the only one where you can do something about the problem - which is a strong reason to prefer it.


Not only can you modify Linux in theory, it is actually getting _easy_ to do so.

The biggest reason I enjoy elementary OS as a distro is that everything lives on GitHub, package releases happen through GitHub Actions, etc. Fixing a bug can be faster than merely filing a radar in the Apple ecosystem.


>Linux was always a disaster in terms of user experience and isn't improving

I'm honestly pretty baffled as to what keeps this meme alive, as KDE and GNOME are both very popular and provide simple, intuitive interfaces for the typical user. Plasma is only complex if you're the type that really wants to customize, but there its complexity is (mostly) necessary for its wide range of possible configuration. People have this idea that desktop Linux users are all a bunch of dorks playing around with Arch and tiling window managers all day and then posting their anime wallpaper setups on /r/unixporn, but that hasn't actually been true for a long time.


Yeah Linux is awesome. I don't get the hate either. I have like 5 apps I use in Linux Mint, and they look exactly the same way they do in MacOS (Spotify, Discord, Firefox, Godot, Sublime, VSCodium, Terminal)...

The settings UIs in Mint are easily way better than in Windows and Mac.


> Linux was always a disaster in terms of user experience and isn't improving.

Nonsense, 'Linux' can be what you make it. You can have it as sleek as something straight out of the fruit factory or as spartan as a VT100 and anything in between. If you're new to the game the pre-packaged 'consumer' distributions might be a good starting point but for those with a bit of nix savvy - of which I assume there to be many on this board - those bells and whistles probably just get in the way.

If my 8yo daughter and my 82yo mother can use Linux - the latter through a remote X2go session from her kitchen table in the Netherlands to my server under the stairs in Sweden - I'd say people around here can be assumed to be able to handle it. The nice thing about 'Linux' is that you can change out those parts which you find disagreeable for whatever reason for those you like better, this in contrast to that "last bastion of somewhat good, thoughtful design, user experience and attention to detail" which by your own statement has been changed into excrement. Just take out the shitty bits and replace them with something better... oh, no, not possible...

That is why the parent poster is right in this sense, things in 'Linux' land might not be perfect - and can never be 'perfect' since one person's perfection is another's nightmare - but at least you get to do something about it.


Linux was always a disaster in terms of user experience and isn't improving.

Curious: what have you tried? People who use "Linux" as a catch-all in terms of UX usually have only tried a single distribution with a single desktop environment.


I feel like people still have in mind what the Linux desktop was 15/20 years ago. It has improved a lot in the past few years: battery life on laptops is better, Ubuntu, which was already very stable and feature-complete, gained a lot in recent releases, and I've personally been running Arch on my main computers for 5+ years now and haven't had any major issues while upgrading.

Try using the latest version of software that has a more frequent release cycle than Arch. If you hit an incompatibility, there goes your install.

I have yet to see a distro do multi-monitor HiDPI that results in readable fonts out of the box.

This gets updated yearly - https://itvision.altervista.org/why.linux.is.not.ready.for.t...


This list is quite comprehensive, but also quite boring. It's just a list of bugs and things that are suboptimal on Linux. You could write one about any operating system. Some of the items like 'such-and-such needs to be configured using a text file' are also not even real problems.

What do you mean by 'there goes your install'? There are multiple ways you could run bleeding-edge software before it's packaged for Arch. See for example every 'xxx-git' package in the AUR. Or Flatpak.


Arch does not have a release cycle, sorry.

People who have used ubuntu might want to just once try arch linux.

I had an ubuntu machine that took a while to boot even with an SSD. Later I installed arch linux on the same machine and boom! it would be to the desktop in seconds. It was night and day.


Debian is just as quick, and does not have the problematic "rolling" updates of Arch. (It does have the "testing" and "unstable" channels which are roughly comparable, but the Debian folks won't tell you to use them in production.)

> problematic "rolling" updates

Rolling updates for me have not been problematic.

I've had a few updates that gave an error message, and they were easily fixed in one minute after searching the arch website.

I think one was a key expired - I had to manually update it and redo the update process.

The other I can recall was a package that had become obsolete/conflicting and a question had to be answered.

In general rolling updates are a tiny blip every few months.

In comparison, the several debian based distributions I've run have been a "lost weekend" type of upgrade for major updates.


Debian is not just as quick (significantly slower and higher resource usage), but Arch isn't all that fast nowadays, either.

Moreover, I've been running Linux for decades now, both in my personal laptop and at work, and Ubuntu has been (mostly) frictionless for me. I'm not an average user, of course, but for most users a friendly distro would work just as well as Windows (browsing the internet, using whatsapp web, watching movies). In some cases I've had a better user experience with Ubuntu than with Windows or OS X, namely seamlessly installing a wireless HP laser printer.

I only tried Ubuntu, a few month ago. For the day or two spent with it:

- multi-language support requires a lot of work to get to the same point as macos.

In particular I use third-party shortcut mappers to get language switching on the left and right command keys (mimicking the JIS keyboards, but with an English international layout). That looks like something I'd have to either give up on or code myself.

- printer support is not at the same level.

Using a Xerox printer, some options that appear by default on macOS were not there on Ubuntu. I'm sure there must be drivers somewhere, or I could hunt down more settings. But then my work office has two other printers. It would be a PITA to hunt down drivers every time I want to use another printer.

- HiDPI support is still flagged as experimental, and there are a bunch of hoops to jump through to get a good setup in multi-monitor mode. Sure it's doable, but still arcane.

- sleep/wake was weird. It would work most of the time, but randomly kept awake after closing the lid, or not waking up when opening. Not critical, but still not good (I'd hate to have the battery depleted while traveling).

Overall if I had no choice that would be a fine environment. But as it is now, with all its quirks, I feel macos is still a smoother environment.


Fair enough. I'm not a Mac OS X user so I don't know how it would compare. I can only compare it with my past experience with Windows, and I think it's superior (for me) to Windows circa 7 -- I stopped using Windows entirely at that point, so I wouldn't know how later versions of Windows fare.

Portability is also a fair issue to raise, but it's simply not a problem for me. When I say Linux "on the desktop", I literally mean it: to me a laptop is simply a slightly more portable desktop computer. I sometimes take my work laptop to/from the office, and the battery lasts long enough for that. I'm not worried about longer trips, since I don't use laptops for that. Again, if you do care about this (which is completely fair), I'm aware many Linux distros still have issues with battery life. You certainly can't compete with a Macbook Pro, that's for sure!

I do note that my experience with printers is opposite to yours. Like I said, when trying to connect to an HP wireless printer, Ubuntu autodetected and self-downloaded the necessary drivers; however, it took a lot of patience to get it to work with a Macbook Pro. Today, that I have it configured for my Ubuntu laptop and my wife's Macbook Pro, the Mac will sometimes fail to print (the print job simply stuck in limbo) while my laptop prints reliably. Who knows?

And like I said in another comment, I game (or used to, anyway) a lot with Ubuntu, and many games are even AAA (though they tend to arrive later than on Windows).

So I really have a hard time believing Linux is not "ready for the desktop". It is, and has been for many years now.

edit: one last thing. You mentioned HiDPI modes, multi-monitor, multi-language... none of those are for average users. My mom would be comfortable browsing the net, reading mail and watching movies on Ubuntu. She doesn't even know what HiDPI is, nor does she want external monitors. (Spoiler: she still uses Windows because she can't learn anything else at this point... I've thought of tricking her by theming Ubuntu to look like Windows, but that would just be mean.)


With Linux you have to pay for proper support. HP is by far the best company in terms of supporting Linux printers. It isn't the Linux ecosystem's fault that other printer companies do not care.

Interesting. I regularly use RHEL (server/CLI only) but have not tried desktop Linux in a while.

I get a fair bit of weekly exposure to Windows 10 and well, it's not like heaps of fun, UX wise.

I'm reluctant to drop Apple mainly because I'm so 'tied up' with the rest of the ecosystem, iphone, Apple Music, iCloud etc.. They are not irreplaceable (for sure) but it always feels like moving away will cost way too much effort and be a pain... Well played, Apple.


> I'm reluctant to drop Apple mainly because I'm so 'tied up' with the rest of the ecosystem, iphone, Apple Music, iCloud etc.. They are not irreplaceable (for sure) but it always feels like moving away will cost way too much effort and be a pain... Well played, Apple.

This is why I don't want anything by Apple.


This is a good point.

It's really hard for me to use non i3wm supporting OSes now, even though I have to use Windows from work, and have used Macs for the better part of the last 2 decades personally and in college.


I use Linux every day, and it's a UX disaster. I have tried GNOME, Xfce, Cinnamon, and KDE; I like none of them. The only DE that I somewhat liked (Unity) was discontinued.

Linux sucks, but I use it because it sucks less than Windows, for programming at least.


How interesting, I like Cinnamon and Gnome and KDE, but didn't like Unity. Instead, for me, the problem is poor printer support.

> Curious: what have you tried? People who use "Linux" as a catch-all in terms of UX usually have only tried a single distribution with a single desktop environment.

Yup. You've just described a disaster. How many permutations of <hundreds of distros> x <dozens of DMs> must a user try before finding a good UX?


Mac is a BSD. OpenBSD exists. FreeBSD exists. NetBSD exists.

Because there are at least four BSDs, Mac therefore isn't good.

Do you see how ridiculous applying that logic to any operating system is?

Linux isn't a disaster. It's a kernel. There are Linux distributions with great user interfaces and great UX, developed by people who are great at it. There are also distributions that aren't.


> There are Linux distributions with great user interfaces and great UX

Could you name some? No sarcasm, actually interested!


> Do you see how ridiculous applying that logic to any operating system is?

Somehow, when you ask a person about PC or a Mac, the answer is: Windows or MacOS, and then the discussion is about their quirks, or advantages, or deficiencies.

You ask about Linux, and this is what you get:

> Linux isn't a disaster. It's a kernel. There are Linux distributions with great user interfaces and great UX

So, once again: which one of the hundreds of permutations of <distro> x <DM> has a great UX?


macOS is actually kind of mediocre at being a BSD these days ;)

Ubuntu pretty much works out of the box for a lot of "regular" users (I'm excluding gaming, which also works but is not as easy).

I'm sure there are other user-friendly distros that similarly let average users browse the internet, write documents, listen to music and watch movies painlessly.


I'd say gaming on Ubuntu LTS (if not Linux in general) is quite easy provided you stay in the safe haven of games that natively support the OS, which to be fair is a pretty solid selection of games these days albeit one which is pretty much a strict subset of the games on Windows. As soon as you go outside that area and start messing with Wine or whatever all bets are off, though.

> Yup. You've just described a disaster.

Hardly. The existence of a distro I don't like doesn't degrade my experience using a distro I do like. You may as well be upset at an ice cream shop for having dozens of flavors when you only like strawberry. Choose the one you like and ignore the ones you don't. It's not rocket science, even children can figure that out.


> The existence of a distro I don't like doesn't degrade my experience using a distro I do like.

The problem under discussion here is not that of using a distro you like, but finding a distro that you like.


Linux has been a delight to use for me. Things were rough 10-15 years ago, but it's pretty amazing now.

Any distro in particular you'd recommend?

Fedora 32 Workstation is pretty good if you want to see the best of what Linux can offer. It may not be the lightest and fastest distribution but it is easy to install and everything works. You'll get to experience Gnome which is the most original Linux desktop environment and the best one in terms of user experience in my opinion.

If you want something more traditional with the start menu or dock or desktop icons, perhaps something like KDE Neon is better place to start. It might feel more familiar. Will be lighter/faster too.

Put each of them on a USB and run them live on your machine for few minutes each and see which one makes more sense to you.


Ubuntu, Pop!_OS, Fedora...

Each of them has something done better than the others, but all of them are delight to use.


not him but same experience, from my previous comment:

I would recommend: Ubuntu, Linux Mint, Elementary OS, Pop!_OS

if you want: nice experience out of the box

I would recommend: Arch, Gentoo, Debian Net inst, Void

if you want a base system and install things you want on top of it


Thank you @all for the suggestions! I'm going to set aside some time to experiment with these and see how far I get.

Gentoo needs vastly better documentation to be useful.

IMO Fedora or Ubuntu. I've used Fedora now for the last few years on Thinkpads (currently Carbon X1 6th gen) and it has been pretty much "just works"

The trick is to go all-in on KDE if you want that Windows feeling where things just work.

And in that case the distro choice should be KDE Neon.

Fedora or Ubuntu

I think the fact is there simply isn't a solution that works for both the "layperson" and highly technical people who want to do development. Laypeople cannot be trusted to admin their machines, but experts need access to those bits. Leaving a backdoor to real admin access for the experts just means laypeople will abuse those backdoors and mess up their machines again, with dire consequences for the entire planet. You see the same problem with power user UI features vs dumbing down for phones and average users. People keep trying to bridge this divide and I'm just not sure it can be done.

> Laypeople cannot be trusted to admin their machines

Yeah, but they're the ones who paid for their machines. So... you're saying they're not allowed to use them how they wish?

> Leaving a backdoor to real admin access for the experts just means laypeople will abuse those backdoors and mess up their machines again

Remembering the last 20 years of computer history, most of the critical fail wasn't caused by "laypeople abusing backdoors" but horrible security holes in popular, widely used software packages: Outlook, Flash, Acrobat Reader, Internet Explorer. Apple/Microsoft are not locking down their OSs to protect users from themselves, but rather from other developers. We, software engineers, seem to have completely failed our users as a profession.


Someone being tricked into installing malware doesn't usually make the news.

> Linux was always a disaster in terms of user experience and isn't improving.

This is as true today as saying Java is slow. Why not just try it? You might be pleasantly surprised.


I've tried it recently and still find it true. Death by a million paper cuts.

What did you try recently? Java or Linux?

Chrome OS?

I happen to enjoy using Linux on my laptop. In fact, I think it's pretty great. But that's because I can customize it to work the way I want, something that I found hard or impossible to do back when I was using macOS.

I hate bloated OSs, and unfortunately Mac OS is one of them. I know how everyone wants everything to work out of the box, and I know it's very natural to want that, but I cringe if I find my OS doing something behind my back. That's why I'd never use Windows, Mac OS, Ubuntu, etc. They all violate my privacy and slow my system to do so.

I use Debian, I like Debian. When I run Wireshark I don't see unknown requests destined to debian.com. That is the definition of simplicity for me. And yes, it doesn't always work out of the box, you have to install some drivers, change configurations but it's getting better and easier. Yet, I'm a software developer so I understand and like that stuff.

> Linux was always a disaster in terms of user experience and isn't improving.

No, you can't define it as a disaster, it's not. If you're an end-user that understands nothing of computers maybe you can but otherwise it's not a disaster. It's just harder and getting easier by day.


>> Linux was always a disaster in terms of user experience

Try Pop!_OS. I switched from macOS and it's been a relatively painless experience with some tweaks.


The funny thing is, Linux has amazing User Experience if you go all-in on the latest KDE and its associated tooling.

I set my Mac-loving girlfriend up with Kubuntu for this reason.

I'm pretty sure that you have never used Linux... Just try it.

Buy a Mac and put ElementaryOS on it to avoid the slowdown and have a slick experience.

https://elementary.io/


Might want to make it a used/refurbished Mac. Newer Macs don't run Linux well (at least as of yet); the whole T2-chip based stuff on newer machines is especially problematic.

From the comments, roughly, are you running third party "security" tools?

> Is there any "security" software running on your Mac? I've seen this sort of thing caused by that, but not in general.

> I ran the two line test and it had no delay at all. The Mac doesn't check for notarization on shell scripts or any non-bundle executable. I just did it again with a new test2.sh and Wireshark capture and there is nothing.

> I do a lot of Keychain code and I've also never seen those delays. The reason I suspect they told you not to use that API is that it's in the "legacy" macOS keychain. They really want everyone to move to the modern keychain but lots of people, myself included, still need the older macOS specific features.

> I'm not saying you are crazy, but all of these things though are the trademark reek of kernel level security software that is intercepting and scanning every exec and file read on the system. We had an issue with Cisco AMP once that took Xcode builds from under 10 seconds to over 5 minutes until we were able to get it fixed.


The only kernel-level security software on my systems is Little Snitch, and I’m pretty sure it doesn’t do anything unless there’s network activity, so it doesn’t explain anything.

Reminds me of the terrible delay I faced after having Sophos installed on my Mac.

Having to wait 5-10 seconds for a new terminal tab as Sophos churns (checking autocomplete scripts, rbenv, etc.) was infuriating. Oddly, there was fate sharing with its Internet interception, so there was a good chance the browser was getting dragged down too, and vice versa.

Convincing corporate IT of how bad the problem was was maddening. Based on what this author says, 10.15 on rural internet sounds like hell.


The funny thing is it's not transitive. There's no slowdown if you invoke the script via bash explicitly in a new shell.

  % rm /tmp/test.sh ; echo $'#!/bin/sh\necho Hello' > /tmp/test.sh && chmod a+x /tmp/test.sh
  % time bash /tmp/test.sh && time bash /tmp/test.sh
  Hello
  bash /tmp/test.sh  0.00s user 0.00s system 83% cpu 0.004 total
  Hello
  bash /tmp/test.sh  0.00s user 0.00s system 77% cpu 0.003 total

vs the one from the article:

  % rm /tmp/test.sh ; echo $'#!/bin/sh\necho Hello' > /tmp/test.sh && chmod a+x /tmp/test.sh
  % time /tmp/test.sh && time /tmp/test.sh
  Hello
  /tmp/test.sh  0.00s user 0.00s system 2% cpu 0.134 total
  Hello
  /tmp/test.sh  0.00s user 0.00s system 73% cpu 0.004 total

(edited for formatting)


When you run "bash hello" you are calling exec() on bash, passing "hello" as an argument, which bash then reads; when you run "./hello" you are calling exec() on hello: the kernel then treats "hello" as an executable, but notes that "hello" starts with "#!" and then will run the specified interpreter for you, passing "./hello" as an argument. The kernel doesn't think of "hello" as a program when you run "bash hello".

Are you sure it's just not cached from the prior result? If I run the article's commands twice in a row, the 2nd time is faster.

I am using Ubuntu 20.04 on a ThinkPad X1 Extreme Gen 2 and you would be surprised how "normal" it feels as a development machine. Sure, there are some little annoyances: the touchpad behaves a little worse than on Windows, sound is a little worse. But the most important things, keyboard and screen, are excellent. The system in general does not feel like the horror stories people keep telling about Linux on the desktop (notebook). Now that WSL2 is getting CUDA, even Windows looks workable. Their new terminal app is amazing. After a decade of Mac notebooks it was quite liberating, and I would not switch back even if the flaws in macOS were fixed. macOS is for sure the nicest of the big 3 operating systems, but for development work Ubuntu is hard to beat for me. YMMV, but it won't hurt to look around at what else is out there.

I've been seeing the trajectory of Windows (pre-2012 or so) -> Mac (2012 - ~2019 or so) -> Linux (~2018 - now) play out with quite a few people without any issues.

And I don't mean developers. They're all pretty educated people but it's taken me by surprise. They come to me in frustration over Mac, they don't want to return to Windows and they really, really, really want linux. I've been using linux since about 1997 so they come to me. I usually push back, thinking "do you really want a unix workstation?!" but they insist.

My strategy has been some x2xx lenovo (like x230 or so) for about $300 from ebay, 8/16gb of ram or so with an SSD, the extended battery pack, putting mint on it and then just handing it over. Everyone, much to my continued surprise, has loved it and are really happy with it.

It's happened 4 times now and I'm still shocked every time. They've told me they use youtube to figure things out.

They're fine with libreoffice, gimp does what they need, supposedly spotify works on it fine, they don't know what bash or the kernel is and it's all fine. Incredible.


Adding to anecdotal, same trajectory for me, for web development. Really happy with Manjaro on Razor Blade 15 for a year now.

I recently _really_ tried adopting Linux on a hobby development machine that I built back in 2016 (hardly new hardware -- and desktop not laptop). Sleep never worked, graphics sometimes borked, UI felt janky and inconsistent, icons are super fugly and often too theme-y to the point of being undifferentiated at a glance, HiDPI support is a giant mixed bag (in 2020), machine would randomly freeze (mostly elementOS; Ubuntu didn't freeze as much), Hauppage drivers rarely worked consistently and often required reboots, I hated the mouse acceleration curves and was horrified to learn they were effectively hardcoded in X (I'm not talking just speed which is tweakable), gstreamer was nightmare to develop for, the Ubuntu & elementaryOS stores are a joke, and the mix of apt/snap/nix was very frustrating and the opposite of user-friendly.

I switched back to my 2012 MBP and it's predictably gone well since, plus I get iMessage integration with my iPhone.

YMMV


Yeah - the hardware really has to be curated. I haven't tried using a machine cobbled together from various parts (custom desktop), but off-the-shelf quality laptops have worked fine for me for the last 2 years or so, with none of the issues you mentioned. Emphasis on quality - not cheapo models. I think if you treat Linux the same as OSX and run it on known-good hardware that is well supported by Linux, you are fine today IME.

> HiDPI support is a giant mixed bag

I will say that this is still a thing, although with experimental GNOME fractional scaling support it works pretty well now.
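
For anyone who wants to poke at it, this is roughly how the experimental flag gets turned on in a GNOME Wayland session (a sketch; the exact key can differ between GNOME versions, and you need to log out and back in afterwards):

  gsettings set org.gnome.mutter experimental-features "['scale-monitor-framebuffer']"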

Honestly I have a 2019 macbook pro 15 and have more problems with it than I do with my Thinkpad X1 Carbon 6th gen with Fedora 32.


See, that's the response I was used to and the one I expected to get from everyone.

The crazy thing is that I haven't heard it yet from the people I helped. Times may actually be changing now, just not swiftly. Perhaps it's the "decade" of desktop linux.

It's also not because linux is so great but because windows and apple are constantly stumbling over their own shoelaces and shooing customers away.


True. Amusingly, I was always trying to make Windows behave more like Unix, but now I'm trying to make Linux behave more like Mac (just a few things, like the global keyboard bindings).

The major pain points are nearly all related to lack of integration with my iPhone (with Messages being the big one, followed by Notes).


Not associated at all but due to loving it, I wanted to share PhotoPea as you mentioned Gimp.

https://www.photopea.com


try this:

$ google-chrome --app=https://www.photopea.com


Seconded. I used to work on a Mac laptop for years, then started using a beefy Linux desktop tower on the side for some work that benefited from higher hardware resources. A few months later I realized that I had slowly grown into doing all my work on Linux, even when I didn't need the hardware, mostly because i3 and apt were so much better than the Mac equivalents, and that I was only opening my Mac laptop to walk into meetings. After realizing that I ditched the Mac laptop for a Linux laptop and haven't looked back.

I still use a Mac at home for entertainment (I'm typing this comment on one), and I have to say it works much better used that way. I don't have to worry anymore about random Mac OS upgrades breaking functionality that Apple doesn't care about because it's not part of their vanilla out-of-the-Apple-Store experience, but is vital to me as a developer such as 3rd party window management, dock improvements, keyboard tweaks, or not delaying every new execution by phoning home (LMAO).


Yup. Ubuntu 20 is the first desktop linux OS that just worked. Every other Linux desktop before it has had suspend/resume issues, wifi issues, sound issues, 3d issues, ratchet settings (things that can be set but never unset without some arcane magic), weird desktop behaviors, buggy software that crashes all the time, etc etc. Yes, I've tried ALL of them, including pop os and deepin.

This year marks the first year that I can just use linux without having to debug it.


These things are highly hardware-dependent. Typically it takes a few years until support for new hardware devices, features or platforms stabilizes. But it can even take way more than that, and some less common and lower-quality hardware may fail to get support altogether.

But macOS is very hardware dependent too.

Been putting off upgrading from 16.04 finally got it working a while back and was afraid to touch it.

Might give 20 a shot


Longtime Linux user (Manjaro) and I never thought I'd see the day when I could pitch it as noticeably superior to MacOS, considering Apple's once-legendary attention to user interfaces. It seems like those days are behind us, now.

Linux as an actually better experience, without gigantic embarrassing flubs like this, is looking better by the day.


A slowdown when you run an app for the first time, for security reasons -- I wouldn't categorize that as a "gigantic embarrassing flub". I haven't noticed it, actually. But I don't run new apps every day.

I think you're misunderstanding the problem, respectfully. This is not a problem for end users. This is a problem for developers - and calling it a gigantic, embarrassing flub is justified for something as bad as this.

Think that's hyperbole? Look at this, from the link:

> The first time a user runs a new executable, Apple delays execution while waiting for a reply from their server. This check for me takes close to a second.

> This is not just for files downloaded from the internet... this is everything. So even if you write a one line shell script and run it in a terminal, you will get a delay!

Consider a developer in this situation.

If your job involves lots of scripting - not unusual, for a dev - and you create dozens of scripts a day, or more - every single one will take about a second, and up to 7 seconds (!) to run, that first time you run it. And that could easily happen upwards of a dozen times a day, because it will happen for each script you create.

That's pretty terrible, for a developer. I don't think you can normalize startup times, for some hacky script, of 1 second as pretty okay or not noticeable. Certainly not if you're talking about a high end work machine.

Times that bad are associated with some junk laptop that's 15 years old - that's not supposed to be Apple.

Even if you build apps (I do), you might have the need to create scripts now and then, possibly even a lot of them (I do, for testing). I don't consider it acceptable to wait 1 sec+ each time I run one. It really does suggest that Apple has gotten extremely careless about their developer audience.

So, yeah - compared to that, Linux performs way better, and looks like a premium work machine by comparison.


I never intended to switch away from Mac OS; it just sort of... happened. As Mac OS has grown more paternalistic over the years without adding any notable capabilities that I care about, it's felt steadily easier to just go use Linux instead. It has its own frustrations, but it can always be made to do what I want, and then it just behaves. Starting around Ubuntu 16.04, I found that the balance of frustration was tipping; these days I don't really bother to use my personal Mac any more. I still have one for work, but I'd certainly rather use Linux there too if I had the option.

For touchpad issues in Ubuntu uninstall xserver-xorg-input-synaptics and keep only xserver-xorg-input-libinput installed.
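
In case it saves someone a search, that looks roughly like this on a recent Ubuntu (a sketch; package names assume the stock X session, and you need to log out and back in, or restart the display manager, afterwards):

  sudo apt remove xserver-xorg-input-synaptics
  sudo apt install xserver-xorg-input-libinput   # usually already present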

Isn't Ubuntu much worse than this with the push for Snap packages? It can take 10-30 seconds to open software installed through it.

From what I hear, the Snap package complaints are a lot of FUD; Ubuntu is still using normal packages except for the application store app. You can always use Debian or Kubuntu if you prefer function over form.

I have a ThinkPad with Ubuntu 19. I'm very happy with it; it's nice to have apt, and to be able to eg use minikube with docker driver rather than a VM.

It's also true that the trackpad isn't as good as Windows. (It used to be that Mac had the best, but Catalina managed somehow to screw up the trackpad and make it laggy. Catalina has not been good for me!)


Windows is still very much subpar, even with support for CUDA in WSL2. Loading packages is terribly slow in Windows, for some reason. Also don't get me started on package management (no, Anaconda doesn't cut it).

I got pretty good results with chocolatey.

But I agree that even WSL2 didn't cut the mustard, and I doubt GPU support will fix it. MS is advancing too slow, I think.


I've gone full circle. Went from desktop linux (mostly Arch) to OSX ~7 or so years ago, and now due to a combination of frustration with the butterfly keyboards and then a slew of issues with macOS itself, I'm back to linux desktop for my dev machine.

From my perspective as a quote-unquote power user, it feels like Apple just constantly insists on shooting themselves in the foot with unnecessary and ill conceived innovations. Either way, I'm happy with my new setup and probably won't go back to macbooks anytime soon.


I would love to switch back to Linux but Apple's Retina displays are absolutely beautiful and there is no way I could enjoy going back to anything with noticeably lower pixel density on a laptop. I'd like to be told I'm wrong, but as far as I know it's not really possible to recreate a comparable high pixel density experience under Linux on a laptop.

Well, it is. However, it's much easier with resolutions suited to 2x scaling, so 4k on a 15" XPS works great. As for fractional scaling (needed for 4k on 14"/13"), it's still kind of a work in progress; I think it will be ready when Chromium on Wayland finally lands (I expect at least 1 more year). If you don't use Electron/Chrome, you can use it right now.

Obviously you can use less elegant solutions like changing fonts but it won't work with multiple displays with different resolutions.


Two years ago, I helped a friend install Ubuntu Linux on a Retina Macbook Pro, and it worked like a charm. If you're looking for a new laptop entirely, there are loads of 4K+ Linux-compatible laptops out there (ThinkPads are probably your best bet).

Thanks. What do you think about this post? The author sounds knowledgeable and I think it contradicts what you said to some degree (in that the experience and app support are not good even though Linux is installed on a machine with a high-DPI display):

https://news.ycombinator.com/item?id=22958647


I don't know about Ubuntu, but my experience with Gnome on Arch Linux and Arch-derived distributions has been pretty good as far as high-DPI displays go. I've only had to make minor tweaks to a few configurations here and there depending on the application.

If you want to avoid tweaking, stick to native applications, and perhaps more importantly, go for a manufacturer with proper firmware support for high-DPI screens like System76 (Adder WS), Dell (XPS 13), or Lenovo (ThinkPad P1/P53/X1).


It seems the new Dell XPS finally have a touchpad which is close to the ones on the MacBooks. The touchpad and display are the two things which hold me back from switching away from Apple.

Many of us who have been using Linux just fine on desktops and laptops for decades find those horror stories to be overstated...

I would definitely consider moving to Linux for my next laptop - unfortunately I do a decent amount of iOS development, which I realize isn't impossible to do on Linux, but I can't imagine it'd be worth the hassle. :/

When I switched, I just made the macbook not suspend on lid close, plugged it in and left it running 24/7. Then I just screen shared or ssh'd in in whenever I needed to do something iOS related.
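
If anyone wants to replicate that setup, this is roughly how I'd keep the machine awake from the terminal (a sketch; pmset behavior varies a bit across macOS versions, so treat the exact flags as an assumption and check `man pmset`; a closed lid may still force sleep unless an external display is attached):

  # never sleep while on the charger
  sudo pmset -c sleep 0 displaysleep 10
  # or hold the machine awake only while this command runs
  caffeinate -dims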

The dual GPU is a pain in the butt since Nvidia still doesn't support Optimus on Linux (and probably never will).

Have you tried 19.10 or 20.04? Before that I had a lot of issues with my Dell XPS 9560 because of optimus, but it got a lot better in those versions. YMMV but it actually worked out of the box with nary a hint of manual configuration when I installed 20.04 recently.

Edit: should note, when I say work I mean you can switch between GPUs/launch an app on the dedicated GPU with ease.


I've tried 19.10 and Arch Linux and the only option still was to statically choose only one GPU and reboot. How does the offloading work now? I haven't heard anything about it

19.10 added the "NVIDIA On-Demand" profile in Nvidia Settings. It needs the driver version 435 or newer.

It works okay, but you have to launch processes with a specific set of env variables to use the Nvidia card.
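
For reference, these are the environment variables NVIDIA's PRIME render offload documentation describes (needs driver 435+; glxinfo and the app name are just examples):

  # verify that offloading works: should report the NVIDIA card as the renderer
  __NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia glxinfo | grep "OpenGL renderer"
  # run a single application on the dGPU while the desktop stays on the iGPU
  __NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia blender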


That is not true anymore. With 20.04 it supports hybrid graphics just fine. The only issue I had was sharing cuda and OpenGL context since GL ran on the Intel card. This should not be a concern for most people I assume.

Can you run everything on the iGPU and only activate the Nvidia GPU to do the render offloading on single apps? If you can, I should try 20.04 on a laptop

Yes exactly. This way you have all the GPU memory available for accelerated apps. Not sure if it works for all use cases but worked for me.

OSX used to be the OS that started really quick, and ran really smoothly. Certainly far better than Windows. Also search was lightning fast. It was a selling point on its own. But recently it has slowed to a crawl. And I have to ask, what business is it to Apple whether I store a script somewhere? I don't even want them to have a checksum. And I don't want to go through the bother of having to change settings for it either. Do they even ask if this is OK? For me this is just yet another reason to steer well clear of Apple products in the near future. Very sad, because I really used to love their stuff.

>OSX used to be the OS that started really quick

Cold-booting Windows 10, from pushing the power button to reaching the login screen, takes 7s for me (i7-7700, M.2 SSD, 32 GB RAM).

I never ever had quicker startups on OSX.


When I tried out Mac OS X for the first time during the late 2000s, it was really striking how much better OS X was compared to Windows, especially for "creative professions": video, design and the sort. But since then, I have to hand it to Microsoft; they've really stepped up their game. They even seem to be fixing some of the non-UX incompatibilities now. Granted, it's nowhere near good enough, but with PowerShell it's workable, at least for the projects I'm currently working on. For the more demanding stuff, I'll probably still VBox a Linux distro, whereas that remained completely unnecessary for me on OS X. (I'm speaking about the whole personal experience and package deal here, so that's why I'm not mentioning things like Docker.)

> OSX used to be the OS that started really quick, and ran really smoothly.

It was quite slow compared to OS 9, but even most Linux installs have way better performance on equivalent hardware. Windows really is dog slow by comparison.


This is true, but then Linux has a whole host of other issues that makes it nigh unusable for Muggles and non professionals. Thus, if they're not an avid gamer, I'd usually recommend OS X, until about 2016. Then I stopped doing that.

Damn, I too have noticed that when developing in compiled languages (C, C++, Go, Rust, what have you) the first execution after a recompile is always noticeably delayed. I thought it was odd but didn’t bother digging into it. This must be why! (Can’t recall having this problem with scripting languages, but maybe subsequent modifications don’t trigger a notarization check? Edit: Yeah TFA does mention this.)

For anyone looking for more information on what happens on the first run of an app in Catalina, see [0]. Here's a direct link to the diagram [1].

[0]: https://eclecticlight.co/2020/01/27/what-could-possibly-go-w...

[1]: https://eclecticlightdotcom.files.wordpress.com/2020/01/appf...


Can anybody actually confirm these claims? I'm no fan of the new notary system, but in my experience the behavior described is not how things work. Has there been an update or change in behavior recently?

I've been running a Debian thinkpad for the last meaningful stretch of time, but from what I recall macOS quarantines any files created by the user via an extended attribute `com.apple.quarantine`. Quarantined files are not allowed to be executed by gatekeeper. It's not about a network check, they just can't be executed. If the user removes the quarantine attribute, then gatekeeper will shut up and the files will execute normally. Alternatively, if a file has a signed hash stapled to it i.e. if it has been notarized, then gatekeeper will also allow execution after verifying the signature. This doesn't require a network check either.

Interestingly, the way to bypass the quarantine behavior is to unarchive a folder. Archives themselves include the quarantine attribute, however, files extracted from the archive using a terminal program (a "developer tools" program) don't. And so macOS doesn't care. Also tools like `curl` don't apply the quarantine bit to downloaded files so curling a binary or shell script still works just fine.
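
You can inspect and clear that attribute with xattr if you want to see it in action (the paths here are placeholders):

  # show extended attributes on a downloaded file
  xattr -l ~/Downloads/some-binary
  # strip the quarantine flag from one file
  xattr -d com.apple.quarantine ~/Downloads/some-binary
  # or recursively for a whole app bundle
  xattr -r -d com.apple.quarantine /Applications/SomeApp.app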


Notarization is an additional check that ensures that Apple has not revoked permission for the software to run.

It looks like my time with MacOS is rapidly coming to an end. Any Linux distro recommendations these days?

I switched almost 2 years ago after 15 years on Macs.

Fedora 32 Workstation is pretty good if you want to see the best of what Linux can offer. It may not be the lightest and fastest distribution but it is easy to install and everything works. You'll get to experience Gnome which is the most original Linux desktop environment and the best one in terms of user experience in my opinion.

If you want something more traditional with a start menu or dock or desktop icons, perhaps something like KDE Neon is a better place to start. It might feel more familiar. It will be lighter/faster too.

Put each of them on a USB and run them live on your machine for few minutes each and see which one makes more sense to you.


I switched from MacOS to Linux years ago. For a developer workstation these days I'd probably either go with Ubuntu LTS or Fedora (my personal choice). Either runs fine on my XPS 13.

Note: I really wanted to like WSL, but it just didn't work for me.


Have you looked into WSL2?

I just recently switched from Mac OS to windows and it really hasn’t been a bad experience.

I would go full Linux but the drivers for the GPU on my laptop seem to be a bit of a mess currently.


GPU switching (NVIDIA Optimus and the like) seems to be a major headache to get working on Linux. My current laptop (XPS 13) only has an integrated GPU, so I ssh into a desktop for running CUDA stuff.

But no, haven't tried WSL2, I'm comfortable with my Linux setup so not to keen on messing with it at the moment :)


Ubuntu 20 has been a pleasant surprise, it seems to have turned a productivity and speed corner.. I've been getting lost in it for hours on end and forgetting to use my MacBook.

The feeling reminds me of the first Macbooks I used when switching away from Windows Vista.


It feels amazing to finally hear some good Ubuntu news. We need it. The only sleeker options (Windows and macOS) are horrendous for privacy. Thanks for sharing; I might try out Ubuntu 20 then. Might it be as sleek as Linux Mint?

It's funny you mention Linux Mint; it was the only other distraction I could get lost in for hours. I'd still be fine with Mint for personal browsing. At the time, I was running Mint in a VM on macOS to try it out, and Cinnamon was much more performant than Ubuntu 18. Ubuntu 19/20, however, seems to have narrowed or closed that gap.

So far Ubuntu has been great as a default dev/staging workstation. It’s nice not to have to fight with homebrew or docker permissions or other issues on the Mac and spin up most anything.. and it just works.


Fedora "just works" and has the some of the more sane defaults. Only tweaks one typically needs to do is add the RPM Fusion repos and, at some point, disable/tune-down SELinux when it is a bit too paranoid.

Give Pop OS a look. It's based on Ubuntu with some additional UI polish.

https://www.youtube.com/watch?v=QGcvHMNaDd0


Pop_OS!

By far the best linux I've tried when trying to get feature parity with macOS.


After you've gotten used to Linux, you might want to try Arch.

It is lightweight, since you choose everything that is installed, sort of opt-in.

It has all the latest software.

It has "rolling releases" which means there is never a giant lost-weekend distribution upgrade.

It has the AUR (arch user repository) for just about any software ever.


I've never lost a weekend to a Debian dist-upgrade. Just read the release notes carefully beforehand, take a full backup of your data (which you should be doing anyway), make a note of any non-Debian applications you're using on that machine (that's the stuff that will need the most extensive testing post-upgrade) and it should simply work.
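
The mechanical part really is short; it's the reading and testing around it that takes the time. Roughly (a sketch; the codenames are placeholders for whatever releases you're upgrading between):

  sudo apt update && sudo apt upgrade        # bring the current release fully up to date first
  sudo sed -i 's/oldcodename/newcodename/g' /etc/apt/sources.list   # point sources at the new release
  sudo apt update && sudo apt full-upgrade   # the actual dist-upgrade
  sudo apt autoremove                        # clear out obsolete packages afterwards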

I have. debian, raspbian, ubuntu. A few times it has gone well, only to find there was cruft left over from previous installs.

"it should simply work" is not a given on any linux.

I'm not denigrating those distributions, there are lots of reasons to have a stable release without a lot of things changing (especially development).

It's just that changing lots of assumptions at once is fragile.


I used Arch on a server once (still running) but found the experience on Debian was more to my taste, and somehow never liked pacman. Maybe it's time to take another look. I never tried it on the desktop.

Interesting, I have opposite experience. Pacman looks so much simpler than aptitude, apt-get, apt-cache, dpkg. And makepkg - it just works. I have not managed to create packages on Ubuntu.

No outdated packages, no PPAs, no release upgrades. The install is rough, but it shows off how simple the system is.

Ubuntu is a good starting point. But there is so much more.


I agree about makepkg / PKGBUILD -- I've casually made packages.

https://wiki.archlinux.org/index.php/PKGBUILD

For debian/ubuntu it is not as straightforward.
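
To give a sense of how small a PKGBUILD can be, here's a toy one (everything below is made up for illustration; a real package would pull sources and checksums rather than writing a script inline):

  # PKGBUILD -- toy example, not a real package
  pkgname=hello-sketch
  pkgver=1.0
  pkgrel=1
  pkgdesc="Tiny demo package for makepkg"
  arch=('any')
  license=('MIT')

  package() {
    # install a one-line script as /usr/bin/hello-sketch
    mkdir -p "$pkgdir/usr/bin"
    printf '#!/bin/sh\necho Hello from a pacman package\n' > "$pkgdir/usr/bin/hello-sketch"
    chmod 755 "$pkgdir/usr/bin/hello-sketch"
  }

Running `makepkg -si` in the same directory builds it and installs it with pacman.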


Windows 10 with WSL if you have a laptop.

Debian or similar or ArchLinux if you have a desktop.


For reasons of personal prejudice, I'll never install any Windows version on any hardware I own. Debian was always my first choice back in the desktop linux days, and still is for servers, but I haven't looked at the landscape recently. It seems to have become more consolidated, which is not surprising but still mildly disappointing.

Edit: and WSL is not Linux


> WSL is not Linux

It is Linux as of WSL2, it's just also Windows, so you lose many of the advantages that would make a person recommend Linux in this thread.


TIL. But yes, for me, not having Windows installed is the primary advantage of any non-Windows OS.

Also my first choice for servers and have used it several times on desktop so Debian would also be my recommendation even for a desktop these days.

Plus, if you're already familiar with how Debian works, it should be a no-brainer. None of those Ubuntu or other Debian-derived distros with extra sugar and bloat that often differ from actual Debian in just the right way to keep you scratching your head.

Even Debian "stable", which in the past was notorious for having super outdated packages, is pretty good for desktop these days and has greatly improved in that regard. Obviously, "sid" is still also a good pick for a desktop if you really need to always run the latest of mostly everything.


Debian still feels like home. Unless I try a BSD or something without systemd I think this is probably where I'll end up.

Well, Debian does use systemd by default now unless you want to go through some hoops to remove it (which I believe is still possible but not sure).

I personally have really no issues with systemd and now even go as far as completely removing the ifupdown, isc-dhcp-client, resolvconf and ntpd packages in favor of having my entire network stack configured by systemd-networkd, systemd-resolved and systemd-timesyncd instead.

It's pretty much a standard now across the board and I can't really find any arguments against it besides old habits so I've embraced it. Although it's obviously a bit opinionated, there is a good deal of functionality and flexibility on that thing.
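
As an illustration, the minimal wired-DHCP version of that setup is only a few lines (a sketch; the interface glob and file name are assumptions for a typical ethernet NIC):

  # write a simple .network unit: match any en* interface, use DHCP
  sudo mkdir -p /etc/systemd/network
  printf '[Match]\nName=en*\n\n[Network]\nDHCP=yes\n' | sudo tee /etc/systemd/network/20-wired.network
  # let systemd handle networking, DNS and time sync
  sudo systemctl enable --now systemd-networkd systemd-resolved systemd-timesyncd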


I understand, but for laptops it's pretty bad these days if you want all the features your laptop provides, and good energy management.

On mobile it's much better with Android, but Android isn't adapted to laptops. I haven't tried ChromeOS but it's pretty restricted from what I understood. WSL2 on Windows is Linux and it works great for me but I understand if you don't want windows in your life.


Depends on the laptop. I've had good experiences with thinkpads and business class Dells on Linux (and BSDs, for that matter).

Probably. My ThinkPad has so many issues and unsupported features according to the ArchLinux wiki that I don't even want to try.

Same.

>paying for windows to install linux

I would recommend: Ubuntu, Linux Mint, Elementary OS, Pop_OS!

if you want: nice experience out of the box

I would recommend: Arch, Gentoo, Debian Net inst, Void

if you want a base system and install things you want on top of it


Mint has been my daily driver for a year; it does a fine job so far.

https://www.linuxmint.com/

It's Ubuntu without the bullshit monetization.


And with a better default DE

Kids love Manjaro these days.

If in doubt, just switch to Ubuntu (there are better alternatives, but it's a good starting point). I'm done with macOS (though I really loved it).

Thought I was going insane seeing delays myself on a daily basis since Catalina. Turns out I'm not insane but a victim of Apple's continuous neglect of Mac OS.

How can something as damning as this ever reach end consumers without getting detected?


If Microsoft wasn't doing ever worse privacy things with Windows I'd seriously look into switching away from Mac OS given the ever growing issues it's been having with every release.

The set of possible operating systems to consider does not contain two items.

It does depending on what software you want to run.

There is no actually good alternative to Photoshop. GIMP is not remotely in the same league. Pixelmator and Affinity Photo are brought up, but they're also like nano vs emacs. Photoshop doesn't run on Linux AFAIK. I'm sure for a graphic designer the same is true for Illustrator. The cheaper alternatives exist and you can maybe get by, but so many features are missing.

If you're into games there is really only Windows. Same for VR.

I'm sure there are other categories.

I did serious dev on Linux and that dev didn't require any games or apps so it was great and I loved it. It ran my editor of choice and otherwise I only needed a browser and a terminal. But as soon as I step out of that small subset it's pretty much MacOS or Windows only, at least for the things I want to do with my computer.


I wonder how viable just running PhotoShop in a VM is these days, if you have the extra RAM and are OK with the extra minute to boot up the VM each time to use the program?

VirtualBox has a 'seamless mode' as well, I wonder how well it works on a Linux host and a macOS/Windows guest.


I find Linux to be a usability nightmare. Weird cut and paste behavior, difficult to resize windows, terrible trackpad support. macOS and Windows will have to get a lot worse before I switch.

I found that, at least in GNOME and KDE Plasma, window management works pretty much the way Windows does. Cut and paste is just cut and paste. Do you mean how you can select text and use middle click on the mouse to paste, without needing to do anything but select?

There are two X clipboards. They are implemented differently (as in "ownership" model of the content) and the implementation bleeds out everywhere.

You can't remove or change this behavior because some people love it.

EDIT: FWIW the above statements are oversimplifying the situation of course: https://en.wikipedia.org/wiki/X_Window_selection

And more here: https://unix.stackexchange.com/questions/13585/how-can-i-use...

Most fans of Linux will claim the fact that you can choose any number of clipboard managers to customize things to your liking is a critical aspect that draws them to the platform.

Others among us (whether reformed or uninitiated) will commonly cite this same stuff as the reasons we avoid Linux on the desktop.


That's why I prefer the three-button UNIX-style mouse, and I don't ever recall having problems with window resizing on UNIX and unixlike systems.

how many DE did you try? you have a variety of choices now, I would recommend trying a popular one such as Ubuntu / Elementary OS / Linux Mint

You should get a very nice experience out of the box with these, which can be reproduced quite easily with less "bloated" distributions such as Arch or Gentoo if you prefer to install things yourself


Without WINE, and its associated instability, which operating system, other than MacOS or Windows, would run Ableton, Logic Pro, Adobe Premiere, or Final Cut Pro, all applications I depend on for my income and, due to the fact that my clients use this software, for which a FOSS equivalent or alternative doesn't exist?

Now imagine the millions of other people in my situation and rethink your comment.


> Without WINE, and its associated instability, which operating system, other than MacOS or Windows, would run Ableton, Logic Pro, Adobe Premiere, or Final Cut Pro, all applications I depend on for my income and, due to the fact that my clients use this software, for which a FOSS equivalent or alternative doesn't exist?

> Now imagine the millions of other people in my situation and rethink your comment.

The comment still holds. Linux should still be considered. I didn't proclaim that it would be a realistic alternative in every case, but I'd wager that for a large proportion of software engineering roles, it would be.

Is there software that may also be suitable for basic image and video editing work and therefore fine for a subset of these creative professionals you refer to? Absolutely. I've seen great results from folks using Blender, Inkscape, OpenShot, GIMP, Krita and others.

We shouldn't just dismiss an OS immediately, and that's what my comment was trying to get at.


At least 10.14 is supported for now.

It's really frustrating to see Apple make all these poor decisions and they almost never are willing to admit their mistakes and go back. In the rare case when they do (e.g. butterfly keyboard, Mac Pro), it takes them years to turn around.


> it takes them years to turn around.

or until they need something to throw out for investors. "dark mode" did not come about because of a technical breakthrough


That has been my view as well. It isn't that Apple is particularly good at anything software-wise (I will give them that they have an edge in UX). But Microsoft is so horribly bad that every time I look at it, it makes macOS look good.

Switch to Linux then.

The other thread reply on this topic notes the reasons Linux is not considered a viable desktop replacement for many people.

Personally I'd need to run a VM for a bunch of software or fight Wine. That's assuming my machine has the right hardware support for everything and even then the trackpad support is likely to not be great.


shrug I’m not gonna play a game of “why don’t you”/“yes but”.


I completely understand why things are going the way they are as our computing environment has become ever more hostile. But I am very nostalgic for the time where I would power up a Vic-20 and within seconds be able to get to work.

Teaching my daughter to program on a modern computer, we spend more time bootstrapping and in process, than we do in actual development.


That computers are just slower to interact with now is such a truism that we hardly remark upon it any more. It seems utterly insane that in the early 90's I could just run Windows 3.1 on a bit of kit that in all likelihood wouldn't even power a toaster today, and the experience was, well, frictionless. I don't recall ever thinking "wtf is this thing doing?", whereas today, by contrast, if I have the audacity to be afk for long enough for my Windows 10 box to go sleep I know I am in for an infuriating waste of minutes' worth of disk thrashing before the bloody thing even deigns to reacknowledge my existence.

I remember being able to watch network traffic, and if you (or some other actual person on your network) weren't doing anything, nothing would be there. Yes, even if you had a few webpages open but weren't clicking anything. Now your machine is "idle", you capture on your network interface, and it scrolls at hyperspeed.

I've been doing some network programming lately, specifically low level raw socket work. Sitting there with wireshark running the sheer volume of traffic with applications dialing home was kind of shocking.

I mean, I know it's happening, I (sadly) expect it to happen now. But seeing all the bits whizzing over the wire brought home just how much your machine is reporting about what you're up to.


This is upsetting for me, too. And for a few others. But actually very few people care because they just don't see it. The people who designed it this way take care that users at large have no idea what is going on.

It's really very sad, because users have no idea what is going on and there is no incentive for bad programs to improve (actually, there is generally incentive in the opposite direction, because it's work to write well-behaving apps). Users just know that they need to keep buying new computers and that their battery life is worse, but they can't figure out why so they point fingers at everyone but who they should actually be blaming.

Remember when shitty user-hostile spying wasn't a library you included that assured you in its readme it was "made with [heart] in California"? Ah, the days when only criminals and bigcos casually engaged in shady crap.

I switched to a linux desktop full time last week because of this exact problem. VPN w/ windows would flake out on me all the time, and I got sooo tired of just...waiting. Remember when windows search worked? Like, you could press the windows key, type what you were looking for and find it? Quickly?

Being able to turn the computer on, type in my password and have it be just..ready is so incredibly refreshing. Having a terminal with 0 latency, where copy/paste is sane? Worth a zillion dollars to me right now.

Currently playing with opensuse tumbleweed, i'll probably get frustrated by something and move to arch, so I can fix that something and also be frustrated by a hundred other things.


Windows search turning into bing search is one of the most frustrating little things. You used to be able to instantly pull up files by name but now it just dumps you random garbage from the internet.

It’s still really fast if you disable Cortana and Internet search results. I launch most programs by hitting the windows key, a few characters and enter.

I’ll look into it, I would love to have that functionality back

Rumors on the internets have spoken positively about Opensuse Leap & Tumbleweed, any truth to that?

I don't have a ton of experience with other options, but 2 weeks in, Tumbleweed has been pretty plug and play! Zero issues getting my netcore/python/golang/docker dev stack up. I get a weird popping noise in my USB DAC at the login screen, but that's the only issue I've had so far. Teams screen sharing even works perfectly! I chose it over Ubuntu 20 because I knew I wanted KDE and it seems like a first-class citizen in Tumbleweed, while still being vaguely stable. Not-quite-bleeding edge! I ran FreeBSD/KDE for fun back in the halcyon days of the LAMP stack, and GNOME never felt...right to me when I would test-drive Ubuntu desktop.

Good to know. Personally I think Ubuntu has gone downhill. I preferred Unity over GNOME. On a fresh install of Ubuntu, GNOME is confusing with its split into two taskbars that overlap in functionality.

Another vote from me for tumbleweed.

I call this 'Outsourcing the cost of development to the user'...

Getting knowledgeable people costs money so we build more abstractions that lower the cost of development and pass the costs of development from the company to the user in the form of requiring more hardware to do the same thing.

How come I need 16 GB of RAM these days when 8 GB did it yesterday? How come my phone needs 4 GB of RAM while my 2012 tablet had 1 GB? Sure, the hardware is cheaper, but we're still not using the hardware to its fullest.


My 256 MB RAM, 900 MHz Duron machine (single core, naturally) in ~2002 (IIRC?) could do just about everything my modern one can. We even had video chat! It was just much lower res. The limiting factor in online stuff was, by far, connection speed, not the power of my hardware. That was about the point where the hardware was fast enough and had enough memory that I could multitask in a modern way without hitting problems like popping/stuttering audio or bad swap issues. Aside from legitimate increases in memory use for higher-res media, most everything since then, from my perspective, has been pure bloat. Why does 16x that memory and two cores at double the clock feel insufficient for extremely similar workloads and software feature-sets? Fucking bloat is why. Largely, but far from solely, web-tech infesting everything.

Before that, my 64 MB RAM, 100 MHz Pentium could usually have a couple things open before it'd hit swap too badly. I'm talking like Word and a web browser, not calc and notepad. None of the equivalent programs to those can even open all on their own in a footprint smaller than 64 MB these days, let alone with other programs and the OS in the same space. Hell, how many operating systems fit in that with a GUI as capable and usable as, say, Win98se (let alone something really incredible on the performance front, like BeOS)?


I agree with the main sentiment, but I have made my peace with it. Mainly Java and Electron based apps because they do provide us with a nice thing that was impossible years before unless you wanted to become a digital hermit: Linux on the desktop.

I can now use simplenote, discord, slack, the jetbrains dev suite, visual studio code, and this is without including separate developments like Steam, which has made it effortless to switch between Windows, Linux and Mac.

That being said, I still consider macOS the superior OS (this call-home issue from the article aside), mostly because the font rendering still works better after all these years, Windows and Mac still have better-quality software available for them, and Mac still does not have the forced updates that Windows does. Also, I have noticed that in Ubuntu, in some Electron apps like Simplenote, copy and paste of text is funky at times, like not even letting me select stuff.


The reason is very simple: developers don't want to develop anymore, they just want to offload real programming to third party libraries, where what used to take 100 lines of code to accomplish will take 10K or more (because, obviously, the library will do the most general version of what it wants to do). All this is considered "good development practices", which means that programs will inflate to take whatever memory is available and run slower for as long as we continue to use the same practices.

and is absolutely encouraged by google and amazon, as delivering that bloat makes them money

What’s the point of cheaper disk and ram, and faster systems if not for supporting higher level abstractions?

To watch more, higher-def cat videos faster. No need to get lost in the weeds of higher level abstractions to do that.

is this a serious question ?

And now that "the web is the internet" even more than ever, developers and designers are giving us spinners/loading indicators ALL THE TIME. At least in my tabs they are.

The web is much, much, much slower than it used to be.


> Windows 10 box to go sleep I know I am in for an infuriating waste of minutes' worth of disk thrashing before the bloody thing even deigns to reacknowledge my existence.

Yeah, what the heck is this? I use a win10 box solely for gaming, and every single time I wake from sleep, Antimalware Executable keeps my machine from doing anything for several minutes. It's infuriating.


Silly user. The computer exists to update itself. Whatever trivial task you want to do is a secondary concern.

You joke, but there is a surprising amount of software that does not have its user as the primary thing it cares about.

Just get a proper antivirus and it will probably disable the built-in security suite for you

While making your computer even worse?

For many years, I had a very nice experience with NOD32. By far the best antivirus I have used in terms of UI and resources. Well, admittedly not that high of a bar.. but they really seem to care about efficiency and and elegance.

Considering the built in one is pretty slow (and gives useless notifications), I expect it would be an improvement.


I recall windows 95/98 being pretty slow to boot. I also recall being warned by teachers not to move the mouse while things were booting as that would allegedly slow things down further. These days the only real time I wonder "wtf is this thing doing" is when I'm waiting about 5-10 seconds for my mac to wake up from sleep.

Surprisingly, wiggling the mouse actually speeds up some windows operations.

https://retrocomputing.stackexchange.com/questions/11533/why...


Win 95 and its descendants had legendary poor boot times.

Things finally improved with XP, but W3.1x and W95 were anything but fast - unless you were playing Solitaire.


Here is a 200 MHz Pentium starting Win95, only about 20 seconds from "Starting Windows 95" to the login screen. 40 seconds including the full powerup/BIOS sequence. Not too bad.

https://www.youtube.com/watch?v=PwRR7-P-8fc


> It seems utterly insane that in the early 90's I could just run Windows 3.1 on a bit of kit that in all likelihood wouldn't even power a toaster today, and the experience was, well, frictionless. I don't recall ever thinking "wtf is this thing doing?" ...

I generally agree, but I sometimes ran Windows 3.0 on a 386SX-16 in the early 90s, and often wondered why it ran so slow on my admittedly underpowered but supported system.

At some point I read (perhaps in Compute! or BYTE) that Windows made something like 20 or 30 syscalls to draw one line of a window's border. That seemed exceptionally inefficient to me, so I stopped using Windows. I generally worked in DOS, but if I wanted a GUI, Geoworks provided an experience at least ten times better (subjectively) -- smooth UI, ability to multitask, a surprisingly good word processor and other well-designed software included.


Are you on a hard disk drive? I have bestowed upon myself the unique misfortune of running Windows 10 on a spinny disk.

This has quietly become a pretty serious issue. Most software developers have simply stopped caring about systems with traditional HDDs. This is even true on Linux - I found out a while back that all the KDE developers are using SSDs, which is why they weren't fixing issues where startup time is affected by disk latency. I eventually gave in and bought a 250 GB SSD for my old laptop, there was simply no other option.

If that’s what you really want, grab a used ThinkPad and put Arch Linux on it. It will boot in a few seconds and is much more powerful than a Vic-20.

Still doesn't give you a programming environment, unless you want to do bash.

How does that even make sense? It’s an OS, go grab a Desktop Environment and download nvim, VSCode or whatever.

The original line that I was responding to was

> Teaching my daughter to program on a modern computer, we spend more time bootstrapping and in process, than we do in actual development.

Arch Linux does not help with this, unless you make it boot into a VIC-20 emulator or something. Arch can help with boot speed, but once you're booted you're back in a full modern OS. So fine, install VSCode and Python... okay, now you get to figure out libraries. Manage terminals. Arrange a filesystem. This is not getting you closer to the VIC-20 or C64's "boot into BASIC".


This is very possible on Arch Linux, moreso than other distributions. After installing Arch, just run the following two commands:

  sudo pacman -S xonsh

  chsh --shell /usr/bin/xonsh
Bam! You're booting straight into a full Python environment when you turn on your computer. This is similarly achievable with other languages as well, including BASIC.
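
One small caveat I'd add (an assumption about a stock setup): chsh can refuse a shell that isn't listed in /etc/shells, so if the second command complains, register it first:

  # only needed if chsh rejects xonsh as a valid login shell
  echo /usr/bin/xonsh | sudo tee -a /etc/shells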

How about Processing. https://processing.org/

How does that even make sense?

Because that was the experience on those old machines. Switch it on, straight to BASIC prompt in a second or so. If you want to program it’s frictionless. And you can’t break it because BASIC is in ROM.


Flexibility vs complexity is a slippery slope.

If you want that today get a BBC microbit, switch on and you're directly in a python environment

Doesn't arch come with python & gcc out of the box?

No, although `pacman -Syu python base-devel` isn't exactly a burden. But then what? If you're trying to get back to a simple "turn on computer, land in simple programming environment", how does it help that you have python and gcc available? You still have to manage libraries, learn to use a compiler, and all the other joys of modern development. The only thing Arch Linux gained you was a bit simpler OS and maybe better boot times.

Yes it does. When you pacstrap, you include base-devel. From that moment onwards you will have a full programming environment ready to rock and roll on your installation.

Yes, and you have a full operating system and all the joys of modern development. You absolutely do not have anything like a VIC-20 that you can power on and have a basic programming environment 5 seconds later. At best, you turn it on and 5 seconds later have a Python shell, where you can do a certain amount of development before you get to experience the joys of managing libraries and dependencies. Which brings us back to what I perceived as the primary complaint: that there's way too much setup and baggage required just to get to the actual programming part.

You can use python without needing to manage any packages -- you'll have to write most things from scratch, but isn't that the hardware BASIC non-internet experience regardless?

We're moving away from general purpose computing, and Apple is one of the greatest forces in this.

Also, they are a threat to a free market for software, as they regulate their walled garden with arbitrary rules and skim off a lot of value.

I honestly don't understand why a large portion of developers have so much love for Apple. I'm personally a proud owner of a desktop PC with an ASUS motherboard. It serves me fine, and gives me full control over the software installed on it. I'm not a laptop-person but I believe there are many perfectly capable non-Apple laptops out there.


Because for those of us that care about graphics and selling desktop applications, it is mostly Apple, Google or Microsoft platforms.

At the Computer History Museum, I use an IBM 1401 mainframe (1959). When you hit the power button, relays go ch-ch-chunk and it's immediately ready to use. Because it has magnetic core memory, it even has the previous program already in memory, preserved over power-down. Computers have taken many steps backwards as far as startup time. Of course, loading a new program from punch cards is slow, so some things have improved :-)

I've spent surely coming up on years watching and reading all the content you've either created or helped produce. Indeed some things may have improved, but I sure enjoy the heck reading and watching all your exploits with 'legacy computing'!

Watch a repl.it boot. It is the new joy, for children, to see an entire machine appear before their eyes and be able to instantly code away on it.

On the plus side, emacs now starts far faster than most computers.

> I completely understand why things are going the way they are as our computing environment has become ever more hostile.

care to elaborate a bit? what did you understand?

i just can't get my head around this idea that most non-mobile OSes have become such hostile environments...

yes, the population at large only uses their phones and tablets and doesn't care much. but they would be left without any entertainment if it wasn't for those of us who still need decent non-mobile environments.


So, the question is will people get to a point and say enough is enough? And if so, will enough people be saying it for it to make a difference?

It takes less than five seconds for my Windows 10 to go from asleep to ready for work, and that includes logging in with Windows Hello (the fingerprint reading is crazy fast).

I used Linux distros (~5 years of Ubuntu and ~3 years of Arch) before switching to macOS somewhere around 2013-2014. And now, years later, I'm thinking about moving back. But every time I think about this, I start by digging into the current Linux situation, and every time I realise that it is still a horrible system for anything outside of work, especially if you can't really do without decent UI\UX.

Apple's ecosystem is also an issue. iOS + macOS is still much better than anything on the market (no alternatives really).


Switched from macOS this year having used it for about 8 years to first PoP_OS and now Manjaro. Both were great (GNOME environments) and very productive for both development and general use. I really like the streamlined, "get out of your way" UI.

I would say go for it, I'm glad to not be dealing with any of this nonsense, while paying a premium for it.


I've seen both of them, but the "get out of your way" UI is a limited feature. Apps still do not respect the rest of it.

You install this new distro (like Elementary, if it's still alive) and fall in love with the new Finder clone. But then you install a Twitter client, a torrent client and a dozen other everyday apps. And they all look terrible. And feel even worse. People still don't care.

As much as I hate certain things about macOS, I'd still choose it over Manjaro, for example (haven't really tried Pop).

And that's not to mention things like Continuity and Handoff. I can live without being able to copy-paste a token from my phone to my computer, but it is so convenient T_T


Makes sense.. especially if you're still hooked into iOS. I had already given up iPhone couple years earlier so was easier I imagine.

I just use messages.google.com and save it as an app shortcut, and Telegram native app, and both work well. And generally am fine with web apps if a native app doesn't look right. But finding the right native app for the desktop environment can be an issue. The GNOME skinned apps are pretty nice.

And Manjaro has the AUR for plenty of available tools and such. But that's more dev-focused.


Yes, the UI is consistent mostly in terminal and chromeless applications. It really shows how bad the alternative OSes are.

Seriously though with i3, beautiful fonts, so much in the browser it's not bad.


> twitter client, torrent client and a dozen of other everyday apps

I don't install any of that in work machines, and I'd hope most devs don't either, specially if the company owns the device.

If you really need those, why cannot you use the browser?

> continuity and handoff

Why do you need that for development?

Even if your workflow requires it for some strange reason, why don't you use an alternative? There are plenty of ways to pass data between devices.


I think you are missing a point here.

tl;dr: I don't have and don't want to have two PCs for two use cases.

I have my personal macbook that I use for work (development) and everything else. I use it when I have to be at the office or when I want to work outside of my apartment. Needless to say I want my personal computer to have applications that I use. For both - work and ... not work.

>> continuity and handoff

>Why do you need that for development?

I don't. I don't use my computer only for development (see above). But even during development it can sometimes come in handy. For example, when you are working on a service that has SMS auth. Can I just put in the 6 digits by hand? Sure. But having them copied from your phone for you is very convenient.


That is definitely not wise.

Many companies lock down devices for good reason. For starters, to prevent employees doing that and risking the entire company.


I use my work machine for work and my personal equipment for everything else. My iPhone is more standalone than it used to be. I don't see any reason why I'd ever connect my personal phone to my work computer. So I don't see many downsides to making the switch.

Well, I don't have 'work' computer. I have my personal macbook and even more personal iMac.

Obviously, if you work only at the office, or you use your computer only (let's say 90% of the time) for work - then there is no problem.


When I used my personal machines for everything, then I isolated my work from everything else. Remote servers are perfect for this, then you can just ssh in from any machine and do your work.

Linux on the desktop has been my daily driver for years (mainly xfce and gnome).

I use linux to watch movies, create music, play games and everything else. What exactly makes it a "horrible system outside of work" for you?


>Linux on the desktop has been my daily driver for years

Same for me, I've even been a maintainer of one (ONE! lol) AUR package.

>especially if you can't really do without a decent UI\UX.

Outside of a few Electron-based apps and maybe a few native GTK\KDE ones - everything looks like the work of a high schooler. Nobody thinks about the UI\UX.

Compare Things3 with something from the Linux world. Or Bear. Or Twitterrific\Tweetbot.

But go no further than your system's settings: https://imgur.com/a/p0kl7wM - wtf is this? You have a window that takes up 80% of your screen and some huge-ass controls that still take up some 20% of the whole view. Who thought this was a good idea?

Gnome 3 is even worse (I loved gnome2 back in 2009)


PC + WSL + somewhat illicit OS X VM has been a dream for me as a former Mac user.

My mother asked me to help her out with her win 10 installation on her work notebook. This was terrible.

The UI is still inconsistent between apps; sometimes it feels like you are using 3 different OSes from 3 different time periods. But you can get used to that, I guess.

OS settings are still a strange place, seemingly designed to make an average user (or someone who hasn't been using the OS for more than a decade) feel like an idiot.

No, among the Big Three, Windows is the last place I'd look at moving to. At least Linux gives me freedom at the expense of UI\UX. Windows gives me... well, games. I can't think of any other reason to install it over Linux except competitive gaming.


Interesting; it's possible that we have different priorities, but I'm not bothered by UI inconsistencies. I use Chrome, Office, the Adobe suite, a trading application, games, VSCode; they all have different interfaces that I know how to navigate. I agree that the settings can be tough. Half the time you are in "new" stuff and half the time you're pulling up screens from XP. I just google what I need to do, though, and never have trouble getting it done.

> priorities

Not priorities but rather attitude, maybe? (Not sure if that's the best word, but it's the best I can think of with my English; hopefully it doesn't sound offensive or tactless.)

Imagine you have a car. Great engine, relatively comfortable seats, a new set of tires, and a body so ugly you want to ram it into a wall every time you are behind the wheel. It does its job well, but you do not enjoy your time with it.

Being able to enjoy my time with a device or an OS (or any other thing or person for that matter) is what I want. Obviously sometimes the issue is on my part.


Give windows 10 and WSL2 a try. With the new terminal and editor it is really a neat setup. macOS is hard to beat in terms of smoothness and looks but unfortunately it gets more and more clunky for working.


> iOS + macOS is still much better than anything on the market (no alternatives really).

The Windows + Linux combo is way better for all productivity, gaming and development than the mess macOS has become since Jobs passed away.


I'm too much into gaming this days, PS4 is enough for me.

As for the rest I've commented about win10 https://news.ycombinator.com/item?id=23274273 and Linux distros: https://news.ycombinator.com/item?id=23274492

I still find macOS to have the best balance of productivity, development and feel. Windows is still terrible and Linux is just for work.


The issue is that you claimed that "there is no alternative to macOS", but you are talking about your particular use cases (not gaming) and subjective opinions (does not like Win10, does not like Linux).

macOS’ only strength for development is the ability to target iOS. For the majority of developers, a Windows/Linux setup is better because it covers everything. Linux is the best environment for most dev fields. Windows is the best for some of them (graphics, gamedev, C#).


No, I didn't claim this. Unless you are trying to take one phrase out of context and dance on it.

What I said is that macOS is the only OS that provides the needed balance of everything (except gaming). Other platforms are not alternatives because you have to choose - either you get a good dev machine that is not enjoyable to use for other use cases, or you get Windows, which is not enjoyable for the reasons I've described in the other comment. The only two reasons to choose Windows (as I see it) are gaming (and game development maybe) and Windows (often enterprise) development.

To sum it up with an analogy and close the topic: a truck is not an alternative to a volvo s60 just because it is also a car and can do even more than a volvo s60.

PS:

>macOS’ only strength for development

This is your second comment where you for some reason ignore most of my comments and focus just on what suits you.


>this

these


Their "see!" shell script example is a bit rubbish because I get 0.012s, 0.005s on this Mac laptop whilst getting 0.022s, 0.023s on Linux box 1 and 0.006s, 0.006s on Linux box 2.

Changing the filename to test2.sh on the Mac (which should trigger the delay, right?) gets 0.006s, 0.006s.

I don't think the shell scripts are doing what they claim (and wouldn't the second run be faster anyway because of caching?)


If they are caching based on inode, this will not invalidate the cache. Do cp test.sh test2.sh and try again.

I feel like cp might do an APFS CoW and this might still cause problems…

No, even "cp -c" creates a new inode.

Sorry, when I said "changing the filename to test2.sh", I meant in the commands run, not `mv test.sh test2.sh`. i.e. I have both `test.sh` and `test2.sh` in `/tmp` now.

I've been forced to update to this pile of shit because latest iOS requires latest Xcode which in turn requires Catalina. It's a nightmare.

First off the new apps (music, podcasts, etc) are terrible. They killed off iTunes but replaced it with much worse. These apps don't behave like standard macOS apps, the UI is full of inconsistencies and is just so empty. This website has nice examples of the failures of modern Mac OS: https://annoying.technology

For some reason after updating the "new updates" badge was stuck on the system preferences icon (and even on the preference pane itself) despite no updates being available. I ended up having to delete a plist and reboot to fix it, apparently a common issue.

The Mail app will now randomly play the "new mail" sound. I can't confirm it for sure, but I'm assuming it's treating existing, already-read mails as new when they are moved to the trash/archive, or newly created drafts. They screwed up the Mail app, a problem that has been solved for decades. WTF? The worst part is that I see no major changes in there, so why touch the mail client in the first place if you're not even going to give me additional features in exchange?

Xcode was stuck upgrading in the App Store. It would start the process and never make any progress. Cancelling it had no effect. Rebooting cancelled it but the second attempt, while making progress, ended up failing with a generic error message with no actual information. Logs are useless because they're being spammed by all the background processes even during normal operation making it impossible to find anything. Finally the third attempt succeeded.

1Password now takes 5 more seconds to unlock my password database. Somehow this disgrace of an OS slowed down the password hashing process by an order of magnitude.

Switching screen resolutions or connecting to an external screen takes a good 10 seconds of flickering and frozen UI before everything starts working again. This is now actually worse than both Windows and Linux. I dread moving the laptop or touching the USB-C cable (also because USB-C is so brittle) when it's connected to an external monitor out of fear that it'll disconnect/reconnect and I end up in a 30-second cycle of flickering.

I upgraded a couple of days ago, so those are not early bugs. Apple had a year to fix all of this. The Xcode thing might be an isolated issue but there's no excuse for the general performance penalty or the stuck update badge which has many hits on search engines suggesting it's a widespread issue.


> I've been forced to update to this pile of shit because latest iOS requires latest Xcode which in turn requires Catalina. It's a nightmare.

I'm literally halfway there as I type this, Xcode 'installing components'. Having to upgrade essentially everything just to get the right dev tools for the current iOS is madness, feels like buying a new house to fit the new coffeemaker...


I install new versions of Xcode about every two weeks on average. The amount of time it takes to have a new Xcode running is at least an hour: first you download a massive XIP, then the system "verifies" it forever when you try to open it, then it takes forever to unarchive because it's huge, then you need to copy it from ~/Downloads to /Applications which takes another couple of minutes. Then you hit the component installation part… (I think this step has something to do with installing new MobileDevice frameworks?)

Forcibly relocated to a refugee camp tent with leaking water pipes next to your air mattress. But at least everything around you in your tent is white, flat, and material and your coffeemaker works.

> The Mail app will now randomly play the "new mail" sound.

It’s not quite random: it plays the sounds as it gets new email, but then it takes anywhere from a couple of seconds to a minute for the new email to be visible in the UI. Infuriating.

> Xcode was stuck upgrading in the App Store. It would start the process and never make any progress. Cancelling it had no effect. Rebooting cancelled it but the second attempt, while making progress, ended up failing with a generic error message with no actual information.

I just normally kill the store-related daemons when that happens.


Re: downloading Xcode, this page has saved me hours: https://stackoverflow.com/questions/10335747/how-to-download.... It's just a list of direct links to each version of Xcode at apple.com. Mystery why Mac App Store downloads still can't be bulletproof after all these years.

This one drives me nuts. I mean, what in the hell is that download doing that it manages to fail arbitrarily? This is downloading files; how the fuck can it be so complicated and broken?


I actually prefer the App Store approach because that way the majority of my updates are in one place and can be done automatically in the background. The problem is that it used to work fine and they managed to break it.

I usually keep at least one prior release of Xcode on my machine, up to the latest patch for its series. So right now I have 11.5 and 11.4.1. I've hit so many problems with new versions in the past. I wish I could just let MAS handle it for me, but it's just never been an option, aside from the issues it has actually working.

I don't share your issues with Catalina [1], but I have to agree the Podcasts app's UI design is very strange. The primary interface should be the "Episodes" tab.

Just like Twitter's UI, app developers think they know what content is best for you with a 'feed' or 'featured'... they've completely abandoned chronological ordered lists of content unless you click 2-3 buttons.

[1] Catalina has been painless for me, not sure why my experience was different than everyone else


I also upgraded days ago, assuming they would have had time to fix the bugs. However, I can say the USB-C external screen flicker was plaguing me before the upgrade and hasn't gotten worse. Turning off hot corners, oddly, helped, although the problem hasn't gone away.

I've had a similarly painful experience upgrading last week. Though it doesn't seem quite so bad as the posters above, and after making a few fixes most everything is back to normal.

My one remaining serious annoyance is that my external monitor color settings are screwed up and there appears to be no fix. Reds are purple and everything is just a little washed out, which is a shame for a 4k monitor that was beautiful with Mojave.

Strangely, right before the computer restarts, or if booted in safe mode the color starts to look perfect again, but I can't seem to replicate that in normal operation.


I have this issue constantly, even the laptop screen itself will get 'washed out'. The solution is to go to Displays > Colour Profiles and change the profile to any other one and then change back to the default.

> My one remaining serious annoyance is that my external monitor color settings are screwed up

Could it have something to do with Night Shift? Have you tried enabling and disabling it and see if it fixes that?


Our help desk is wise enough to keep existing Mac users on the oldest supported macOS version; but inevitably at some point in the future they'll have to roll out the latest version. That will be the week when I exchange my MacBook for a Windows 10 ThinkPad. A lot of our dev teams have already moved to this setup, using WSL or a VM for Linux if really needed, and it has been really smooth (our help desk is also staying on top of the Active Directory and Windows Update management game).

If WSL turns out to be insufficient, https://multipass.run/ is worth a look.
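
For example, getting an Ubuntu VM up and running is roughly this (a sketch; the instance name and resource sizes are arbitrary):

    multipass launch --name dev --cpus 2 --mem 4G --disk 20G
    multipass shell dev                           # drop into the VM
    multipass mount ~/src dev:/home/ubuntu/src    # optionally share a host directory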

Or, you know, just run Linux outright.

Do you know of anything similar that supports GPU acceleration?

I share almost all of these issues. What drives me super nuts is the multi-display support which NEVER "just works".

I have to disconnect and reconnect USB-C 3 times, turn off the second monitor, switch inputs, restart the €3000 machine twice, or whatever. So annoying; how does this pass QA at all?

Also, don't setup and use multiple users at the same time. That's really messy as well.


Since Steve left us, over time I've witnessed so many issues crop up in the Apple ecosystem, for users/customers and developers, and it's clear that there's nobody to be shit-scared of anymore at Apple.

So many recent things would have pissed him off.

There's no way the 'notch' would have appeared. Nor the fact that the iPhone camera design stopped the device sitting flat on a surface.


if Steve were still alive, iOS would never have been as open as it is today.

They don't give a shit if you're not using an Apple monitor. Witness the ProDisplay, which doesn't even have a power button, and talks to the computer to turn on.

Your experience certainly sounds bad, but none of this is normal; mail sound, USB-C cable brittleness, 1password slowness, all of it works nicely for me.

Have you actually done anything to try and fix these issues? Because this is not typical

I use 1Password and it doesn't take 5 seconds to open. Did I accidentally install Linux or something? Because if it's the OS causing your delay, it would be causing me the same delay.

xcode installs just fine for my entire team. Just did the update myself, worked just fine.

I plug into a dock and undock constantly during the day, and while it could be quicker, 10 seconds and flickering is NOT my experience.

and what the fk are you doing to your connections that you consider usb-c brittle?!?


There's a lot more non-determinism in a modern MacOS install than you imagine. "WFM" doesn't invalidate the anecdote to which you reply. TFA is about putting network requests in system calls ffs.

OP is a typical Apple "You're holding it wrong" reaction. It's never Apple's fault when its OS doesn't work right - it's always the user's fault. Despite the user paying a premium for Apple, or Apple having control over hardware its OS works with.

I've just tried connecting to my external monitor again and 10 seconds is exactly how much it took - no exaggeration there. The internal monitor goes blank for 1 or 2 seconds, then both monitors turn on and it takes another ~8 seconds for the UI to adjust and the windows to be moved to the proper place.

> you consider usb-c brittle?!?

It's much easier to unplug USB-C than HDMI or DisplayPort, for one. USB-C itself is a terrible mess that requires an engineering degree to figure out what's compatible and what's not, and maybe it's just me and I have a shit hub, but I had an external hard drive crash midway through a file transfer due to power issues despite being powered by an Apple charger (the hub and all the peripherals went dark and the laptop stopped charging, then started cycling on and off, where every time the drive tries to start up again it kills everything).


What makes you think that your experience is the typical one? I've had these problems as well and so have a lot of people I've talked to. Obviously that's just more anecdotes and doesn't prove anything, but neither does your comment.

> You can test this by running the following two lines in a terminal:

>

> echo $'#!/bin/sh\necho Hello' > /tmp/test.sh && chmod a+x /tmp/test.sh

> time /tmp/test.sh && time /tmp/test.sh

Am I missing something here?

I just did this, and the timing between the first and second run was barely noticeable -- in fact, the first run was slightly quicker:

> echo $'#!/bin/sh\necho Hello' > /tmp/test.sh && chmod a+x /tmp/test.sh

> time /tmp/test.sh && time /tmp/test.sh

> Hello

> /tmp/test.sh 0.00s user 0.00s system 55% cpu 0.006 total

> Hello

> /tmp/test.sh 0.00s user 0.00s system 41% cpu 0.010 total

This is on macOS 10.15.4.


I had put off upgrading for a long time because nothing good can come from running the latest stable release. They've never been stable. But Apple sort of forced me to update recently since I wanted to back up my phone, which I wanted to do before switching to a new one. I imagined that it would be better after a year. Boy, was I wrong, and I regret doing it very much. It has been a constant pain ever since; Bluetooth is completely broken.

- My external trackpad isn't able to connect, at all. Audio devices require that I kill coreaudiod before connecting, otherwise they just disconnect after a few seconds.

- I can wake the laptop with a bluetooth keyboard, but when it's awake the keyboard stops working. Flipping the switch on the backside of the keyboard lets it reconnect again.

- There are transitions that you cannot disable that makes your laptop feel super slow. In Mojave you could disable them, in Catalina you can't unless you want to run with SIP disabled.

- There's also a super fun bug with mobile hotspot failing to activate, and there's no way for you to just manually connect to your own hotspot; it has to go through this bluetooth activation, even though your mobile hotspot is visible and connectable on all other devices. You end up in a situation where you connect to your friend's hotspot and they connect to yours, since neither of you is able to connect to your own.

I've given up. The quality control at Apple is down the drain, and has been for quite some time. I'm fixing to downgrade to Mojave this weekend; hopefully that will make it more stable. But I'm not holding my breath. To add insult to injury, I'm on my third broken keyboard now. Next time it breaks I might just use the consumer laws and make them refund the laptop so they'll have to take a big loss for creating such a flawed device.


Those all sound like unusual problems. What external hardware and phone are you using?

I've drunk the Kool-Aid. Never drink the Kool-Aid: iPhone 11 Pro, Magic Trackpad and Keyboard, AirPods Pro and Bose QC35. If you search for these issues on the community forums or the web in general you'll see that it's quite common, and it all started with Catalina.

Some brave people that were running the public beta reported these issues to Apple, but we're now four point releases in and still no fix. Apple seem to not even want to acknowledge the issue, they just send users to their FAQ which sums up to "have you rebooted?"

The issues seems to start if you have bluetooth devices connected and your laptop becomes memory constrained. And after that it's in a broken or bricked state it seems. You can do tricks like killing coreaudiod to get audio devices to connect, but trackpad is still broken.


> Another way to reduce the delays is by disabling System Integrity Protection. I say reduce, because I still do get some delays even with SIP disabled, but the system does overall feel much faster, and I would strongly recommend anyone who thinks their system is sluggish to do the same.

The tone of this article reminds me of a passage from the seminal Google+ Platforms Rant:

> Like anything else big and important in life, Accessibility has an evil twin who, jilted by the unbalanced affection displayed by their parents in their youth, has grown into an equally powerful Arch-Nemesis (yes, there's more than one nemesis to accessibility) named Security. And boy howdy are the two ever at odds.

> But I'll argue that Accessibility is actually more important than Security because dialing Accessibility to zero means you have no product at all, whereas dialing Security to zero can still get you a reasonably successful product such as the Playstation Network.

https://gist.github.com/chitchcock/1281611


I made the jump to a System76 Adder WS laptop and pop!os for development after buying the lemon first gen MBP with the terrible keyboard. It was my seventh and possibly last MBP (including powerbooks before it).

I was considering one of the new 13” MBPs but that seems unlikely if injecting network latency into syscalls is the direction things are going.

If you’re not building Mac/iOS apps, find a Linux laptop you can tolerate for development and an iPad Pro for everything else.


Thinking about it, this probably also gives Apple a ~fairly accurate set of usage stats for software.

All they'd need to do - and it's very simple - is count the number of requests of each given hash lookup.

Since they know the hash for each of their own executables, that gives a direct count of "most used" through to "least used" programs.

Not sure if they'd have the hash for third party executables though, to know what the given hash request corresponds to.

If they receive the hash for 3rd party executables when developers sign things, then it seems like Apple is able to generate usage stats for their entire OS and 3rd party app ecosystem.


This seems like a natural outflow of a company design process that (a) prioritizes security highly (b) prioritizes regular users over developers (c) does not allocate sufficient resources to the product to thoroughly cover all the bases (d) is developed by people in North America, for whom the USA === the whole world, and who are used to near-100% seamless internet connectivity with latency < 20ms.

I love macOS, but their software generally has issues with flakey internet connectivity and long latencies - down here in South Africa, ~400ms RTT is not uncommon.


Up until the release of Catalina, I've always upgraded to the latest version of macOS within a month or two. But some of the changes this time are really stopping me from upgrading.

As of Catalina, there's no sane way to install the Nix package manager without losing functionality because macOS now disallows creating new files in the root directory[1]. Nix stores its packages in the /nix directory and it's not possible to migrate without causing major disruptions for existing NixOS and other Linux users. This is too bad, since apart from Nix being a nice package manager, it also provides a sane binary package for Emacs. The Homebrew core/cask versions only provide a limited feature set[2][3].

[1]: https://github.com/NixOS/nix/issues/2925

[2]: https://github.com/Homebrew/homebrew-core/issues/31510

[3]: https://github.com/caldwell/build-emacs/search?q=support+is%...


Brew never had this problem because they chose a sane path without corrupting the system directory. It’s bad design on the part of NixOS, and one could even say the changes in macOS were designed to encourage good/sane design.

> Brew never had this problem because they chose a sane path without corrupting the system directory.

Ha, no. They did the absolute worst thing they could have done and now that they are popular they think they "own" /usr/local. (They used to camp out in /usr, but Apple rightfully put a stop to that real quick when SIP came out.)


This is why, of the two, I prefer Macports.

Also, Macports never phoned home to Google without asking permission or notification, unlike Homebrew.

I'm much happier with their stance on it, too: https://lists.macports.org/pipermail/macports-dev/2019-March...

Happy MacPorts user of just over a year as well, for a variety of reasons I won't get into here but that being one of them.

Very satisfied MacPorts user since 16 years. I really don’t get why brew is a thing...

Nix living at a predefined path is integral to how it works. An executable does not dynamically link to a generic "ncurses" but (via rpath) links to a specific compiled version of ncurses (such as /nix/store/81rb87agmp9cbsvg2xm2n4kp9c6309lv-ncurses-6.2). This is the root of all the benefits of Nix such as being able to install things side-by-side that use different versions of things or upgrade and rollback without problems.

That predefined path being the same (/nix) across all users of nixpkgs is required to be able to share binary packages (you could perhaps build everything from source, but that's a lot of time, more time even than something like gentoo because package updates require all dependencies to be rebuilt as well).

You can call it an insane choice or bad design, but there aren't a whole lot of options here. Could Nix move to a different path? Maybe, but is there a path that all operating systems could abide by? If the new path stops working in some future OS, will it still be insane and bad design? Again, maybe, but I happen to love Nix and I use it on macOS because it makes my life easier (and I'm on macOS for work reasons). I'm willing to bend and do a lot of legwork to be able to use Nix, and I'm upset with the Catalina situation.

Can follow some discussion here https://github.com/NixOS/nix/issues/2925


Unix OS variants have pretty standard paths like /opt or /usr.

Going with /nix was basically the best way to run into trouble.


It could have been /opt/nix and been compliant with FHS, and kept all the benefits you mention.

Hindsight is 20/20. It wasn't /opt/nix for reasons I do not know. In the context of NixOS, there's little reason to consider FHS. Only when using Nixpkgs outside of NixOS does the /nix choice look poor. I don't know which came first.

It's not really a desirable feature, but a limitation of the tools it has to work with, where e.g. specifying an rpath of $NIXROOT/store is not possible.

That's an interesting point. But it's not just rpaths, there are many references to things within the nix store. I suspect it would quite difficult to make them bound at runtime or something, but would be nice if possible.

The Nix abides.

> Brew never had this problem because they chose a sane path

How so? Taking over /usr/local as Homebrew does is guaranteed to cause conflict. Using a dedicated file hierarchy as Nix does is quite reasonable and there's nothing magical about rooting it at /.


How does it "take over" /usr/local? You can still `./configure --prefix=/usr/local` on your own software and things continue to work as long as you're not installing the same thing that brew is.

> How does it "take over" /usr/local?

Because it shoves all its shit there without asking.

Macports actually did it correctly and IME never had any issue.


Installing several versions of the same piece of software is central to Nix.

While locking all needed versions for a specific application provides stability, I can't believe it comes without a large increase in complexity, especially in connection with security upgrades, which trigger other libraries to need an update as well.


> Brew never had this problem because they chose a sane path without corrupting the system directory.

That's a hilarious assertion. Back in the day, brew's takeover of /usr/local caused OSX upgrades to get stuck for hours on end (some folks reported more than 12h).


Writing file to /nix shouldn't corrupt the system directory either. What exactly do you mean by "bad design"?

Exactly. What's more, if we're talking about user hostility, how hostile is it when software doesn't provide a configurable install dir? It's literally a single damn variable!!

> doesn't provide a configurable install dir

This is completely false. You can change the installation directory at the cost of losing binary packages. When you change it, packages would be built from source instead. This is what Homebrew does too.

What's more, I don't think many package managers provide this option. Not apt, not yum.


Homebrew itself recommends you not do this, and while it is getting better at working in this case you will still run into issues if you try to do certain things.

[Spack](https://spack.io) uses patchelf and additional tooling to relocate its binary packages to other paths. It generally works, although one has to special-case things that burn their install directory into their builds (e.g. Perl).

It's a single variable which many parts of the system need to know about, some of which have basically no way to accept a variable. You can change the root directory in Nix, but that invalidates all binary packages, in part because rpath is not at all configurable.

This is not the case. The problem is that caching is based on the default path which is /nix. So they would have to rebuild all caches.

Maybe they shouldn't have built it that way then. In my experience nix is nothing but a huge pain in the ass if you don't buy fully into the system, weird design decisions and all

For me it's Aperture. I like the interface better than Lightroom, and I don't want to pay a monthly fee to have access to my photo library, which I only add to once in a while. It's a shame because it's a great piece of software, and even the UI doesn't feel dated, but I just won't be able to run it if I upgrade.

For what it's worth, Aperture, iPhoto and iTunes can be made to run in Catalina. People figured out last year what hacks were needed and there is a tool called Retroactive that will automate the steps:

https://github.com/cormiertyshawn895/Retroactive

Got some discussion on HN [1] about 3 months ago, amongst other places; a cool bit of sleuthing in the vein of efforts to get versions of macOS running on Macs older than officially supported. Personally I'm somewhat resigned to needing VMs to run certain older software, with a big one for me being Creative Suite CS6. Like you, I have no interest in buying into Adobe's subscription lock-in. But it's nice that some stuff can keep running without that layer for a while longer. Hopefully it'll still be possible in 10.16.

----

1: https://news.ycombinator.com/item?id=22454069


For a modern, subscription-less alternative to CS6, look at Serif's Affinity suite (no direct Lightroom equivalent there, though).

There's a fix tool/hack to run Aperture on Catalina, called Retroactive.

https://github.com/cormiertyshawn895/Retroactive

It also works for iTunes and iPhoto. Sadly it won't fix any of the other known Catalina issues, of course! ;)


Might want to look at Capture1 at this point.

The UI is way worse than either Aperture or Lightroom, but the editing is powerful, and you can download the full version for free if you have a Fuji or Sony camera, IIRC.

It’s a capped version with some missing functionality (like layers), but it’s still a great piece of software.

IMHO the original choice of the path seems incredibly ill-advised and the main burden lies with the original developers.

sometimes old errors and mistakes come back and bite


If you truly want to be "cross-platform" with long-term future proofing in mind, `/nix` is (edit: was) probably the most stable choice.

I get it, people are sensitive about the root directory. "But it's where ALL the stuff lives!". So yeah, try not to ever run 'rm -rf /' (even though this is blocked in most cases now).

But why make it completely inaccessible for creating files/directories in? So much hand-holding for people to make it impossible for a user to ever make a mistake just locks down the ecosystem more, forcing developers to implement proprietary hacks that don't scale properly.

`/var/opt/nix` and `/opt/nix` are options, sure. But you cannot guarantee that those directories will exist on every platform. And if you have to create them, why is this better than `/nix`?


If you have to `mkdir /nix`, what's wrong with `mkdir -p /opt/nix`? I don't see how one is "more stable" than the other. The big difference between the two is the later conforms to convention while the former doesn't.

`mkdir -p /opt/nix` assumes that there is a convention, and that this is the correct convention - which may not be the case for every situation, and would result in creating unnecessary nested directories.

You could make a more sophisticated installation script that attempts to install Nix into conventional locations depending on the specific operating system - or user input - but if you want a simple catch-all, simple installation script `/nix` was a perfect cross-platform installation location, until now.


> `mkdir -p /opt/nix` assumes that there is a convention

A correct assumption on virtually all relevant extant systems...

> which may not be the case for every situation

In the supposed scenario where the assumption isn't correct, the downside of /opt/nix vs /nix is basically insignificant. What's the overhead of one level of directory nesting, a single extra inode? Big whoop.


It only seems that way now because some platforms have begun locking down their root directories. Nix, by design, doesn't conform to the FHS way of organizing directories so it made perfect sense to use /nix when the decision was originally made.

> Nix, by design, doesn't conform to the FHS way of organizing directories

That's why /opt/ exists. What's wrong with /opt/nix/ ? Or /var/opt/nix/ for read-write files that need not be a fixed part of any package installation (the Unix equivalent of system-wide "Application Data").


Or NIX_PATH, or ~/.nix, et c.

I am infinitely tired of this node_modules “we know better than you, it isn’t configurable and will never be configurable so stop asking” hubris. It’s not open source entitlement to say that a maintainer with that attitude is bad and wrong.

My homebrew is installed to ~/Library/Homebrew and while they claim it’s unsupported, it works, and if it stops working, then I’ll stop using Homebrew.

I don’t trust software that demands root when it doesn’t need it.
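
For reference, that kind of relocated install is roughly the "untar anywhere" approach Homebrew's docs describe as unsupported; the prefix below just mirrors the comment above, so pick your own:

    git clone https://github.com/Homebrew/brew ~/Library/Homebrew
    export PATH="$HOME/Library/Homebrew/bin:$PATH"
    brew doctor   # expect warnings about the non-default prefix; most formulae will build from source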


You can use an alternate path with Nix. When you choose to do that, you will have to build all packages from source instead of installing prebuilt binaries.

Nix isn't designed as an application. It's designed as a system package manager.

It's also an application, it just happens to manage other applications

> Nix, by design, doesn't conform to the FHS way of organizing directories so it made perfect sense to use /nix when the decision was originally made.

Refusing to conform to the FHS doesn't mean their decision made sense; refusing to conform to the FHS means they made a bad decision in the past and everything progressed from there.

It doesn't 'seem that way now because some platforms have begun locking down their root directories'; it seems that way because creating arbitrary directories in / is a terrible idea, and has been at least since I started using UNIX/Linux systems in the 90's.

Fact is, they made a bad design choice, and now it's come back to bite them (and their users) in the ass.


Not conforming to the FHS is what makes Nix possible. You won't get Nix's reproducibility without it.

Can you explain the reasoning here? I can see it being _easier_ than doing it the right way but have trouble coming up with a scenario where it makes it _impossible_.

I'm probably missing something, and please let me know if so and why, but it sounds like a chroot could solve path reproducibility.

/opt/nix is FHS compliant and would work fine.

> creating arbitrary directories in / is a terrible idea, and has been at least since I started using UNIX/Linux systems in the 90's

Why?


Because the root directory might be on a very small partition (perhaps only a few hundred megabytes), while other mount points like /usr might have more space; the only things which should be in / are the things which are necessary to mount the other filesystems (perhaps through the network using NFS).

(Yes, nowadays hard disks are much larger, we have things like initrd, and we now make /bin and /sbin symlinks to within /usr, but the parent comment did mention the 90s...)


What is special about /nix that would make it better suited elsewhere? Aesthetic? Clutter? I don't think there are any technical reasons why the root of the filesystem is important. The /nix folder is just another folder with some ACLs/Permissions (however OSX works, idk)

Historically / has been reserved for the use of the Unix system (the distribution that packaged it, not the computer you're running on). Local programs were installed to /usr/local. Packages installing themselves in /packagename are making your root directory like Windows' Start Menu. Furthermore, if your, say, Physics department has 20 machines, your sysadmin would install everything on an NFS share, which probably got mounted at /opt. Your sysadmin definitely did not want to mount /this, /that, /theother.

So while /nix is no problem from the filesystem driver's point of view, it is completely flouting established Unix norms.


Everything not specified in the FHS is reserved for use by the administrator. The FHS isn’t all-encompassing. It’s a contract about what directories the OS won't touch.

Generally you’re right and if you make a piece of software not follow the FHS you better have good reason. Nix, I think, makes a solid case since existing outside of the FHS is the only safe way to not conflict with every package manager.


You're not _wrong_, but I'm not sure that those reasons really mean anything. They were all new and arbitrary at some point and we're a long way from Unix.

> it is completely flouting established Unix norms

Also, _nix_ is completely flouting established Unix norms in more ways than one. /nix is where all of the nix stores go, the immutable bundles that get pieced together to form all of the stuff you install. It could go in /var/nix or wherever, it doesn't really matter.

But putting it in /nix is kind of nice in that it's so different from the purposes of the rest of the filesystem. /nix doesn't behave like the rest of a normal Linux system, so it is separate. You can still symlink from /usr/local/bin/foo -> /nix/store/abcdef-1.1.0/bin/foo so the rest of your system has the same expectations.


Why are you apologizing for Apple? I too have always had my own path in / (/u for my NFS mounted homes). I guess I just learned of yet another reason I will never go to Catalina (or buy any more macOS hardware).

I second this. Any tool which creates its own directory in the filesystem root (and cannot run from any other location) is inherently doing it wrong by any measure.

By your own words, tools like apt, yum, and pacman would all be "doing it wrong." It's just wrong to blindly apply any rule without considering the various presumptions that justify it. Specifically, the general advice of not creating directories in the filesystem root mainly applies to individual packages and is inadequate for system-level package managers.

That's not necessarily true because it ensures that you own an entire namespace separate from the OS install, which in Nix's case makes a lot of design sense given its use case(s).

What should the default nix store path have been then?

In my very limited (I don't use nix) opinion, the default of /nix isn't an issue, but rather:

> and it's not possible to migrate without causing major disruptions for existing NixOS and other Linux users.

Software that can't be re-parented without breaking is destined to create problems for users... eventually.


Unfortunately, what you're asking for is fundamentally impossible with binary package managers.

What it was: `/nix`. Or maybe `/notroot/nix` to make people happy.

"The root directory is untouchable" is a new fear-based imperative that would have been hard to predict.


The obvious option would be /opt/nix, /usr/local/nix, or something to that effect. /nix is an obviously bad choice, and now we're starting to see why.

The problem is that /opt/nix isn’t safe from the OS and Nix is explicitly software that doesn’t follow the FHS so it makes no sense to install it in a prefix.

/opt/local/nix is probably safe.


/usr/local/something

/usr/local is a prefix and contains local software that follows the FHS (i.e. libs in lib/, docs in doc/ binaries in bin/). Nix explicitly doesn’t do that so it would be inappropriate to install it there.

You can install Nix without losing functionality, it’s just annoying because it requires setting up a separate volume, and if you want it encrypted and available before the GUI session restores then you have to use a login script to force-mount it. Personally I just keep my Nix volume unencrypted because I don’t build any proprietary software in it and I don’t care if someone can see what I have installed.

I really wish Apple would give third parties the ability to create firmlinks (or at least give Nix one), or barring that, give us a sane way to mount encrypted volumes at the same time that the system volume is unlocked.


You can create permanent symlinks inside / by creating a file called /etc/synthetic.conf - 'man synthetic.conf' has the full documentation. This sounds like it would solve the issue?
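
For reference, a minimal sketch of what an entry looks like for the /nix case (syntax per man synthetic.conf; run as root and reboot for it to take effect):

    # a bare name creates an empty directory at the root, usable as a mount point;
    # a name plus a tab-separated target creates a symlink instead (see the man page)
    echo 'nix' | sudo tee -a /etc/synthetic.conf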

It's funny, I just had to do this a few days ago.

This comment has worked for me on two machines: https://github.com/NixOS/nix/issues/2925#issuecomment-539570...


There's just so many problems with that approach:

1. You have to create a separate volume just to install a package manager, which is a poor user experience

2. A separate volume means FileVault won't work out of the box

3. The volume can be mounted only after GUI apps are brought up

4. Restoring after sleep might fail because of 3

All of these are mentioned in the Github issue, but it might be hard to find because it requires so many clicks and scrolling to view the whole thread.


1 — Sure. But Nix isn't exactly the most friendly package manager to begin with. I wouldn't recommend it if you're not comfortable creating volumes.

2 — Could you explain? Mine is on and working, I didn't need to do anything else.

3 — Is this if you have login items that need nix to be available? I don't have this so I haven't noticed.

4 — I've never run into this, but again I might just not use Nix for the kind of things that would cause issues.


It's not that installing Nix is impossible on macOS, it's just that it has some hard-to-ignore limitations now.

1. Having to create a volume when a plain old directory should suffice is insane. It's creating a hassle for no good reason for users.

2. /nix would be unencrypted by default if kept in a separate volume. There's also the problem of how to unlock it upon boot.

3. Login items is a very common use case so not supporting it would be problematic for many users.

4. Unreliable sleep is an even bigger problem.


I believe Nix actually picks a volume so that it can be encrypted, and it uses one of the many ways to run a script before login (some of which still happen to work) to decrypt it?

Thanks for explaining! It sounds like I am just lucky with my set up not to run into issues. Hopefully they come up with a solution soon.

I understand the purpose of notarization, but I feel like they could've come up with a much better solution to this. A network call __every time__ someone runs an executable is not acceptable. But for the cases where the user is offline, Apple must keep a list of notarized apps on the machine...

Nearly every article I see about macOS or Windows these days further confirms to me that switching entirely to Linux was the right call. Maybe 2020 will be the year of the Linux Desktop by default.

any day now...

Did apple make any comments on this? I haven't been able to find any public responses from them. I'm really interested on reading their side of things. This is quite jarring, it's hard to believe it is a thing. However, as I read through tests people did, it seems just as bad as it sounds.

I was actually getting a mac mini now that I'm working from home (I thought I'd get better integration with some of the company's wfh infrastructure while still having a unixy environment, so a win/win situation), but I cancelled the purchase after reading this. I get that you can jump some hoops and set some apple specific flags to things so that it works better, but the reason I wanted a mac was to make things easier and not having to look into obscure APIs and features to get simple things working. I was really looking forward to that, but I don't feel that sort of investment will be justified with issues like this in their OS :/


This is frankly hyperbole. A single checkbox in a GUI menu that is routinely accessed for managing other system-wide sandbox privileges isn't exactly obscure. It also isn't some difficult, inconvenient task. It needs to be done once.

From what I've read it's not available by default and you need to run some commands (which seem to be hard to google). And that solves only part of the problem; the article had other examples that may be harder to solve. It seems like, if your internet connection is not great, you're going to have a bad experience.
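
For what it's worth, the command that has been circulating for this - reportedly the same switch the GUI checkbox flips, so treat it as an assumption and verify afterwards under System Preferences > Security & Privacy > Privacy > Developer Tools - is:

    sudo spctl developer-mode enable-terminal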

This is also the case with APFS on rotational disk drives. Why does APFS perform so much worse on HDD vs SSD? Will Apple fix it? https://bombich.com/blog/2019/09/12/analysis-apfs-enumeratio...

APFS was not designed for spinning disks. No, they won't fix it; because they don't even sell a computer that ships with only a spinning disk (asterisk on the iMac's hybrid drive). HFS+ is still available, just use it if you need to format a spinning disk. I think this is a very different type of issue, with much more reasonable trade-offs.

Perhaps related: "How come someone notarized my app?"[0]

It mentions that anyone with an apple developer ID can notarize a qualifying app and submit this notary to the Apple Notary Service. However, the proof of notarization—the notarization ticket—might not be stapled to the application.

In the case of no stapled ticket, Catalina contacts the notary service to see whether a ticket exists. If so, the app is good to go.

[0]: https://eclecticlight.co/2020/05/22/how-come-someone-notariz...

EDIT. More informative link here[1]. It specifically outlines what happens on first run of an app. (and there's a great diagram if you scroll down)

[1]: https://eclecticlight.co/2020/01/27/what-could-possibly-go-w...


I feel like the continual development of MacOS is making it worse and worse. Similar to Windows, where every extra feature causes more and more complications.

But alas the 1000s of engineers gotta be put to work somehow.


There are significantly fewer than 1000 engineers working on macOS.

Increasingly I find macOS only to be tolerable with iCloud (and Siri, location, suggestions, bug reporting, et c) entirely disabled, and Little Snitch’s built in/automatic whitelisting for Apple services disabled, and most of the background processes entirely denied networking access. It phones home constantly even with all of the services disabled/opted out.

It’s indeed a huge mess, from a privacy standpoint too, not just a performance one. It’s sad also to lose things like AirPlay or iMessage as collateral damage in the process. :/

I just can’t tolerate a machine that hits the network hundreds of times a day when doing normal computing tasks that do not involve the network. They even tolerate this sort of spyware in App Store apps, too.

Is it too much to ask for a polished workstation OS that lets me boot and edit a local text file of notes and save and quit without notifying 4 different parties that I did so?
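
If you want to watch it happen, a couple of stock tools will show which processes are opening connections (a rough sketch, not a complete audit):

    sudo lsof -i -n -P | grep -v 127.0.0.1   # open sockets per process, minus loopback
    nettop -m tcp                            # live per-process TCP activity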


and there are a lot of background processes.

running just firefox and terminal, ps -ef|wc -l is 198

and many of them have no reason to be on my system.


I run a pihole at home, which has intermittent issues. When macOS can't resolve a hostname, almost every user-facing UI grinds to a halt. It's truly bizarre. Applications won't launch, menus don't respond, etc. Feels like a decade ago when your spinning disk was going bad. Not cute :(

If it checks with Apple servers every time you execute a new binary, what happens if you don't have an Internet connection? Are you just unable to run new code?

> One way to solve the delays is to disable your internet connection.

I think it just skips the checks if the internet isn't available. But doesn't that kind of defeat the point of notarization?


Hopefully you're also less likely to get new unsafe binaries when disconnected. But it's all still awful.

The linked website isn't loading, so I don't know what it says, but: if we're talking about notarization, you can "staple" the notarization to a .app or a .pkg, which means you don't have to do the internet lookup at all, and you can run the apps without having access to the internet. I'm not sure about the technical details, but I would assume you add some sort of signature that's like "This .app with hash X has been notarized and it's fine" signed by Apple's secret key.

EDIT: how to staple: https://developer.apple.com/documentation/xcode/notarizing_m...
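
The stapling step itself is a one-liner once notarization has gone through (MyApp.app is a placeholder here):

    xcrun stapler staple MyApp.app      # attach the notarization ticket to the bundle
    xcrun stapler validate MyApp.app    # confirm the ticket is present
    spctl -a -vv MyApp.app              # Gatekeeper assessment; should now pass even offline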


That doesn't help with self-written code, however, since you can't notarize without internet either.

The article says "One way to solve the delays is to disable your internet connection" so I assume it just doesn't bother with notarization when you do that.

Which makes a mockery of the whole security angle - how can this be utterly essential for security while connected and then just tossed aside as optional as soon as you exit Wifi range? It can't be both.

> If it checks with Apple servers every time you execute a new binary, what happens if you don't have an Internet connection? Are you just unable to run new code?

It waits 5 seconds while trying to connect, and then it gives up and caches the program as un-notarized, allowing it to run faster on later executions.

Notice that notarization seems to be disabled if the network is disabled from within the OS. To observe the 5 second delay you need to cut the connection outside (e.g., on your router), while the mac still thinks it is connected. I observed it by running catalina inside a virtualbox, and disabling its network.


> With internet enabled, it was reproducible by relaunching the application and triggering the code that called SecKeychainFindGenericPassword.

I have issues with a lot of APIs, but SecKeychain has got to be one of the worst. I don't think it's gotten any love in many, many years. Unlike literally every other Apple API that a Macintosh application might reasonably use, you call its functions (even from Swift) by passing strings as (length:UInt32, data:UnsafePointer<Int8>?) pairs, and getting results out by passing (length:UnsafeMutablePointer<UInt32>?, data:UnsafeMutablePointer<UnsafeMutableRawPointer?>?) pairs, and checking OSStatus return values. Every aspect of it is painful.

In Apple's "Documentation Archive" there's three "Sample Code" downloads related to Keychain. The newest one is for TouchID, and the oldest is for PowerPC. This is an area of the OS that doesn't get much attention.

> This issue has been reported to Apple and assigned FB7679198. Apple has responded that applications should not use this function, though the documentation for SecKeychainFindGenericPassword does not state that it is deprecated

I see that it's now grouped in a section of the docs called "Legacy Password Storage", but not actually "deprecated". Strange. That means you won't get any indication of its non-current status from Xcode, or even reading the release notes.

I like that there's a newer (and presumably less awful) interface. I don't look forward to having to rewrite/retest that corner of my application. Seeing all the CFString/CFDictionary casting and OSStatus checking with the new functions, it still doesn't look all that great.


What a ridiculous feature. The people involved in making this decision ought to be fired.

I'm showing 20-200ms longer on first run of the exec. Modified the test script a bit to show that it doesn't happen again if you modify the executable's contents.

    echo $'#!/bin/sh\necho Hello' > /tmp/test.sh && \
    chmod a+x /tmp/test.sh && \
    time /tmp/test.sh && \
    time /tmp/test.sh && \
    echo 'echo Hello2' >> /tmp/test.sh && \
    time /tmp/test.sh

Another slight modification to make this show the effect every time:

    f=$(mktemp) && \
    echo $'#!/bin/sh\necho Hello' > $f && \
    chmod a+x $f && \
    time $f && \
    time $f && \
    echo 'echo Hello2' >> $f && \
    time $f

On my system:

    Hello

    real 0m0.131s
    user 0m0.001s
    sys 0m0.002s
    Hello

    real 0m0.004s
    user 0m0.001s
    sys 0m0.002s
    Hello
    Hello2

    real 0m0.004s
    user 0m0.001s
    sys 0m0.002s

I got hit by this yesterday: borgbackup (installed using Homebrew) had a 5-second delay on every invocation.

Setting Terminal as a Developer Tool in Security&Privacy fixed it


One frustrating experience on the Mac is keyboard shortcuts.

Yes, they have polished the GUI, which makes it easy to navigate by mouse. But, when you need to work in speed mode, then you reach for the keyboard shortcuts.

The problem is that there are plenty - too many sometimes - and they are often inconsistent between applications.

And yes, the Mac has a keyboard shortcut assignment tool, but it often doesn’t work correctly.

I must give credit to Microsoft here. They at least seem to have perfected most of the common keyboard shortcuts.

Some good features about Windows shortcuts.

1. Alt-Spacebar to open the windows control menu, to move, minimize, maximize, or close the window.

2. Alt combinations are used to control the active Window application itself.

3. Alt-F4 to close the window. But I would have preferred Alt-Escape instead.

4. Control key for shortcuts inside the application. Like, Ctrl-C for copy. O for open. P for print. Etc.

5. Then the Windows key, to control Operating System level shortcuts. Like Win-M to minimize all windows. Win-L to lock the computer. Win-R to launch a command.

Some features I would like: Win-Spacebar to open a command search, similar to Win-R but with the ability to list all possible commands - similar to activating the command palette in VSCode.

And Ctrl-Spacebar, to activate keyboard commands for the active window. Kinda like Emacs, where I can run macros on it, like highlighting the words that I want, and execute something on it, like changing to uppercase, or converting to comma separated, or whatever else is needed.


this has always been the case. the underlined shortcuts in menus are a godsend in non-osx OSes. I am still astonished at the hostility of macos when it comes to Yes/No dialogs - you usually can't hit Y or N! This changed at some point after snow leopard. If I could run HDCP on my old macbook, I'd still be using snow leopard. aesthetically, they have made no innovations of use since then.

This seems to be, once again, a case of user experience being degraded due to lack of attention, testing and measurement of impact by security engineers.

Once you have security engineers, security is no longer the responsibility of all engineers equally, and you've already lost at security.

I have been running OpenBSD for all my dev work in a VM for quite some time now.

This just makes me wanna start using it for more things besides dev work :(


Windows + VSCode + WSL2 + Terminal + PowerToys = Just one love, never looked back.

The only problem I have with that is "Windows"

I'm currently trying to figure out how to emulate windows from a *nix distribution using qemu. I plan to use this as a "home lab" (k8s cluster or just plain fucking around), but still retain the ability to play an occasional AAA game.
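
The basic shape of that setup, in case it's useful (a sketch; image names, sizes and the installer ISO are placeholders, and serious gaming usually means adding GPU passthrough on top of this):

    qemu-img create -f qcow2 win10.qcow2 80G        # backing disk for the guest
    qemu-system-x86_64 -enable-kvm -cpu host -smp 4 -m 8G \
      -drive file=win10.qcow2,format=qcow2 \
      -cdrom Win10.iso -boot d                      # boot the installer on first run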


You don't need to emulate Windows if you have Windows as the parent host ;). Windows with WSL is the best Linux desktop I've had in the past 20 years.

Just did a test using the command the author listed. Benchmarked on Arch Linux and got 0.00s. I then did the same test on a MacBook Pro and got 0.332s. I feel like that's pretty bad. 0.332s might sound inconsequential, but that's just for a single echo command. I would imagine it only gets worse as your workflow touches more new executables.
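
If you want to see the cost repeat, a quick sketch (file names are arbitrary) is to time a handful of freshly created scripts in a row; each brand-new file pays the first-run delay once, and re-running the same file is fast:

  for i in 1 2 3; do
    f=$(mktemp /tmp/bench-XXXXXX)
    printf '#!/bin/sh\necho hi\n' > "$f" && chmod +x "$f"
    time "$f"    # first run of each new file shows the overhead
  done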

The weird thing is that prices of Windows laptops have skyrocketed with the shortages. New MBPs are cheaper than X1 Carbons and XPSs with 10th-gen chips.

New MBP with a 10th gen chip is a $600 upgrade over the base model with an 8th gen chip.

Every other week Lenovo has some crazy 25-50% off coupon for their laptops.

How do people put up with the complete brokenness in commercial OSes? Is this really better than having to edit the occasional config file?

Personally, I know which process to kill when things go south. It's not easy to acquire this information, though.

Last year I was preaching that if you can't develop in a submarine or a space station (or on the metro), from a fresh git clone to your next git push, then your development environment is broken and you should burn it to the ground and start over.

It'll be interesting to see how much power we developers will let Apple take from us before we jump the garden wall.


Interestingly, I hear that iPads cannot be used on the ISS because apps will stop launching if you disconnect from Apple's servers for too long.

Src?

I'm getting 10-15 minute beach-ball-of-death freezes on a month-old MBP 16" that recur until I hard reboot. I can't open the 'force quit applications' window during this, nor the Apple menu. I can't reboot or shut down from the CLI or otherwise. Some apps lose network connections, some don't. The entire system becomes unusable and requires a hard reboot.

I think it's related to IntelliJ IDEA and similar IDEs somehow, but profiling those shows the slowdown is not in their apps but in the OS. It won't start with anything plugged into the USB ports, not even just power.

I've been trying various things, but if it doesn't go away I will return this when the Apple store here reopens. The only good thing about this coronavirus is that I've had more than 14 days to test this and find out what a clusterfuck this OS is, even on a $4400 brand-new MBP. Do they even test anything anymore?

Do you think developers make up a significant portion of Mac buyers? I think it's possible, but I'm not sure.

I am pretty sure the laptop market has been shrinking generally (as more people have a phone but no laptop). And most developers I know have macs. They probably don't want to make the OS significantly worse for developers...


After this, you can be sure the developer interest will go down even further

This is why having a vibrant open-source ecosystem is so important. Firstly, the needs of users are the main priority (as opposed to profit or liability minimization or advertising...), and secondly, users have so many options to pick from. For example, if you don't like systemd, you are free to pick an OS without it.

I don't want to send over the Internet a record of every program I run. Is there a way to opt-out completely?

Buy a machine not from Apple.

Unplug from the internet.

I used to use a Mac pretty heavily for design and audio work, but around 10.14, because of Apple changing the way they do things, I switched entirely to Windows for that, and Linux for everything else. I just don't want to deal with the nonsense described in this post, among several other things.

“ Another way to reduce the delays is by disabling System Integrity Protection. I say reduce, because I still do get some delays even with SIP disabled, but the system does overall feel much faster, and I would strongly recommend anyone who thinks their system is sluggish to do the same.”

Nope.


"Another way to reduce the delays is by disabling System Integrity Protection."

Definitely agree on this one here - I've noticed a big speed improvement after relaxing SIP's debugging restriction with "csrutil enable --without debug" while in Recovery mode.

I should note that the main reason I disable SIP isn't for speed, but to install the yabai window manager to make Aqua far more useful as a developer. I wrote a recent blog post on this, actually (https://triosdevelopers.com/jason.eckert/blog/Entries/2020/5...).
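
For anyone unsure what their machine currently has enabled, checking is harmless (a small sketch; the status command runs from a normal boot, while changes have to be made from Recovery):

  csrutil status
  # from Recovery (Cmd-R at boot), e.g. to keep SIP but drop only the debugging restriction:
  # csrutil enable --without debug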


I believe disabling System Integrity Protection actually carries over to everything you boot off the computer.

> [...] it appears that low-level system API such as exec and getxattr now do synchronous network activity before returning to the caller.

WTAF. If this is really true, this is a reason for me to leave the platform for good. This is just unacceptable in so many ways.


> a degraded user experience, as the first time a user runs a new executable, Apple delays execution while waiting for a reply from their server.

Wow, this is extremely infuriating! I just ran the "hello world" test script with the network connection disabled and it took 5 seconds to run!

     $ echo $'#!/bin/sh\necho Hello' > /tmp/test.sh && chmod a+x /tmp/test.sh
     $ time /tmp/test.sh && time /tmp/test.sh
     Hello
     /tmp/test.sh  0.00s user 0.00s system 0% cpu 4.991 total
     Hello
     /tmp/test.sh  0.00s user 0.00s system 77% cpu 0.005 total

I'm so confused about the comments here.

There are a bunch of people who can't reproduce the slowness at all, but they're nearly all downvoted, or you have to wade through hundreds of comments to get to them.

The majority of comments are just dumping on Macs, nothing whatsoever to do with the content of the article, and seem to be blindly assuming it's true.

And I can't seem to find any substantive discussion of whether this is actually real or not, or just some weird bug on the author's machine.

I don't see any evidence that Catalina is "slow by design", just a single anecdote from the author. I was definitely hoping for some more substantive critique/discussion...


OP linked validated bug reports, one of which Apple responded to with "by design", which is where OP derived the title.

The downvotes are because it seems pretty clear that the people who don't experience it have long-lived installs of their OS and likely have grandfathered or disabled security settings. There are a lot of people saying it's pretty easy to replicate with a new OS.

And it is, I just did it. Did you?


No they didn't, there's no link. They said it's "FB7674490" but Googling that reveals nothing, so I can't read it.

I don't know what the bug report said, or what specifically was by design. Surely "the entire machine freeze for 1-2 seconds every 10th minute, not to mention everything just being sluggish" is not by design.

And I was unable to replicate it (I was one of the comments that got downvoted), although I don't have the luxury of trying a fresh OS. I haven't disabled any security settings, and I don't know what would have been grandfathered -- that's not mentioned anywhere in the article as a factor.

So that's what's bothering me -- the assumption that contradictory evidence isn't valid while the original post somehow is, and no discussion around that, or what tradeoffs there might be.

Now, finally, there are actually some substantive comments from people testing it. There wasn't before though, and it's still unclear as to whether this really is bad design, a wise tradeoff, or if the author's machine has something else going on. Because their experience of a frustratingly slow Mac is just not the norm at all.


Did you run the test yourself? Why do you assume people are blindly assuming it's true? For me first run was 0.5s, second run was 0.004s, so there's definitely something going on.

I did. It got downvoted with no replies. I don't have any security settings changed or anything. First and second run were both around 0.005s.

That's why I wrote this new comment, in the hopes that maybe it would be seen.


Weird. I just noticed that the difference was only there the very first time I ran that test. After that, the second run was only about twice as fast as the first, which could easily be explained by filesystem / caching things.

> There are a bunch of people who can't reproduce the slowness at all, but nearly all downvoted or you have to wade through 100's of comments to get to them.

It's possible that they have certain security features disabled.

> The majority of comments are just dumping on Macs, nothing whatsoever to do with the content of the article, and seem to be blindly assuming it's true.

Welcome to Hacker News…this is common on any discussion on any topic, especially one that many people can understand in some way.


I've noticed the negativity toward macOS. There may be reasons for it, I don't know. I'm pretty happy with it, and I've started skipping some discussions because of the number of comments that lack any curiosity or worthwhile discussion.

It's not just macOS. What you really want is a topic that most commenters have no background knowledge or preconceptions about, and you have to make sure that you can't link to one in any way whatsoever. The latter is a little hard to do, because people will cling to the most tenuous of relationships in order to be able to provide their input: you could be talking about a Windows API and someone will bring up EEE through some convoluted path and from there the conversation will go downhill. The best comments are the ones on articles about dolphin psychology or whatever and someone might ask a simple question and a real expert will chime in with something like "I have worked with dolphins for 17 years and also I wrote my doctoral thesis in cetacean-human interactions" and it's just a page of an interesting viewpoint that you just never knew about.

With Apple degrading the developer experience with each release and Microsoft working hard on things like WSL(2) and the new "package manager", I think within a year or two lots of developers will go back to Windows-based machines.

As a security engineer myself, what Apple is doing here is completely fucking insane. I honestly cannot believe that anyone thought it was a good idea.

Has anybody in the tech media picked up on this? Doesn't seem like it from a cursory browse of my favorite sites (HN, do your magic). This seems like something that Apple really ought to be taken to task for. I'm sure the privacy concerns, if not the performance, will rile up the broader non-HN public if only the information reaches them. Perhaps then we can get Apple to move to a less stupid system.

An issue I've been dealing with forever on my 2013 MBP is the machine just pausing input for 2-4 seconds (video and audio don't hitch, just keyboard/mouse input).

I recently took the trouble to completely wipe the disk and reinstall macOS Mojave, and it's still happening, so it's not due to cruft installed over time in OSX. I dunno. I'll deal with it until it gives up the ghost and then probably move to a Windows machine, given the work they're putting into WSL2.
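
One thing that can help narrow this kind of thing down (just a sketch; the predicate is an arbitrary example, not a known fix) is to skim the unified log right after a freeze:

  # show the last few minutes of log traffic around the hang
  log show --last 10m --info --predicate 'process == "WindowServer" OR eventMessage CONTAINS[c] "hang"'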


High quality laptops shipping with Linux have been available for some time now. I know of a couple of companies that are providing an option for employees to switch.

This, coupled with the horrible Docker 100% CPU usage bug (https://github.com/docker/for-mac/issues/3499), might be the top reason why I hate WFH right now. My Linux desktop in the office was so much faster at everything (granted, it's desktop vs. laptop, but still, it's a laggy mess developing on OSX now).

It gets even worse. I was doing some web dev in the last couple of months and noticed that my "localhost" was ridiculously slow. At first I thought it was NPM/Gulp, but then I noticed that it behaved erratically: sometimes it was slow and sometimes it worked.

The problem was: Parental Controls. Apparently every request was checked, and that slowed the whole thing down. Needless to say, at least a couple of days were wasted on this.
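
A quick way to see this kind of per-request overhead (a rough sketch; port 3000 is just an example dev server) is to let curl print its timing breakdown:

  curl -s -o /dev/null \
    -w 'dns: %{time_namelookup}s  connect: %{time_connect}s  total: %{time_total}s\n' \
    http://localhost:3000/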


Just switch to Windows and WSL. For most cases, it works just great/not noticeably slower.

There's a lot of bullshit on Windows too but nothing near OSX levels of wannabe big brother shit.

Can't think of a better long-term short in the market right now than Apple (and sister cult Tesla, but the electric story is at least in its early days, so they may do OK).


Windows has SmartScreen and MAPS (which was previously called "SpyNet") turned on by default, on top of telemetry level that goes to eleven and cannot be turned off in consumer editions.

They're not implemented in the braindead way being discussed here, but they're at the same level big-brother-wise, if not worse.


The only time I've seen similar delays is when my Mac decides it needs to do something on an external disk that needs to spin up. I have a 12 TB external that can take 10 seconds to spin up, so I get a 10-second stall waiting for I/O once in a while.

I do wonder if the author has something similar going on, either with a directly attached disk or a network share.
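
If spin-up stalls are the culprit, the disk-sleep setting is easy to check and adjust (a hedged sketch; whether you actually want disks spinning 24/7 is another question):

  pmset -g                      # current power settings, including "disksleep"
  sudo pmset -a disksleep 0     # 0 = never spin down disks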


Did the site get hit by the Slashdot effect? Can't access it.

Archive: https://web.archive.org/web/20200522164507/https://sigpipe.m...


Apple has an opportunity here - to fix all these issues in the first release of ARM macOS and disable some more functions that "don't really work well" or are "insecure" - all of a sudden ARM Mac will be so much better there will be many blog posts and videos about it smugly proclaiming how Intel could not keep up!

I intend to stay on Mojave for as long as possible, but I am curious to try out Catalina. I believe it is easy enough to install Catalina on an external SSD. My concern is whether this would be safe enough and if my computer would remain unmodified (e.g. could there be changes to firmware settings or firmware updates?)

Sorry but it's just not happening for me, on macOS 10.15.3, on my late 2016 MBP. (And I've certainly never done anything like disable SIP.)

I run the commands and get:

  Hello
  /tmp/test.sh  0.00s user 0.00s system 8% cpu 0.045 total
  Hello
  /tmp/test.sh  0.00s user 0.00s system 75% cpu 0.005 total

If I'm reading this correctly, the first run takes less than a twentieth of a second, and the second a two-hundredth? I've never experienced anything like "have the entire machine freeze for 1-2 seconds every 10th minute". And I have the slowest internet package I can buy.

The only delay that's ever noticeable is when running a program I've installed for the first time, which yes, usually seems to take a few seconds before often telling me the application couldn't be verified or something, and do I want to run it anyway. Which makes sense if you're running a checksum on a 400 MB application binary. But after that first time, starting an app is always instant.

Can anyone else elucidate what the author is talking about? They're presenting it as a universal, but maybe there's something else going on with their machine? Clearly something's wrong on their end, but possibly it's just some kind of bug. I'd avoid jumping to conclusions that executables taking a second to launch is "by design".

EDIT: switching from zsh to sh gives more granular results:

  Hello
  
  real 0m0.009s
  user 0m0.002s
  sys 0m0.003s
  Hello
  
  real 0m0.005s
  user 0m0.001s
  sys 0m0.003s

I can see the delay when I remove my terminal from the DevTools permission in Security preferences.

So it's real.

However, scripts are NOT notarised, so what is it doing?

EDIT:

So after digging, it turns out the scripts are being "checked" for malware as part of XProtect.

This is interesting: it seems to be hashing scripts and testing to see if it's known malware.

Anyway, easy to disable, but weird stuff.
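
If you want to poke at what has been recorded for a given script (a sketch only; output varies a lot by file type and macOS version), the extended attributes and Gatekeeper's assessor are both visible from the shell:

  xattr -l /tmp/test.sh                     # quarantine/provenance attributes, if any
  spctl --assess --verbose /tmp/test.sh     # Gatekeeper's verdict for the file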


"Modern" OSX, iOS, and Android are so secure and safe they even protect you from using your computer.

10.15.1 and then 10.15.4 both introduced random kernel panics on my iMac. Only way to solve was to reinstall MacOS on top of itself (via Recovery, kept files/apps intact).

Still no idea what or why the panics would happen, or why the reinstall solved it.

Catalina has been a very bumpy road for me so far.


Just wanted to drop this here but WSL & WSL2 makes a compelling case to move to Windows.

Man, I think I was having this issue earlier in the year and thought it was some funkiness with the firewall or the application -- custom Go apps.

Who at Apple thought it was a good idea to hop on the internet when invoking an application, without any warning? This is loony.


I don't think they do the notarization check for shell scripts and programs you build from source. I've been doing large-scale software development on Catalina for quite some time and have observed zero performance degradation compared to previous OS X versions.

I really hope the mess that is Catalina is fixed in the next round, or I might be on Mojave until I can switch to another OS. I've been on macOS for a long time, and I really like it. I'm productive on it. But Catalina... no, I won't touch that.