Nvidia is reportedly in ‘advanced talks’ to buy ARM for more than $32B (www.bloomberg.com)
1105 points by caution 6 days ago | 673 comments

This is quite concerning honestly. I don't mind ARM being acquired, and I don't mind Nvidia acquiring things. But I'm concerned about this combination.

Nvidia is a pretty hostile company to others in the market. They have a track record of vigorously pushing their market dominance and their own way of doing things. They view making custom designs as beneath them. Their custom console GPU designs - in the original Xbox, in the PlayStation 3 - were considered failures because of terrible cooperation from Nvidia [0]. Apple is probably more demanding than other PC builders and has completely fallen out with them. Nvidia has famously failed to cooperate with the Linux community on the standardized graphics stack supported by Intel and AMD and keeps pushing proprietary stuff. There are more examples.

It's hard not to make "hostile" too much of a value judgement. Nvidia has been an extremely successful company partly because of it. It's alright if working well with others is not in their corporate culture. Clearly it's working, and Nvidia, for all their faults, is still innovating.

But this culture won't fly if your core business is developing chip designs for others. It's also a problem if you are the gatekeeper of a CPU instruction set that a metric ton of other infrastructure increasingly depends on. I really, really hope ARM's current business will be allowed to run independently, as ARM knows how to do this and Nvidia has shown time and time again that it does not understand this at all. But I'm pessimistic about that. I'm afraid Nvidia will gut ARM the company, the ARM architectures, and the ARM instruction set in the long run.

[0]: An interesting counterpoint would be the Nintendo Switch running on Nvidia Tegra hardware, but all the evidence points to that chip being a 100% vanilla Nvidia Tegra X1 that Nvidia was already selling themselves (to the point that its bootloader could be unlocked like a standard Tegra, leading to the Switch Fusée Gelée exploit).

You are not wrong, but the facts you have cherry-picked fail to portray the whole picture.

For example, you paint it as if Nvidia is the only company Apple has had problems with, yet Apple has parted ways with Intel, IBM (PowerPC), and many other companies in the past.

The claim that Nintendo is the only company Nvidia successfully collaborates with is just wrong:

- Nvidia manufactures GPU chips and collaborates with dozens of OEMs to ship graphics cards

- Nvidia collaborates with IBM, which ships POWER8, 9, and 10 processors, all with Nvidia technology

- Nvidia collaborates with OS vendors like Microsoft very successfully

- Nvidia collaborated with Mellanox successfully and acquired it

- Nvidia collaborates with ARM today...

The claim that Nvidia is bad at open source because it does not open source its Linux driver is also quite wrong: Nvidia contributes many hours of paid developer time to open source, has many open source products, donates money to many open source organizations, and contributes paid manpower to many open source organizations as well...

I mean, this is not nvidia specific.

You can take any big company, e.g. Apple, and paint a horrible picture by cherry-picking things (no Vulkan support on macOS forcing everyone to use Metal, they don't open source their C++ toolchain, etc.), yet Apple does many good things too (open sourced parts of their toolchain like LLVM, open sourced Swift, etc.).

I mean, you even try to paint this as if Nvidia is the only company that Apple has parted ways with, yet Apple has a long track record of parting ways with other companies (IBM PowerPC processors, Intel, ...). I'm pretty sure that the moment Apple is able to produce a competitive GFX card, they will part ways with AMD as well.

> The claim that nvidia is bad at open source because it does not open source its Linux driver is also quite wrong [...]

Hey! Wait a second, there. Nvidia isn't bad because it has a proprietary Linux driver. Nvidia is bad because it actively undermines open source.

Quoting Linus Torvalds (2012) [0]:

> I'm also happy to very publicly point out that Nvidia has been one of the worst trouble spots we've had with hardware manufacturers, and that is really sad because then Nvidia tries to sell chips - a lot of chips - into the Android market. Nvidia has been the single worst company we've ever dealt with.

> [Lifts middle finger] So Nvidia, fuck you.

Nvidia managed to push some PR blurbs about how it was improving the open-source driver in 2014, but six years later, Nouveau is still crap compared to their proprietary driver [1].

Drew DeVault, on Nvidia support in Sway [2]:

> Nvidia, on the other hand, have been fucking assholes and have treated Linux like utter shit for our entire relationship. About a year ago they announced “Wayland support” for their proprietary driver. This included KMS and DRM support (years late, I might add), but not GBM support. They shipped something called EGLStreams instead, a concept that had been discussed and shot down by the Linux graphics development community before. They did this because it makes it easier for them to keep their driver proprietary without having to work with Linux developers on it. Without GBM, Nvidia does not support Wayland, and they were real pricks for making some announcement like they actually did.

[0]: https://www.youtube.com/watch?v=iYWzMvlj2RQ

[1]: https://www.phoronix.com/scan.php?page=article&item=nvidia-n...

[2]: https://drewdevault.com/2017/10/26/Fuck-you-nvidia.html

In recent years, Linux computers have become a major revenue source for Nvidia thanks to deep learning. However, it's not desktop users driving this but servers, due to Nvidia's proprietary CUDA API. If they open sourced it or rebuilt it on top of Mesa, it would make it easier for AMD to implement CUDA and gain access to the deep learning ecosystem that's currently locked into CUDA. Nvidia's sales would take a huge drop. So I think it's even more likely that their drivers remain proprietary.

I don't have so much of a problem with CUDA staying closed, but rather Nvidia sabotaging Nouveau through signed firmware which they don't release (and obfuscate in their blob). Nouveau would probably be decent by now (not as fast or feature-complete, but usable for real workloads on newer cards) if it weren't for the fact that Nvidia has added features whose direct effect is making a competitive open source driver impossible.

Maybe something will change on this soon. There was speculation about this: https://www.phoronix.com/scan.php?page=news_item&px=NVIDIA-O.... But I'm not holding my breath, and it would be nice if the solution wasn't "wait and hope until Nvidia releases the software necessary to control their GPUs".

> I don't have so much of a problem with CUDA staying closed, but rather Nvidia sabotaging Nouveau through signed firmware which they don't release (and obfuscate in their blob)

Do you have more info on this? There is a big difference between not supporting open source and actively sabotaging it. What are they doing, exactly?

Call me back when ROCm supports their flagship graphics card.

The 5000 series has been out for an entire year without ROCm support at this point.

ROCm exists but many frameworks don't support it.


Yes, there is only one good thing they can do - go bankrupt. That would be the best news for open source.

Should open-source perhaps drop NVidia, instead of the other way around?

This divestiture happened long ago. If you're running Linux not on a server, then Intel and AMD are the only players making chips that work right out of the box.

Even there, though, only Intel offers a buttery-smooth experience. Ryzen for laptops is half-baked (terrible USB-C docking performance, occasional crashing on Windows and Linux with the 2xxxU-series CPU/GPU chips) and AMD GPUs still require manual intervention to load proprietary firmware.

AMD does make some performant mobile GPUs though, they work well in Debian!

Which is funny, because AMD drivers on Linux have been nothing short of trouble (both open source and proprietary) and Nvidia's blob just works.

Sadly, I have to agree. Ryzen 3400G here; getting hardware transcoding on the iGPU is something I still haven't sorted out. There have been several recent issues in the kernel, the AGESA firmware (I suspect there might be newer versions with potential fixes that my mobo manufacturer hasn't released yet; this is Patch B), and the drivers. I've had several rounds of hunting down and compiling various versions of driver packages, modules, and kernels from source, and trying third-party PPAs, to no avail. The amdgpu/amdgpu-pro mess adds another layer of confusion.

I am not sure if I am missing some update, need to set some undocumented kernel flag and/or BIOS setting, whether it's a software issue, or whether I just made a mistake somewhere. Debian 10/11.

Meanwhile, as much as I wanted to get away from Intel, their drivers have never posed any issue at all.

I believe AGESA Patch B broke something for the APUs. You should try either upgrading or downgrading your BIOS; what worked for me was downgrading to AGESA ABB. Both Windows and Linux have stopped crashing now, although I still get the occasional lockup when browsing with Firefox on Linux. I found the culprit after stumbling into this thread: https://old.reddit.com/r/AMDHelp/comments/gj9kpz/bsod_new_pc...

[citation needed] My experience directly conflicts with this and IIUC most GNU/Linux users have exactly the opposite impression. Maybe you're thinking of some past situation?

Take any AMD 5000 series card. I had a top of the line 5700.

Still no driver for compute one year later. I'm so happy I decided to return it and switch to Intel instead of waiting for AMD, or some random Joe in their free time, to add support for it to their open source driver.

So yeah. I'd take a working proprietary driver over no driver any day.

Here, my card on the travel netbook that I use,


The open source driver is kind of OK if the only thing we expect from it is a working X session.

Now if one wants to do some complex OpenGL stuff, then it might work, or not.

No, I am thinking of the situation where the open source driver doesn't have OpenCL support and the AMD driver (fglrx) doesn't compile or requires dependencies so old that it was dropped by the packagers for Arch Linux - all this for a few years until they came out with something that actually works - when I've never ever had an issue with Nvidia. Also, AMD never figured out how to fix screen tearing, or to do so in a sane way that doesn't involve trial-and-error editing of xorg.conf.

Even on Windows, AMD drivers are the most unstable, bugged software that's ever been shipped. It's been a long-standing joke that AMD "has no drivers".

Ryzen 3700X and Radeon 5700 XT here. Not a single problem.

I have recently used Radeon 550, 560, 570, and 5500 cards on an AMD 5050e (yes, that old!), Ryzen 1600 (non-AF), 3100, and 3600, and all have worked fine on Ubuntu 16.04, 18.04, and 20.04. In fact, on average I have found the various hardware configurations to be about 5% faster on Linux than on Windows.

It's anecdotal of course, but my RX560 has been absolutely flawless on both Ubuntu and openSUSE, literally out of the box support on a standard install.

You don't even have to go to open source. You can see this hostile behaviour even with their top-paying clients!

Microsoft's own previous-gen Xbox emulator on the next-gen Xbox (I think it was the original Xbox emulated on the 360, but I might be wrong) was impacted by the team having to reverse-engineer the GPU, because Nvidia refused to let the emulator team have access to the documentation provided to the original team.

Stumbling around Google didn't find me much more info on this, do you have any citations or keywords I could follow up on?

> Quoting Linus Torvalds (2012) [0]:

Is this an ad hominem? Linus does not mention a single thing there that they are actually doing wrong.

> Drew DeVault, on Nvidia support in Sway [2]:

Nvidia has added Wayland support to both KDE and GNOME. Drew just does not want to support the Nvidia way in wlroots, which is a super niche compositor toolkit whose "major" user is Sway, another super niche WM.

Drew is angry for two reasons. First, Sway users complain to them that Sway does not work with Nvidia hardware, which, as a user of a WM, is a rightful thing to complain about. Second, Drew does not want to support the Nvidia way, and is angry at Nvidia because Nvidia does not support the way that wlroots has chosen.

It is 100% OK for Drew to say that they don't want to maintain two code paths, and that wlroots and Sway do not support Nvidia. It is also 100% OK for Nvidia to consider wlroots too niche to be worth the effort.

What's IMO not OK is for Drew to feel entitled to Nvidia supporting wlroots. Nvidia does not owe wlroots anything.


IMO, when it comes to drivers and open source, a lot of the anger and conflict seems to stem from a sentiment of entitlement.

I read online comments _every day_ from people who have bought some hardware that's advertised as "does not support Linux" (or macOS, or whatever) being angry at the hardware manufacturer (why doesn't your hardware support the platform it says it does not support? I'm entitled to support!!!), at the dozens of volunteers who reverse engineer and develop open source drivers for free (why doesn't the open source driver that you develop in your free time work correctly? I'm entitled to you working for free for me so that I can watch Netflix!), etc. etc. etc.

The truth of the matter is that for people using Nvidia hardware on Linux for machine learning, CAD, rendering, visualization, games, etc., their hardware works just fine if you use the only driver that they support on the platforms they say they support.

The only complaints I hear are from people buying Nvidia to do something they know is not supported and then lashing out at everybody else out of entitlement.

What exactly are you invested in in this discussion? We were originally discussing how Nvidia's business practices don't match ARM's business practices. But you seem to just want to take on people's personal views of Nvidia now.

You're now somehow arguing with people that they should stop complaining about Nvidia's business practices. I would agree with that in the sense that Nvidia can do whatever they want: nobody is obliged to buy Nvidia, and Nvidia is not obliged to cater to everyone's needs. It's a free enough market. But even if you don't agree with some or most of the complaints, surely you must agree that Nvidia's track record of pissing off both other companies and people is problematic when they take control of a company with an ecosystem-driven business model like ARM's?

> I would agree with that in the sense that Nvidia can do whatever they want: nobody is obliged to buy Nvidia, and Nvidia is not obliged to cater to everyone's needs. It's a free enough market.

I'd agree with you that this is OP's argument; however, its main flaw is in explicitly omitting the fact that Nvidia is not the only party that's "free" to do things.

We're not obliged to buy their cards and we aren't obliged to stay silent regarding its treatment of the open-source community and why we think it would be bad for them to acquire ARM.

I am always amazed at the amount of pro-corporate spin from (presumably) regular people who are little more than occasional customers.

> We were originally discussing Nvidia's business practices don't match ARM's business practices.

We still are. I asked which specific business practices these are, and was only pointed to ad hominems, entitlement, and one-sided arguments.

Feel free to continue discussing that in the other parent thread. I'm interested in multiple views on this.

> You're now somehow arguing with people that they should stop complaining about Nvidia's business practices

No. I couldn't care less about Nvidia, but when somebody acts like an entitled choosing beggar, I point that out. And there is a lot of entitlement in the arguments people are making about why Nvidia is bad at working with others.

Nvidia has some of the best Linux drivers there are. The driver is not open source and is distributed as a binary blob. Nvidia is very clear that this is the only driver they support on Linux, and if you are not fine with that, they are fine with you not buying their products. This driver supports all of their products very well (as opposed to AMD's, for example), its development is done by people paid full time to do it (as opposed to most of their competitors, who also have people helping on their drivers in their free time - this is not necessarily bad, but it is what it is), and some of their developments are contributed back to open source, for free.

People are angry about this. Why? The only thing that comes to mind is entitlement. Somebody wants to use an Nvidia card on Linux without using the proprietary driver. They know this is not supported. Yet they buy the card anyway, and then they complain. They do not only complain about Nvidia. They also complain about, e.g., Nouveau being bad, the Linux kernel being bad, and many other things. As if Nvidia, or the people working on Nouveau or the Linux kernel for free in their free time, owes them anything.

I respect people not wanting to use closed source software. Don't use Windows, don't use macOS, use alternatives. Want to use Linux? Don't use Nvidia if you don't want to.

> their hardware works just fine if you use the only driver that they support on the platforms they say they support.

Except that they stop supporting older hardware at some point. That, together with occasional crashes, taught me not to buy Nvidia hardware again.

> Nvidia has added wayland support to both KDE and GNOME.

NVIDIA insisted on pushing its own EGL streams even as the wider community was moving in a different direction.

They suffer from a major NIH syndrome and do not know how to work with others at all.

Speaking as a Linux graphics developer, I can confirm that Nvidia is indeed a pretty terrible actor. There could be a viable open source driver for most of their GPUs tomorrow if they changed some licensing, and Nvidia knows this.

An Nvidia purchase of ARM would also create a lot of conflicts of interest.

And selling for only 32 billion seems really low for a company that significant.

$18B less than TikTok (rumored valuation: $50B). That's hard for me to grasp.

How many ARM chips will the average consumer buy in their life? How much profit will ARM make on each of those chips? How many Tiktok videos will the average consumer watch in their life? How much profit will Tiktok make on each of those views?
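Those rhetorical questions can be turned into a back-of-envelope sketch. Every number below is an assumption invented for illustration (neither ARM's royalties nor TikTok's per-view profit are public in this form); the point is only how different the per-unit economics are.

```python
# Back-of-envelope lifetime-value comparison.
# All figures are illustrative assumptions, not sourced data.
chips_per_consumer = 100       # ARM cores bought over a lifetime: phones, cars, appliances...
royalty_per_chip = 0.10        # assumed ARM royalty, on the order of cents per chip
views_per_consumer = 100_000   # assumed short videos watched over a lifetime
profit_per_view = 0.0001       # assumed ad profit per view, a hundredth of a cent

arm_value = chips_per_consumer * royalty_per_chip    # revenue ARM sees per consumer
tiktok_value = views_per_consumer * profit_per_view  # profit TikTok sees per consumer

print(f"ARM:    ${arm_value:.2f} per consumer")
print(f"TikTok: ${tiktok_value:.2f} per consumer")
```

Under these made-up numbers both land around $10 per consumer, which is one way to see how an attention business can plausibly be valued in the same ballpark as a silicon licensing business.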

It is quite simple really. You seem to be under the mistaken impression that valuations in tech are based in any way on logic. They're not. They're completely hype-driven.

That's how this year's Myspace, which people will have trouble remembering 5 years from now, can get a higher "valuation" than a large semiconductor company with a 30-year track record.

Investors and the financial sector are proving time and time again that they're unable to learn from their mistakes, through no "fault" of their own, because apparently it's human nature to just be horribly bad at this.

It amazes me that people think investors somehow learned anything from the dot-com bubble, given they've been repeating all of their other major mistakes every odd year or so.

... Meanwhile, Microsoft is busy buying TikTok (eye roll)

I don’t want no ads on me CPUs

I am starting to see a lot of these similar comments around the internet.

I think it is because people are now so used to Apple's and Amazon's trillion-dollar valuations, with Apple closing in on 2 trillion, that $32B seems low or (relatively) cheap.

The reality is that ARM was quite overvalued when it was purchased by SoftBank.

Another context to add to this.

This might also be part of SoftBank's fire sale; SoftBank bought ARM for $32B just a few years ago (2016).

Company valuation usually involves more tangible things, like revenue and profits, or at least expected revenues. "Importance" can vanish really fast.

We don't really live in that world anymore – what is value investing when the Federal Reserve sets the price? More and more, "intangibles" like brand value are becoming more important on balance sheets than investors want to admit.

> There could be a viable Open Source driver for most of their GPUs tomorrow if they changed some licensing

I always just assumed that interfacing proprietary IP with the GPL is a tricky legal business. One slip, and all your IP becomes open source.

Do you have a source explaining what licensing changes they would have to make and what impact that would have for Linux and Nvidia? I'd like to read that.

Nvidia has a very different model for what they're trying to get out of their drivers. They spent something like 5x more than AMD on driver developers, then would send engineers to work with AAA game studios to "optimize their games" for Nvidia. Good so far. But then these optimizations went so far as fixing (in the driver) broken game code. Apparently it was so bad that games were being shipped without issuing BeginScene/EndScene on DirectX.

Hence AMD's push for Mantle and then Vulkan. The console-like API is the carrot to get people onto an API with a validation layer, so that third parties can easily say "wow, what a broken game" rather than "wow, this new game runs on Nvidia and not AMD; what broken AMD drivers".

Nvidia open sourcing their drivers would destroy a large chunk of their competitive advantage, and the drivers are so intertwined with the IP of all the games they have hacks for that I'd be surprised if they would ever want to open source them, or even could if they wanted to.

More docs would be nice though.

The problem was never opening the existing driver.

It was:

- all kinds of problems w.r.t. the integration of the driver into the Linux ecosystem, including the proprietary driver having quality issues for anything but headless CUDA

- Nvidia getting in the way of the implementation of an open source alternative to their driver

But most of these games don't even exist on Linux, so they wouldn't have to fix all that stuff. As a Linux user, I'd gladly do without that bloat anyway (it also explains why a "driver" has to be 500MB, lol).

> But then these optimizations went so far as fixing (in the driver) broken game code.

AMD does the exact same thing and always has. When you see shaders come down the wire, you can replace them with better-optimized or more performant versions. It's almost always fixing apparent driver "bugs" caused by the game, rather than actual game bugs. And the distinction is important.

I do agree with you, but that is something everyone has to do to remain competitive in games. Developers will only optimize for one platform (because they're crunching), and 9 times out of 10 that's an RTX 2080 Ti.
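As a concrete illustration of what "replacing shaders as they come down the wire" means, here is a toy sketch (not any real driver's code; the shader strings and the lookup scheme are invented): the driver fingerprints the application's shader source and silently substitutes a hand-tuned version when it recognizes one from a shipped game.

```python
import hashlib

def fingerprint(source: str) -> str:
    """Stable fingerprint of a shader's source text."""
    return hashlib.sha256(source.encode("utf-8")).hexdigest()

# A known-bad shader paired with a hand-tuned replacement
# (both strings invented for this example).
BROKEN_SHADER = "color = texture(tex, uv) * 1.0;"
OPTIMIZED_SHADER = "color = texture(tex, uv);"

REPLACEMENTS = {fingerprint(BROKEN_SHADER): OPTIMIZED_SHADER}

def compile_shader(source: str) -> str:
    """Return what the driver actually compiles: a known replacement
    if the app's shader is recognized, otherwise the original."""
    return REPLACEMENTS.get(fingerprint(source), source)
```

This is also part of why open sourcing such a driver is awkward: the replacement table itself encodes per-game knowledge that vendors treat as competitive IP.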

> I always just assumed that interfacing proprietary IP with the GPL is a tricky legal business. One slip, and all your IP becomes open source.

This persistent bit of FUD really needs to die. Yes, you have to be careful, but at this point it's ridiculously well known what is obviously correct and what is obviously incorrect when dealing with the GPL. I'm sure there are some grey areas that haven't been worked out, but avoiding those is fairly simple.

Nvidia is already in a weird grey area, releasing binary blobs with an "open source" shim that adapts its interfaces to the GPL kernel. As much as the Linux kernel's pragmatic approach toward licensing helps make it easier on some hardware manufacturers, sometimes I wish they'd take a hard line and refuse to carve out exceptions for binary drivers, inasmuch as those can sometimes/always be considered derived works.

I don't know if the statement about interfacing with the GPL is true or not, but your statement first calls it FUD, meaning you believe it is false, and then you say all the driver code should be considered derivative work and therefore subject to the GPL, meaning the original statement you called FUD is actually true. It seems to me that a lot of GPL advocates are actually responsible for a good part of the GPL FUD.

> This persistent bit of FUD really needs to die. [...] I'm sure there are some grey areas that haven't been worked out

Way to contradict yourself.

> but avoiding those is fairly simple.

[Citation needed].

> As much as the Linux kernel's pragmatic approach toward licensing helps make it easier on some hardware manufacturers, sometimes I wish they'd take a hard line and refuse to carve out exceptions for binary drivers, inasmuch as those can sometimes/always be considered derived works.

Maybe this is what needs to happen to force companies to change their mindset, but where I work, lawyers tell us to (1) never contribute to any GPL'ed code, (2) never distribute GPL'ed code to anybody (e.g. not even in a Docker container), etc.

Their argument is: a single slip could require us to publish all of our code and make all of our IP open, and to make sure this doesn't happen, an army of lawyers, software engineers, and managers would need to review every single code contribution that has something to do with the GPL. So the risks are very high, the cost of doing this right is very high as well, and the reward is... what, exactly? In practice this means we can't touch GPL'ed code with a ten-foot pole; it is not worth the hassle. If I were to ask my manager, they would tell me it is not worth it. If they ask their manager, they will hear the same. Etc.

BSD code? No problem - we contribute to hundreds of BSD-, MIT-, and Apache-licensed open source projects. Management tells us to just focus on those.

Nouveau is a highly capable open source driver for NVIDIA GPUs based on reverse-engineering.

For some older card generations (e.g. the GTX 600 series) it was competitive with the official driver. But in every hardware generation since then, the GPU has required signed firmware in order to run at any decent clock speed.

The necessary signed firmware is present inside the proprietary driver, but nouveau can't load it because it's against the ToS to redistribute it.

Most GPU features are available, but they run at 0.1x speed or slower for this single reason. Nvidia could absolutely fix this "tomorrow" if they were motivated.
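Conceptually, the lockout works like this toy sketch (not Nvidia's actual scheme: real GPUs use asymmetric signatures verified by the card's boot ROM, while HMAC is used here only to keep the example short and runnable). The hardware refuses any reclocking firmware that does not carry a valid vendor signature, so a driver that cannot redistribute the signed blob is stuck at boot clocks.

```python
import hashlib
import hmac

VENDOR_KEY = b"vendor-private-key"  # stand-in: only the vendor can produce signatures

def sign(firmware: bytes) -> bytes:
    """Vendor-side signing of a firmware image."""
    return hmac.new(VENDOR_KEY, firmware, hashlib.sha256).digest()

def gpu_accepts(firmware: bytes, signature: bytes) -> bool:
    """Hardware-side check: firmware only runs if the signature verifies."""
    return hmac.compare_digest(sign(firmware), signature)

official_fw = b"vendor reclocking microcode"
official_sig = sign(official_fw)

# The proprietary driver ships (firmware, signature) pairs, so this passes:
accepted = gpu_accepts(official_fw, official_sig)

# An open driver can write its own firmware but cannot sign it:
rejected = gpu_accepts(b"open-source reclocking attempt", b"\x00" * 32)
```

The asymmetry is the whole point: verification is cheap and built into the silicon, while signing requires a key that only the vendor holds, so no amount of reverse engineering gets an open driver past the check.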


Solution: download the Nvidia blob, isolate the binary firmware, extract, load. Fun? Absolutely not. Desperate times and unconscionable acts of legalism call for equally extreme levels of overly contrived legalism circumvention.

At this point I've gotten so bloody tired of the games people play with IP that I wouldn't even mind being part of the collateral damage of our industry being burned to the ground through the complete dissolution of any software delivery or related contract. If you sell me hardware and play shenanigans to keep me from being able to use it to its fullest capability, you're violating the intent of first-sale rights.

To be honest, I think every graphics card should have to be sold bundled with enough information for a layperson (or, to throw out a bone, a reasonably adept engineer) to write their own hardware driver/firmware. Without that requirement, this industry will never change.

Wow, that does suck. Imagine them doing the same thing with ARM.

AMD and Intel have been providing open source drivers for their GPUs for a long time, and they have yet to run into any legal problems with it.

AMD didn't directly open source their driver because of legal issues.

The point is that it's not about open sourcing your proprietary driver, but about not getting in the way of an alternative open source driver - maybe even giving it a bit of a hand, even if just unofficially.

I think if I were Nvidia, I might go in the direction of having a fully or at least partially open source driver for graphics, and a not-so-open-source driver for headless CUDA (potentially running alongside an Intel integrated graphics based head/GUI).

Though I don't know what they plan w.r.t. ARM desktops/servers, so this might conflict with their strategies there.

Perhaps Nvidia's ASICs and/or firmware contains more legally dubious components. ;-)

The GPL can't cause your code to automatically become GPL licensed. It can only prevent you from distributing the combination of your incompatibly licensed code and others' GPL code.

>I always just assumed that interfacing proprietary IP with the GPL is a tricky legal business. One slip, and all your IP becomes open source.

The only tricky things involve blatantly betraying the spirit of the agreement while trying to pretend to follow the letter and hoping a judge supports your interesting reading of the law.

Even so, there is no provision in law whereby someone can sue you and magically come into possession of your IP.

It would literally require magical thinking.

Out of interest, how much work does ARM put into the Linux kernel, OSS compilers, and OSS libraries?

Quite a lot.

Take a look at a recent snapshot of changesets and lines of code to the Linux kernel contributed by various employers: https://lwn.net/Articles/816162/

Arm themselves are listed at 1.8% by changesets; but Linaro is a software development shop funded by Arm and other Arm licensees to work on Arm support in various free software projects, and they contributed 4% of changesets and 8.8% by lines of code. And Code Aurora Forum is an effort to help various hardware vendors, many of whom are Arm vendors, get drivers upstreamed; they contributed 1.8% by changesets and 10.1% by lines changed. A number of the other top companies listed are also Arm licensees, though their support may be for random drivers or other CPU architectures as well.

However, Arm and companies in the Arm ecosystem do make up a fairly large amount of the code contributed to Linux, even if much of it is just drivers for random hardware.

And Arm and Linaro developers also contribute to GCC, LLVM, Rust, and more.
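Summing just the three Arm-affiliated entries quoted above gives a rough sense of scale (percentages taken from the LWN snapshot linked in the parent; Arm's own share by lines is not quoted there, so it is left out):

```python
# Percentages from the kernel-contribution snapshot quoted above.
changeset_share = {"Arm": 1.8, "Linaro": 4.0, "Code Aurora Forum": 1.8}
lines_share = {"Linaro": 8.8, "Code Aurora Forum": 10.1}  # Arm's own figure not quoted

total_changesets = sum(changeset_share.values())
total_lines = sum(lines_share.values())
print(f"Arm-affiliated: ~{total_changesets:.1f}% of changesets, "
      f"over {total_lines:.1f}% of lines changed")
```

So on the order of 7-8% of changesets and nearly a fifth of lines changed come from Arm-affiliated contributors, before counting the other Arm licensees in the list.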

I think the main question is whether they will "just" own ARM or whether they will absorb it.

There’s no defending Nvidia’s approach to Linux and OSS. It is plain awful no matter how you try to twist reality. And it is actively damaging, because it forces extra work on OSS maintainers and frustrates users. You should not be required to install a binary blob in 2020 to get basic functionality (like fan control) to work. Optimus and Wayland support are painfully, purposely bad.

Also the Nvidia Wayland support is horrible.

> You should not be required to install a binary blob in 2020 to get basic functionality (like fan control) to work.

You are not required to do that. Use nouveau, buy an AMD or intel GFX card.

You are not entitled to it either. The people developing Nouveau in their free time don't owe you anything, and Nvidia does not owe you an open source driver either.

I don't really understand the entitlement here. None of the drivers on my windows and macosx machines are open source. They are all binary blobs.

I don't use Nvidia GFX cards on Linux anymore (Intel suffices for my needs), but when I did, I was happy to have a working driver at all. That was a huge upgrade from my previous ATI card, which had no driver at all. Hell, I even tried using AMD's ROCm recently on Linux with a 5700 card, and it wasn't supported at all... I would have been very happy to hear that AMD had a binary driver that made it work, but unfortunately it doesn't.

And that was very disappointing, because I thought AMD had good open source driver support. At least when buying Nvidia for Linux, you know beforehand that you are going to have to use a proprietary driver, and if that makes you uncomfortable, you can just buy something else.

> You are not required to do that. Use nouveau, buy an AMD or intel GFX card.

Has internet discussion really fallen this low that all needs to be spelled out and no context can ever be implied?

We're in a thread about NVidia, so of course OP's talking about NVidia hardware here. Yeah, they can get AMD, but that does not change their (valid) criticisms of NVidia one bit.

> I don't really understand the entitlement here. None of the drivers on my windows and macosx machines are open source. They are all binary blobs.


Windows and macOS have different standards for drivers than many Linux users do. Is it really that surprising that users who went with an open-source operating system find open-source drivers desirable too?

I find it really weird to assume that because something is happening somewhere, it's some kind of an "objective fact of reality" that has to be true for everyone, everywhere.

When you shop for things, are you looking for certain features in a product? Would you perhaps suggest in a review that you'd be happier if a product had a certain feature or that you'd be more likely to recommend it?

It's the same thing. NVidia is not some lonely developer on GitHub hacking during their lunch break on free software.

Do you also assume that the kind of music you find interesting is objectively interesting for everyone?

This has nothing to do with entitlement. It's listing reasons for why someone thinks NVidia buying ARM is a bad idea.

> Is it really that surprising that users who went with an open-source operating system find open-source drivers desirable too? When you shop for things, are you looking for certain features in a product? Would you perhaps suggest in a review that you'd be happier if a product had a certain feature or that you'd be more likely to recommend it?

It is to me. When I buy a car, I do not leave a 1 star review stating "This car is not a motorcycle; extremely disappointed.".

That's exactly how these comments being made sound to me. Nvidia is very clear that they only support their proprietary driver, and they deliver on that.

I've had many GFX cards from all vendors over the years, and I've had to send one back because the vendor wasn't honest about things like that.

Do I wish nvidia had good open source drivers? Sure. Do I blame nvidia for these not existing? Not really. That would be like blaming Microsoft or Apple for not making all their software open source.

I do however blame vendors that do advertise good open source driver support that ends up being crap.

What does any of this have to do with nvidia buying or not buying ARM? Probably nothing.

What nvidia does with their GFX driver can be as different from what ARM does, as what Microsoft does with Windows and Github.

I would probably agree with you if everything was modular and commodity and easily swappable. If I decide I won't buy hardware with nvidia in it, that chops out a chunk of the possible laptops I can have. It means I can't repurpose older hardware; sure, hindsight may be 20/20, but perhaps I didn't have the foresight 7 years ago to realize I'd want to run Linux on something today (yeah, older hardware is better supported, but it's by no means universal). It means that I can't run some things that require CUDA and don't support something like OpenCL.

And you can argue that that still is all fine, and that if you're making a choice to run Linux, then you have to accept trade offs. And I'm sympathetic to that argument.

But you're also trying to say that we're not allowed to be angry at a company that's been hostile to our interests. And that's not a fair thing to require of us. If nvidia perhaps simply didn't care about supporting Linux at all, and just said, with equanimity, "sorry, we're not interested; please use one of our competitors or rely on a possibly-unreliable community-supported, reverse-engineered solution", then maybe it would be sorta ok. But they don't do that. They foist binary blobs on us, provide poor support, promise big things, never deliver, and actively try to force their programming model on the community as a whole, or require that the community do twice the amount of work to support their hardware. That's an abusive relationship.

Open source graphics stack developers have tried their hardest to fit nvidia into the game not because they care about nvidia, but because they care about their users, who may have nvidia hardware for a vast variety of reasons not entirely under their control, and developers want their stuff to work for their users. Open source developers have been treated so poorly by nvidia that they're finally starting to take the extreme step of deciding not to support people with nvidia hardware. I don't think you appreciate what a big deal that is, to be so fed up that you make a conscious choice to leave a double-digit percentage of your users and potential users out in the cold.

> None of the drivers on my windows and macosx machines are open source. They are all binary blobs.

Not sure how that's relevant. Windows and macOS are proprietary platforms. Linux is not, and should not be required to conform to the practices and norms of other platforms.

> But you're also trying to say that we're not allowed to be angry at a company that's been hostile to our interests

This company in no way shape or form is obligated to cater to your interests. In this case it would likely be counter to their interests.

I don't know why this is downvoted. If anything, Nvidia has been providing quality drivers for Linux for decades, and it was the only way to have a decent GPU supported by Linux in the 2000s, as ATI/AMD cards were awful in Linux.

> For example, you paint it as if Nvidia is the only company Apple has had problems with, yet Apple has parted ways with Intel, IBM (Power PCs), and many other companies in the past.

But for entirely different reasons. Apple switched from PowerPC to Intel because the PowerPC processors IBM was offering weren't competitive. They switched from Intel for some combination of the same reason (Intel's performance advantage has eroded) and to bring production in-house, not because Intel was quarrelsome to do business with.

Meanwhile Apple refused to do business with nVidia even at a time when they had the unambiguously most performant GPUs.

Apple didn't part ways with Intel and IBM because they were difficult to work with. They parted ways because Intel and IBM fell behind in performance. Nvidia has certainly not, and Apple has paid a price with worse graphics and machine learning support on Macs since their split. It's clearly different.

Correct. IBM didn't care to make a power-efficient processor, and Motorola didn't see the benefit of multimedia extensions in their processors because they needed those processors for their network devices.

Nvidia introduced a set of laptop GPUs that had a high rate of failure. Instead of working with their customers and eating some of the cost of repairing these laptops, they told their customers to deal with it. Apple, being one of those customers, got upset at being left holding the bag of shit and hasn't worked with them since.

Intel and AMD have used their x86/AMD64 patents to block Nvidia from entering the x86 CPU market.

Nvidia purchasing ARM will hurt not the large ARM licensees like Apple and Samsung but the ones that need to use the CPU in a device that does not need any of the Multimedia extensions that NVidia will be pushing.

With Intel it's a bit more complicated than that. I think running Macs on their own processors is a big cost saver for them, and allows more control. And Intel's CPU shortages have hurt their shipping schedules. I don't think this transition is about CPU performance.

But yeah I don't think it's about collaboration either.

> The claim that nvidia is bad at open source because it does not open source its Linux driver is also quite wrong, since NVIDIA contributes many many hours of paid developer time open source, has many open source products, donates money to many open source organizations, contributes with paid manpower to many open source organizations as well...

Out of curiosity, is there any large open source product from NVidia? I can't think of any.

Only for their own hardware (RAPIDS, all the cuda libraries, etc.), which other companies like AMD have just forked and modified to work on their hardware keeping the algorithms intact.

NVIDIA contributes mostly to existing open source projects (LLVM, Linux kernel, Spark, etc.), see https://developer.nvidia.com/open-source

Not agreeing or disagreeing, just want to point out that LLVM was always open source and wasn't developed by Apple. Apple just happened to hire the dev who initially wrote it.

Most LLVM development has, to my knowledge, been funded by Apple. LLVM and Clang have seen the bulk of their work done on Apple payroll.

It is a bit like WebKit. It was based on KHTML which was an open source HTML renderer. But Apple expanded that so greatly on their own payroll that it is hard to call WebKit anything but an Apple product.

This is not true anymore. Apple funded a significant part of LLVM/Clang work in the 2010s, and then again with the aarch64 backend, but nowadays Google and Intel contribute much more to LLVM than Apple.

Yep. I didn't mean that Apple created LLVM, only that Apple contributes _a lot_ to the open source LLVM project.

I think the common theme in your examples is that in these situations other parties bend to Nvidia's demands. Nvidia has no problem with other parties bending to their demands. But when another company or organization requires Nvidia to bend to their demands, things go awry almost without exception.

EDIT - for added detail:

> - nvidia manufactures GPU chips, collaborates with dozens of OEMs to ship graphics cards

Most (all?) of which bend to Nvidia's demands because Nvidia's been extremely successful in getting end users to want their chips, making the Nvidia chip a selling point.

> - nvidia collaborates with IBM which ships Power8,9,10 processors all with nvidia technology

IBM bends to Nvidia's demands so POWER can remain a relevant HPC platform.

> - nvidia collaborates with OS vendors like microsoft very successfully

Microsoft is the only significant OS vendor with which Nvidia collaborates successfully. It's true - but for the longest time Nvidia would have been out of business if they didn't. I will concede this point, but I don't find this is enough to paint a different picture.

> - nvidia collaborated with mellanox successfully and acquired it

Mellanox bent over to Nvidia's demands to such an extent that they were acquired.

> - nvidia collaborates with ARM today...

Collaboration in what sense? My impression is that Nvidia and ARM have a plain passive customer/supplier relationship today.

> The claim that nvidia is bad at open source because it does not open source its Linux driver is also quite wrong, since NVIDIA contributes many many hours of paid developer time open source, has many open source products, donates money to many open source organizations, contributes with paid manpower to many open source organizations as well...

Nvidia is humongously behind their competitors Intel and AMD in open source contribution while spending far more on graphics R&D. They are terrible at open source compared to the "industry standard" of their market, and only partake as far as it serves their short-term needs.

They are perfectly entitled to behave this way, by the way. But Nvidia's open source track record is only more evidence that they don't understand how to work in an open ecosystem, not less.

> You can take any big company, e.g., Apple, and paint a horrible case by cherry picking things (no Vulkan support on MacOSX forcing everyone to use Metal, they don't open source their C++ toolchain, etc.), yet Apple does many good things too (open sourced parts of their toolchain like LLVM, open source swift, etc.).

The "whataboutism" is valid but completely irrelevant here. I would also not appreciate Apple buying ARM.

> For example, you paint it as if Nvidia is the only company Apple has had problems with, yet Apple has parted ways with Intel, IBM (Power PCs), and many other companies in the past.

Apple has parted ways with Intel, IBM, Motorola, Samsung (SoCs) and PowerVR for technology strategy reasons, not relationship reasons. Apple had no reason to part ways with Nvidia for technical reasons (especially considering they went to AMD instead), but did so because of the terrible relationship they built.

> Apple had no reason to part ways with Nvidia for technical reasons (especially considering they went to AMD instead), but did so because of the terrible relationship they built.

I'm typing this on a MacBook with an Nvidia GPU that was created in 2012, many years after the failing laptop GPU debacle. AFAIK, Apple used that GPU until 2015?

I'd wager that Apple has been using AMD for something as mundane as offering better pricing, rather than disagreement 12 years ago. (Again: despite all the lawsuits, Apple is still a major Samsung customer.)

> I'd wager that Apple has been using AMD for something as mundane as offering better pricing, rather than disagreement 12 years ago. (Again: despite all the lawsuits, Apple is still a major Samsung customer.)

This used to be true, as Apple swapped between AMD and Nvidia chips several times in 2000-2015. Then Nvidia and Apple fell out, and Apple has not used Nvidia chips in new designs in 5 years - a timeframe in which Nvidia coincidentally achieved its largest technical advantages over AMD. Apple goes as far as to actively prevent Nvidia's own macOS eGPU driver from working on modern macOS. A simple pricing dispute does not appear to be a good explanation here.

Whatever the reason may be, the fact that Apple used Nvidia GPUs until 2015 at least debunks the endlessly repeated theory that it was because of the broken GPUs of 2008.

What hardware does apple use for all of its machine learning training (e.g. for Siri, etc. ) ?

I never quite got how the Apple story was an indictment of NVIDIA. The first-gen RoHS-compliant solders sucked and would fracture... how exactly is that NVIDIA's fault? In fact, wouldn't Apple have been the ones who chose that particular solder?

It is the same issue that caused the Xbox 360 red-ring-of-death, and caused "baking your graphics card" to become a thing (including AMD cards). It basically affected everyone in the industry at the time, and Apple would not have gotten any different outcome from AMD had they been in the hotseat at the time. They were just throwing a tantrum because they're apple damn it and they can't have failures! Must be the supplier's fault.

That one has always struck me as a "bridezilla" story where Apple thinks they're big enough to push their problems onto their suppliers and NVIDIA said no.

And as far as the Xbox thing... Microsoft was demanding a discount and NVIDIA is free to say no. If they wanted a price break partway through the generation, it probably should have been negotiated in the purchase agreements in the first place. NVIDIA needs to turn a profit too and likely structured their financial expectations of the deal in a particular way based on the deal that was signed.

Those are always the go-to "OMG NVIDIA so terrible!" stories and neither of them really strike me as something where NVIDIA did anything particularly wrong.

> Microsoft is the only significant OS vendor with which Nvidia collaborates successfully

Canonical, which ships nvidia's proprietary driver with Ubuntu, is another quite major OS vendor that collaborates with nvidia successfully. Recently, Ubuntu's wayland-based desktop environment was also the first to work with nvidia's driver (the results of this work are open source).

Apple is parting with Intel over quality-control issues. That is the same reason Apple parted with IBM and Motorola; that, and Intel chips were faster.

You will find ARM Macs are cheaper than Intel Macs and, even if not as fast, consume less power due to mobile technology.

Microsoft had the Surface tablet with ARM chips and an ARM version of Windows, which didn't sell as well; but then Apple is not Microsoft and is unlikely to make the same mistakes.

I'd much prefer it if we focus on NVidia, since it's the subject matter, rather than go into a pointless discussion.

Well if Nvidia acquires ARM, just maybe it will push apple to riscV

Not really. There is nothing any future owner of ARM can do to cut Apple out, which is why they are not interested in purchasing ARM themselves. They co-developed the ARM6 with Acorn and VLSI, and their license allows them to build on the ARM core. The most Nvidia can do is try to outperform Apple, but that will come at the loss of customers that don't need desktop features like GPUs. https://en.wikipedia.org/wiki/ARM_architecture#Architectural...

Whataboutism. All you did was show that Apple is also bad.

No. What I did show is that any FAANG with tens of thousands of employees has thousands of projects in flight, some of which are good, some of which are bad.

This contradicts the claim from the OP that suggests that all the projects from one of these companies are all bad.

I'm not normally one to defend Nvidia (particularly not from my Linux laptop), but at least the Xbox and PS3 issues never really seemed to be their fault from what I've heard on the grapevine.

Xbox: The Xbox's security was broken, and Nvidia apparently took the high road, claimed a loss on all existing chips in the supply chain (claiming a loss for the quarter out of nowhere and tanking their stock for a bit) and allowed Microsoft to ship a new initial boot ROM as quickly as possible for a minimum of cost to Microsoft. When that new mask ROM was cracked within a week of release, Microsoft went back to Nvidia looking for the same deal, and Nvidia apparently told them to pound sand and in fact said that they would be doing no additional work on these chips, not even die shrinks (hence why there was no OG Xbox Slim). Admittedly there are other reasons why Microsoft felt like Nvidia still owed them, but it was a bit of a toxic relationship for everyone involved.

PS3: they were never supposed to be the GPU until the eleventh hour. The Cell was originally supposed to crank up to 5GHz (one of the first casualties of the end of Dennard scaling, and of how it affected Moore's law as we conceived it), and there were supposed to be two Cell processors in the original design and no dedicated GPU. When that fell through and they could only crank them up to 3.2GHz, they made a deal with Nvidia at the last second to create a core with a new bus interconnect to attach to the Cell. And that chip was very close to the state of the art from Nvidia. Most of its problems centered around duct-taping a discrete PC GPU into the console with time running out on the clock, and I don't think that anyone else would have been able to deliver a better solution under those circumstances.

Like I said, Nvidia is a scummy company in a lot of respects, but I don't think the Xbox/PS3 issues are necessarily their fault.

I would agree that general Nvidia troubles don't particularly stand out in the PS3 hardware design clusterfuck, but Microsoft's and Nvidia's falling out really is indicative of terrible relationship management even if it was from both sides. Again my point is that Nvidia just doesn't understand how to work together, how not to view everything as a zero sum game. That doesn't mean that Nvidia is the only bad actor in these situations, but Nvidia really does end up in these situations quite a lot.

That is just not true. And I am with parents I dont normally come and defend Nvidia.

If working with everyone means stepping back and relenting in every possible way, then Nvidia would not be profitable. I am not sure why Microsoft felt they were entitled to concessions from Nvidia. And Nvidia just said no. It was that simple.

Nvidia wants to protect their business interests, and that is what business is all about. And yet everyone on the internet seems to think companies should do open source, or throw resources into it, etc.

I've already mentioned in the top parent comment that Nvidia is perfectly entitled to behave this way. They clearly know how to run a successful business in this way. I have bought Nvidia chips in the past and will continue to do so in the future when they are the best option for my use case - I don't really try to personify companies or products like this.

I am just pointing out that Nvidia's evident opinion on how to run a business (their corporate culture) is not in line with cultivating an open ecosystem like ARM is running. And the cultivation of this ecosystem is ARM's key to success here. Nvidia is entitled to run their business how they want, but I'm very much hoping that that way of working does not translate to how they will run ARM.

People everywhere in this thread are having huge difficulty separating the point "Nvidia's way of doing business does not match ARM's" from "I have personal beef with Nvidia's way of doing business". I'm trying to make the former argument.

> That is just not true.

Out of curiosity - what isn't true here? Am I missing facts, or are you expressing disagreement with my reading of the business situation? If the latter is based on some understanding I have some personal beef with Nvidia, then please reconsider.

On the plus side it will probably really give RISC-V (and other platforms) a boost for the risk averse businesses.

The ARM company is not just about the instruction set architecture. The ISA wouldn't be interesting at all if no good processors were built with it [0]. For RISC-V to succeed, it requires a company that builds some good processor designs for it - for smartwatches, smartphones, tablets, laptops, desktops - and licenses them to others. That company (one or multiple) does not (yet) exist, and is not easy to build.

[0]: Which is exactly why SPARC and (with one exception) Power are dying, and why RISC-V is yet to deliver. Nobody (bar IBM's POWER line) is building good processors with those ISAs that make it worth the effort to use them. Nothing to do with the ISA - you just need chips people are interested in using.

Yep. It's difficult to build a community -- you need to have enough mass to draw further interest in. From a business view you end up with the question of "Why bother with RISC-V when ARM is doing what we need and has enough critical mass to keep things going forward?"

About the only thing that could force that to change would be another company buying up ARM and changing the licensing mechanisms (e.g. pricing, or even removing some license options) going forward... or just wrecking the product utterly.

I do think RISC-V has an opportunity here, but only if ARM sells out to NV and NV screws this up as hard as they're likely to in that situation.

> For RISC-V to succeed, it requires a company that builds some good processor designs for it - for smartwatches, smartphones, tablets, laptops desktops - and licenses that to others

The way I see it is that this may actually generate incentive for someone to do that. One of the reasons that that isn't happening yet is because there's no real need with ARM vendors supplying and no real chance with ARM vendors as competition. This could, in theory, clear the way.

This is assuming it would have to be a new company, rather than an existing company like Qualcomm or AMD which could produce a processor with a different ISA if nVidia/ARM became unreasonable to deal with.

This is particularly true for Android because basically the entire thing is written in portable languages and the apps even run on a bytecode VM already, so switching to another architecture or even supporting multiple architectures at the same time wouldn't be that hard.
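To make that concrete, here's a minimal sketch of what "supporting another architecture" looks like on the app side for an Android app with native code, using the standard Android Gradle `abiFilters` NDK option (the "riscv64" entry is hypothetical; pure-Java/Kotlin apps wouldn't even need this):

```groovy
// build.gradle sketch: an app with native code opts into extra
// architectures by listing their ABIs; the rest of the app,
// running on the bytecode VM, is unaffected.
android {
    defaultConfig {
        ndk {
            // Existing ABIs, plus a hypothetical RISC-V one
            abiFilters "armeabi-v7a", "arm64-v8a", "x86_64" // , "riscv64"
        }
    }
}
```

The resulting APK just bundles one native-library directory per ABI, which is why multi-architecture support is mostly a packaging concern rather than a porting effort.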

> This is particularly true for Android because basically the entire thing is written in portable languages and the apps even run on a bytecode VM already, so switching to another architecture or even supporting multiple architectures at the same time wouldn't be that hard.

Google could easily afford to design their own RISC-V CPUs and port Android to it, if they thought it was in their strategic interests to do so.

I think it really depends on how nVidia-owned Arm behaves. If it behaves the same as Softbank-owned Arm, I don't think Google would bother. If it starts to behave differently, in a way which upsets the Android ecosystem, Google might do something like this. (I imagine they'll give it some time to see whether Arm's behaviour changes post-acquisition.)

Given the geopolitical/geoeconomic struggle between the US and China, I wouldn't be surprised if China pivots to the RISC-V arch.

And given that Nvidia is a US company, that makes them quite toxic for a Chinese company to source from.

China is pushing hard both on alternative ISAs like RISC-V and for control of ARM IP.


I’m not saying your claim is wrong but it’s not clear to me how that article backs the claim that China is pushing hard on RISC-V. It seems more like the old CEO of Arm China was doing no good very bad things, most likely for his own benefit.

“Arm revealed that an investigation had uncovered undisclosed conflicts of interest as well as violations of employee rules.”

That was intended to support the second part of my comment (although I can definitely see how that wasn’t clear - sorry about that). A lot of Chinese companies have come out with new RISC-V designs and it’s clear they’re prioritizing making it a possible alternative platform in the case that Arm can no longer be used. The RISC-V foundation also decided to move from the US to Switzerland to avoid the exact sorts of restrictions that have been placed on Arm.


>Arm China CEO Allen Wu has refused to step down after being dismissed by Arm's board

The story you posted is incredible. Does this happen anywhere else in the world?

This can happen anywhere in the world. In order to remove a CEO, you have to follow the proper process. Allen Wu claims that the process wasn't followed and, therefore, his dismissal was illegal and void in effect.

It's not like he was dismissed and he just didn't leave his office. He's challenging the legality of his dismissal.

Ah, but what will it do for the RISC-adverse?

a•verse /əˈvɜrs/ (adj.): Having a strong feeling of opposition to; unwilling: Not averse to spending the night here.

a•verse (ə vûrs′), (adj.): Having a strong feeling of opposition, antipathy, repugnance, etc.; opposed: He is not averse to having a drink now and then.

source: https://www.wordreference.com/definition/averse

Fine, I mistyped. I don't precisely need or desire additional spelling classes from a mysterious online person.

No, I'm sorry, I didn't get your pun and thought you were alluding to the spelling in GP's post.

I didn't mean to bother you, I've been pedantic, thanks for pointing it out.

Well, besides ARM, there is also the MIPS core/ISA, which IC makers can also license and embed into their products in the same fashion as ARM.

They can also design their own ISA. An ISA is a document, they can write their own. Now, can you think of reasons why they wouldn't want to do it that don't also apply to MIPS?

I was thinking the same thing. Nvidia could start charging obscene fees for ARM licenses, but then RISC-V is poised to receive more investment and become increasingly mainstream. Not such a bad thing. We can switch architectures. The toolchain is maturing. Imagine if more companies started making high-performance RISC-V chips?

I was going to say MIPS but yeah.

Don't forget the GeForce Partner Program they pushed a while back which required partners to make their gaming brands exclusive to GeForce products. They ended up cancelling it and I bet the reason was due to all the anti-competitive violations the FTC would have slapped on them.

While Nvidia has a vastly superior product to AMD and Intel, they have less than 20% of the GPU market. Intel has held greater than 50% market share since 2010.

It is very hard to make an anti-competition case against someone who is consistently 2nd and 3rd in the market.

The “GPU market” is not an ideal market (e.g. with total fungibility of goods, inelastic demand, etc.) - Intel has most of the share of GPUs sold because it's impossible for NVIDIA - or anyone else - to compete with Intel in the space where Intel supplies without competition: CPU-integrated GPUs.

On a related note: with PCs now definitely heading towards ARM, this is a sensible move by NVIDIA: they could now sell GeForce-integrated ARM chips for future Windows and Linux boxes - and then they would be the ones with the dominant marketshare.

If only as a point of balance: Intel integrated GPUs are the safe choice on Linux. If they were not, there would be space for competitors - entry-level GPUs, AMD iGPUs.

The anti-competitiveness issue isn't about the market for all GPUs, it's about gaming hardware. The sole GPU providers for that space are AMD and Nvidia.

Nvidia's GPP would require manufacturers such as ASUS, Gigabyte, MSI, HP, Dell, etc. to have their gaming brands only use Geforce GPUs. So all the well-known gaming brands such as Alienware, Voodoo, ROG, Aorus and Omen would only be allowed to have Geforce. nVidia already has aggressive marketing plastering their brand across every esports competition, which is fair game, but the GPP would be a contractual obligation to not use AMD products.

Which is a perfectly reasonable and legal thing to do, even if you don't personally like it.

Nike is the exclusive clothing brand of every NBA team. American Airlines is the exclusive airline of the Dallas Cowboys. UFC fighters can only wear Reebok apparel the entire week leading up to a fight.

Heck, I worked for a company that signed an exclusive deal to only use a single server vendor.

> Heck, I worked for a company that signed an exclusive deal to only use a single server vendor.

How did that work out? Did your company secure a good rate - and/or did the vendor become complacent once they realised they didn’t have to compete anymore? Did the contract require minimum levels of improvement in server reliability and performance with each future product generation?

It's a tough spot to be in. AMD and Intel split the largest chunk of the cake because their products are cheap, so they make money in volume.

The only reason nvidia has 20% of the GPU market at all is because their products are better, but without volume, there is very little separating you from losing the market.

If NVIDIA slips behind AMD and Intel perf-wise during one generation, the competition will have cheaper and better products, so it's pretty much game over.

>If NVIDIA slips behind AMD and Intel perf-wise during one generation

It has happened many times.

In both graphics and compute ?

It's ok for nvidia to release an architecture that does compute very well but barely improves graphics, and vice versa.

But I don't recall any compute generation where there was a better product from the competition.

>While Nvidia has a vastly superior product to AMD

Which product? It can't possibly be their GPUs you mean because that would be hilariously wrong. That is like saying a Lamborghini is a better car than a VW because it has a higher top speed.

To me - and to many, many buyers - AMD is the superior product. To most, Intel has the best product by far (business laptops, Chromebooks, etc.).

I'm curious what ways you think AMD GPUs are better? I can think of dozens of ways NVIDIA GPUs are better, struggling to think of any for AMD.

AMD has the best iGPUs available, which (unlike Intel's) are actually fast enough to play a lot of games. They're also significantly more power efficient as a result of 7nm. For any use where this is fast enough -- and this is a huge percentage of the PC market -- nVidia has no answer to this.

AMD is the only option for a performant GPU with reasonable open source drivers. Intel has the drivers but they don't currently offer discrete GPUs at all. nVidia doesn't have the drivers.

AMD makes it a lot easier to do GPU virtualization.

AMD GPUs are used in basically all modern game consoles, so games that run on both are often better optimized for them.

They also have the best price/performance in the ~$300 range, which is the sweet spot for discrete GPUs.

Performance per dollar, and usually more energy efficient. NVIDIA coasts on their proprietary extensions imo.

Maybe it just dawned on them that calling upon the wrath of an army of angry gamers isn't such a good idea.

But it still killed Kaby Lake G. What could have been... sigh

Also, it must be hard to trust them as the creator of the designs you use if they also compete with you directly in the market.

At this point, do Apple and Qualcomm even depend on ARM's new designs? In the same way that AMD branched from Intel but are still mostly compatible, can the same thing happen in mobile chipsets?

Apple very much does not and is reported to have a perpetual license to the ISA. Apple will most likely be fine even in the worst case scenario.

Qualcomm however has been shipping rebranded/tweaked (which is unclear) standard ARM CPU core designs since 2017. They very much depend on ARM doing a lot of the heavy lifting.

A few years ago 15 companies, among them Qualcomm, Apple, Intel, Nvidia, Microsoft, Samsung, Huawei and more, had an architecture license which is perpetual, so that probably puts them all on safe ground. I'm sure that the specific licensing terms can vary, but I doubt someone like Qualcomm didn't take any precautions for exactly such an eventuality, given how much they rely on being able to ship new ARM based SoCs. They probably gave up on designing custom cores because of the effort/costs involved and the fact that 99% of the Android market doesn't really require it. But they'd still have to ship ARM cores, standard or not.

Apple is probably the safest of the bunch given how they helped build ARM.

ARM announced it would cut ties with Huawei after the US ban but reconsidered the decision less than half a year later so I assume that the architecture license is either usually iron clad or simply too valuable to both sides to give up.

> more had an architecture license which is perpetual

An architectural license is not necessarily perpetual.

> Drew declined to comment on whether the deal was multi-generational


So they may have license for ARMv8 but not future ISAs like ARMv9.

Perpetual means they can indefinitely deliver as many designs as they want using the ISA they licensed. Not "perpetual for everything ARM present and future". It's similar to the perpetual multi-use license except the holder has more freedom with the modifications and customized designs. All other licenses are time limited.

And again, the terms of the license may vary. I have the impression that Apple has a far more permissive license than anyone else out there for example.

Ah, I was not aware of this agreement.

Qualcomm has shown in the past to be able to build great custom ARM CPUs not based on an ARM standard design. But it seems they decided the investment was not worth it after their custom Kryo design (which was not a complete failure but definitely not better than what ARM was producing at the time). But I think they'll need to go back to their own silicon at some point if this acquisition happens.

For sure Huawei and Samsung (and smaller manufacturers like Rockchip, Mediatek, Allwinner) don't have an impressive track record designing custom CPU IP and definitely not custom GPU IP. These guys should be terribly alarmed if this were to happen.

All the latest Snapdragons use bog standard ARM cores.

I imagine there are some ongoing license fees paid for ARM.

> This is quite concerning honestly. I don't mind ARM being acquired, and I don't mind Nvidia acquiring things.

You should mind both of these things. The more oligopolistic technology is, the worse.

Other than that - fully agree with your concern. As a GPU developer I'm often frustrated with NVIDIA's attitude towards OpenCL, for example.

if so, could this end up as good for RISC-V in the long run?

Definitely in the short run, because of the understandable fear from NVIDIA's competitors to use their (now) technology. Maybe in the mid run if those fears begin to crystallize. Unlikely in the long run, I'd assume NVIDIA would spin ARM off before killing it entirely, buying ARM would be a multi-billion investment.

Indeed. Nvidia shouldn't be allowed to buy it, given their status (not just reputation) as an anti-competitive bully.

But anti-trust is so diluted and toothless these days, that the deal will probably be simply rubber stamped. If they aren't stopping existing anti-competitive behavior, why wouldn't they allow such bullies to gain even more power?

Yeah, I, too am worried about this. I would have rather had a conglomerate of companies with Apple being one buying them and keeping them private. But oh well. Hopefully Nvidia does right by all of ARM’s existing customers.

With this buy Nvidia has GPUs, CPUs, networking, what else do they need to be a vertically integrated shop?

RAM, manufacturing, storage, cooling, power supply, ... depends on how far you want to vertically integrate everything.

Let Nvidia have ARM, they'll run it into the ground and speed up RISC-V adoption.

If all 'closed' companies would support Linux as well as NVIDIA does then I would throw a party. Keep in mind that they don't have to open up their stuff. Instead, they support it to the hilt and as long as I've been using Linux and Nvidia together (2006 or so) they've never let me down.

> Nvidia is the only suitor in concrete discussions with SoftBank, according to the people.

Would Arm stakeholders (i.e. much of the computer industry) prefer an IPO?

In 2017, Softbank's Vision Fund owned 25% of Arm and 4.9% of Nvidia, i.e. these are not historically neutral parties, https://techcrunch.com/2017/08/07/softbank-nvidia-vision-fun...

After WeWork imploded, https://www.bloomberg.com/opinion/articles/2019-10-23/how-do...

> Neumann created a company that destroyed value at a blistering pace and nonetheless extracted a billion dollars for himself. He lit $10 billion of SoftBank’s money on fire and then went back to them and demanded a 10% commission. What an absolute legend.

Is the global industry (cloud, PC, peripheral, mobile, embedded, IoT, wearable, automotive, robotics, broadband, camera/VR/TV, energy, medical, aerospace and military) loss of Arm independence our only societal solution to a failed experiment in real-estate financial engineering?

ARM was public for a long time. In 2016 they were taken private by Softbank via a $32bn acquisition.

ARM's IPO value is under $10B; revenue/profits too small, growth not strong enough.

Why would Arm be valued at $10B publicly and $32B+ privately? Nvidia shareholders would be paying a premium for ... what exactly? Did Softbank overpay for Arm?

Is Arm not profitable as a standalone business? They recently raised some license fees by 4X.

I don’t believe NVidia will pay $30B. But certainly they might believe ARM has value outside its current cash flow and mediocre growth. Like strategically combining technologies.

I’m skeptical that will work, but Son was dumb enough to pay $31B with no strategic value.

> I’m skeptical that will work, but Son was dumb enough to pay $31B with no strategic value.

At the time I thought Son had clever telco synergies in mind, but I gave him far too much credit

ARM is actually an okay choice and $31B probably wasn't even that out of whack.

The problem is that Son needs cash, so he's flogging off everything he can to get it.

>Softbank overpay for Arm

Grossly so. They paid like 45 percent above what the stock was trading at the time.

That's a pretty normal premium for a buyout. You'll never be able to buy a company completely just by using the stock price * number of shares.

The company is currently valued by analysts as high as $40 billion. Most seem to believe it's worth more than the $32bn Softbank paid in 2016.

> Nvidia shareholders would be paying a premium for ... what exactly?

They'd be paying a premium for a path to an all-nvidia datacenter & supercomputer.

Consider HPC applications like Oak Ridge's Frontier supercomputer. They went with an all AMD approach in part due to AMD's CPUs & GPUs being able to talk directly over the high-speed Infinity Fabric bus. Nvidia's HPC GPUs can't really compete with that, since neither Intel nor AMD are exactly in a hurry to help integrate Nvidia GPUs into their CPUs.

This makes ARM potentially uniquely valuable to Nvidia - they can then do custom server CPUs to get that tight CPU & GPU integration for HPC applications.


There is [0] https://en.wikipedia.org/wiki/NVLink

which is supported by [1] https://en.wikipedia.org/wiki/POWER9

those two combined give you [2] https://en.wikipedia.org/wiki/Summit_(supercomputer)

currently the worlds number 2 supercomputer(only very recently dethroned) according to the article.

Installed at Oak Ridge.

So they are already there, just needing some premium POWER?

Can't they make custom Arm server CPUs without buying Arm, as the Amazon/Annapurna team and others have done with their Arm licenses?

Amazon paid 350MM for Annapurna, ~ 1/100th of 32B.

For embedded devices, Nvidia already ship Jetson boards with Arm CPUs and Nvidia GPUs.

Sure, but they'd need to buy or build a CPU design team. Which they'd get as part of buying ARM, the teams that make the Cortex reference designs.

Why would nvidia overpay by 22 billion dollars?

Because the sources for this story are investment bankers desperate for bidders?

An ARM owned and fully controlled by NVIDIA is probably worth more to them than an independent and reasonably neutral ARM who's willing to do business with NVIDIA's competitors. Maybe not $22B more, though.

That would imply the purchase is for the purpose of actions that could cause regulators in multiple countries to block the purchase.

There’s an opportunity for arbitrage here then...

Not really. You can't exactly cut in and buy arm from SoftBank for 16 and flip it to nvidia for 32. What's your pitch to SoftBank?

Doing an IPO would mean they will use the money raised meaningfully. Shareholders probably see more upside with Nvidia integration. I’m not really sure what ARM need a bunch of money for in an IPO, they are pretty established.

> I’m not really sure what ARM need a bunch of money for

To buy themselves back from owner Softbank, who can return money to investor Saudi Arabia? https://www.cnbc.com/2018/10/23/softbank-faces-decision-on-w...

If that is the main idea of your S1, you're not gonna get much interest unless you don't raise much.

The goal of independence is typically to execute on a vision.

According to some comments in this thread, the alternative is the slow destruction of the neutral Arm ecosystem. While some new baseline could be established in a few years, many Arm customers could face a material disruption in their supply chain.

With the US Fed supporting public markets, including corporate bond purchases of companies that include automakers with a supply chain dependent on Arm, there is no shortage of entities who have a vested interest in Arm's success.

If existing Arm management can't write a compelling S1 in the era of IoT, satellites, robots, edge compute, power-efficient clouds, self-driving cars and Arm-powered Apple computers, watches, and glasses, there will be no shortage of applicants.

IPO'ing a business that only makes revenue from licensing arrangements is a recipe for disaster

ARM was publicly traded between 1998 and 2016. In that period its value multiplied about 25x, not counting the premium of the acquisition. Could you elaborate, please? Where do you see the disaster? (Honest question).

Because of Apple. Not to mention their last 40% price increase was because of the Vision fund's nonsense.

Publicly traded companies that rely on income from "licensing" peak in revenue then stagnate because innovation becomes harder to come by.

Apple is a small, although significant, part of ARM's total market share. And that 25x is, as I said, without taking into account the premium. If you do, and there are good arguments to do so, the valuation growth is 35x, in almost 20 years.

Regarding innovation, ARM's been at it since 1990. I'm sure it's not the same now as it was 30 years ago, but we're well past the point where one can reasonably fear it to be an unsustainable business. Last time I heard numbers, they were talking about more than 50 billion devices shipped with ARM IP in them. That is a massive market.

You don't answer my question. Why wouldn't licensing businesses work as publicly traded companies? What's the fundamental difference, especially in an increasingly fabless market, between a company licensing IP to other companies and a company selling productized IP to consumers?

How so? It seems like lots of businesses run successfully on that model for indefinite periods.

If a business is viable as a private company (as ARM certainly is), why wouldn't they be viable as a public company?

This is troubling.

At the moment ARM lives or dies by the success of the ecosystem as a whole.

When it's owned by a customer this may no longer be the case and there are huge potential conflicts of interest. For example, would an Nvidia-owned ARM offer a new license to a firm that would be a significant competitor to an existing Nvidia product (e.g. Tegra)? Will Nvidia hinder the development efforts of other competitors? Will Nvidia give itself access to new designs first? How will it maintain appropriate barriers to the flow of information about competitors' new designs to its own design teams?

I can see this getting very significant regulatory scrutiny and rightly so.

Once ARM is American it will escalate trade war with China. There's no way China can quickly create any competitive platform.

Not sure ARM UK is really in control of ARM China anyway.


Does ARM China actually do any engineering or is it just licensing?

Not aware of any design but probably working with local firms on implementation of ARM designs. Happy to be corrected.

China is already aware and aiming to supply 70% of their own demand for chips. [0]. Thanks to that, we might also see a rise in RISCV chips [1] which could even get the attention of other states besides China and India.

I think Trump started something unintentionally that might put other countries in a better position to deal with the American semiconductor hegemony 5-10 years from now.

IMO, Nvidia should be allowed to buy ARM just for it to get bad enough for people to want to buy non NVIDIA products. For years NVIDIA has had shitty business practices, but I bet most people on HN (and the rest of the world) don't give 2 shits about competition and market leadership. They just buy NVIDIA because it's the standard.

Things have to get worse before they get better. It's really the only way humans and the public seem to be able to learn.

0: https://www.bbc.com/news/business-50287485

1: https://venturebeat.com/2019/12/11/risc-v-grows-globally-as-...

It's Nvidia circling back for a kill on Intel. They didn't go head to head, even though they wanted to. Instead they built a completely different space for themselves within data centers, got a foothold, expanded (Mellanox), and are now going for the missing piece, which will also allow them to expand the battleground with Intel outside of data centers. Interesting times, and Nvidia, so far, has shown they know their strategic moves.

NVidia is awful when it comes to FLOSS support. If they get ARM, at least it may accelerate RISC V developments.

Nvidia is making a play for the data center business not the desktop business. Especially with its Mellanox acquisition, Nvidia wants to build high performance data centers (they have GPUs and networking but need CPUs since most computing is general purpose). I doubt that they'll succeed though since they don't have the software layer to provide the 'private cloud' that they are looking to build.

They basically want to run a cloud built around feeding enormous amounts of data to their GPUs, for AI operations and the like. This is also why they bought SwiftStack in March: to manage that storage.

Source: job interview

ARM isn't great, I don't see much difference in GPU documentation between the two.

This is true and fair. However ARM does a _lot_ of ecosystem open source support for their ISAs.

ARM makes GPUs?

I’m holding a reference copy of aarch64 in my hands and I’ve written half an assembler from it.

Granted SoCs often are just as bad as nvidia, but that’s not ARM’s fault or problem, really.

> ARM makes GPUs?

They do [1], but don't provide documentation for them.

[1] https://en.wikipedia.org/wiki/Mali_(GPU)

Ah, I never made the connection before.

NXP (formerly Freescale) generally has good docs and tools for their iMX series SoCs. For some of them you have to sign up to get the reference manuals though.

you do realize that Nvidia is a major backer of RISC-V and already uses it in its GPUs, Turing and newer?

Yeah but they have no avenue to control it. Not like ARM and its ISA licensing.

RISC-V is inherently a customizable ISA though, whereas ARM implementations are very specific about what they require to be called an "ARM processor". This wouldn't change from this acquisition.

Would they tone it down if they acquired ARM?

no. they're isolated for a reason, with the RISC-V processor being used as the controller to manage the behavior of the other parts of the chip. beyond just licensing, ARM is expensive because it's required to implement a lot. With that chip being RISC-V they can make it as minimal and perfectly tuned as possible, so it's slow and cheap when it can afford to be, and fast when it needs to be.

That isn't the same at all. Canonical being a major backer of Linux is significant, but Linux is sufficiently open and diversified that it does not stand or fall by one party, albeit there are some that have more influence than others.

The other part of it is Softbank will be rescued from their idiotic and incredibly wasteful investments. They bought so much crap and had too much money and they still managed to buy a few valuable things, probably by accident. It's slightly infuriating. At least I'm happy to see amd succeeding by actual intelligent engineering.

Wasn't Softbank involved with Nvidia as well? That's probably the connection.

So basically, it will be two companies which own both the CPU and GPU stack (AMD/ATI and Nvidia/ARM) and intel will just sort of end up at the wayside. Not really what I expected.

FWIW, Intel has been pushing towards launching a dedicated discrete GPU platform. If/when they recover, this would leave us with three distinct CPU/GPU companies.

If I was Intel, I would be going straight for the TPU market. GPUs carry a bunch of baggage from the G = Graphics legacy. The real money maker is not likely to be gamers (although it has been a healthy enough market). The future of those vector processing monsters is going to be ML (and maybe crypto). This is the difference between attempting to leapfrog and trying to catch up.

> The future of those vector processing monsters is going to be ML (and maybe crypto)

That's a heavy bet on ML and crypto(-currency? -graphy?). Has ML, so far, really made any industry-changing inroads in any industry? I'm not entirely discounting the value of ML or crypto, just questioning the premature hype train that exists in tech circles (especially HN).

>That's a heavy bet on ML and crypto

Well, yes that is the point. My theory is that the gaming market for GPUs is well understood. I don't think there are any lurking surprises on the number of new gamers buying high-end PCs (or mobile devices with hefty graphics capabilities) in the foreseeable future.

However, if one or more of the multitude of new start-ups entering the ML and crypto (-currency) space end up being the next Amazon/Google/Facebook then that would be both unforeseeable and unbelievably transformative. Maybe it won't happen (that is the risk) but my intuition suggests something is going to come out of that work.

I mean, it didn't work out for Sony when they threw a bunch of SPUs in the PS3. They went back to a traditional design for their next two consoles. So not every risk pans out!

A ton. Look at the nearest device around you, chances are it runs Siri, Alexa, Cortana, or Google voice assistant. This will only grow.

Same with machine vision. It's going to be everywhere — not just self-driving trucks (which, unlike cars, are going to be big soon), but also security devices, warehouse automation, etc.

All this is normally run on vector / tensor processors, both in huge datacenters and on local beefy devices (where a stock built-in GPU alongside ARM cores is not cutting it).

This is a growing market with a lot of potential.

Licensing CUDA could be quite a hassle, though. OpenCL is open but less widely used.

> Has ML, so far, really made any industry-changing inroads in any industry?

Does the tech industry not count, or are you only considering industries that are typically slower moving?

The tech industry claims it applies machine learning all over the place, but I doubt it actually moves the needle much.

> Has ML, so far, really made any industry-changing inroads in any industry?

It is (IIRC) a pretty fundamental part of self driving tech. I honestly think this is what drives a lot of Nvidia's valuation.

nvidia's largest revenue driver, gaming, made 1.4B dollars last year (up 56% YoY). nvidia's second largest, "data center" (AI) made 968M (up 43% YoY). Other revenue was 661M. Up to you if nvidia's second largest revenue center, of nearly a billion/year is "industry changing"

> crypto(-currency? -graphy?)

TPUs are massively parallel Float16 engines - not really applicable to anything outside of ML.

> The future of those vector processing monsters is going to be ML (and maybe crypto).

Hopefully some of those cryptocurrencies (until they get proof-of-stake fully worked out) move to memory-hard proof-of-work using Curve25519, Ring Learning With Errors (New Hope), and ChaCha20-Poly1305, so cryptocurrency accelerators can pull double-duty as quantum-resistant TLS accelerators.

I'm not necessarily meaning dedicated instructions, but things like vectorized add, xor, and shift/rotate instructions, at least 53 bit x 53 bit -> 106 bit integer multiplies (more likely 64 x 64 -> 128), and other somewhat generic operations that tend to be useful in modern cryptography.

This is what they tried to do with Nervana and are trying again to do with Habana.

The one thing I don't get is, there are a lot of machines out there that would gain a lot from specialized search hardware (think about Prolog acceleration engines, but lower level). For a start, every database server (SQL or NoSQL) would benefit.

It is also hardware that is similar to ML acceleration: it needs better integer and boolean (algebra, not branching) support, and has a stronger focus on memory access (which ML acceleration also needs, but gains less from). So how come nobody even speaks about this?

I don't understand how database servers would benefit. You would have to add the search hardware directly to DRAM for any meaningful gains.

You would need large memory bandwidth and a good set of cache pre-population heuristics (putting it directly on the memory is a way to get the bandwidth).

ML would benefit from both too, as would highly complex graphics and physics simulation. The cache pre-population is probably at odds with low latency graphics.

They have. Their first purchase (Nervana) hasn’t worked out for them so they are now working through their purchase of the more conventional Habana.

Intel did go for the TPU market. It was called the Nervana chip, and they cancelled it.

From what we know so far (https://www.tomshardware.com/news/intel-xe-graphics-all-we-k...), it will be a while before Intel competes in the GPU space. The first offering of Xe graphics (still not out yet) will probably not be competitive with cards that AMD and Nvidia released over a year ago.

If Intel can survive their current CPU manufacturing issues, manage to innovate on their design again, and manage to improve the Xe design in a couple generations, they might be in a good position in several years. I (as a layman) give them a 50/50 shot at recovering or just abandoning the desktop CPU and GPU market.

> The first offering of Xe graphics (still not out yet) will probably not be competitive with cards that AMD and Nvidia released over a year ago.

Being only a year behind the market leaders with your first product actually seems pretty impressive to me. Especially if that's at the (unreasonably priced) top of the line, and they have something competitive in the higher-volume but less sexy down-market segments.

Intel's i860 was released in 1989, that evolved into the i740 in 1998, and later on into the KNC, KNLs, Xeon Phis, etc.

The >= KNC products have all been "one generation behind" the competition. When the Intel Xe is released, Intel will have been trying to enter this market for about 30 years.

This market is more important now than ever before. I hope that they keep pushing and do not axe the Intel Xe after the first couple of generations only to realize 10 years later that they want to try again.

I wonder, why Intel abandoned the mobile market? I think only a single smartphone was released using an Intel CPU.

They had a bad product that they had to sell at below cost to get market share. The x86 tax is pretty small for a big out of order desktop core but it's much more real at Atom's scale.

I think radio tech is an issue in the mobile space. It is heavily patented and the big players who can integrate this technology into their chips can offer much more energy efficient solutions.

Intel had that in-house too... it just also kinda sucked...

That is the thing with a lot of these side projects Intel is always working on. It would be great if they actually delivered good products, but they often spend billions acquiring these companies and developing these products only to turn out one or two broken products and then dump the whole project.

I think this time is different with Xe, but I can't blame anyone for looking at the past history and being dubious that Intel is in it for the long haul.

I had one model from Asus, battery life was terrible and if you were doing anything remotely intensive it doubled up as a nice hand warmer...

I know that these probably are solvable problems, but they left a pretty bad taste...

The first TAG Heuer android wear device ran an atom chip. Fun times with the power optimization, that was.

Not enough profit for their liking was the reason.

Not their first try on that front, and Intel push for new product branches has not exactly gone well the past decade or so.

I wouldn't count Intel out yet. Sure, things don't look that great, but same was true of AMD/ATI until recently...

> I wouldn't count Intel out yet. Sure, things don't look that great

I always get a kick out of the sentiment toward Intel on HN.

Intel is booming financially. Things have never been better for them in that respect. They have every opportunity to fix their mess.

Intel has eight times (!) the operating income of Nvidia, with a smaller market cap.

Intel is one of the world's most profitable corporations. $26 billion in operating income the last four quarters. Their margins are extreme. Their sales are at an all-time high. Their latest quarterly report was stellar.

In just 2 1/2 years Intel has added a business the size of Nvidia and AMD combined.

If they can't utilize their current profit-printing position to recover, then they certainly deserve their tombstone. Nobody has ever had an easier opportunity to find their footing.

This sounds very similar to the situation Nokia was in around 2007:

- Nokia was booming financially. Things had never been better for them in that respect.

- Nokia had XX times (!) the operating income of Apple's mobile business.

- Nokia was one of the world's most profitable corporations.

And, yet, the writing was on the wall. Nokia was doomed once the smartphone era came. That's where Intel is today: AMD crushes them on the high-end general purpose CPUs. ARM crushes them on I/O performance and the low-end for general purpose CPUs. GPUs crush Intel in the middle, for special-purpose (mainly single-precision floating point) computing.

Right now, large portions of new computer sales, and an even larger portion of the high-margin cpu sales, come from cloud computing. AMD and ARM are stealing huge market share from Intel on that front. I don't see that momentum changing any time soon.

There's a reason that Intel has 8x the operating income of NVidia while having a smaller market cap. It's not because of where they are currently--it's where they are going. Stock market valuations are forward-looking, and the future doesn't look so bright for Intel.

> ARM crushes them on I/O performance

This sounds difficult to believe. Do you know a benchmark that shows this?

Crush might be a strong word, but here's some benchmarks from AnandTech that show ARM (AWS Graviton2) beating Intel and AMD chips in memory bandwidth.


The thing is, that's not "ARM" it's "ARM as implemented by AWS", they get to choose how many memory channels to add etc.

They have created a new type of memory that is 10-100x more performant than flash, and it's already popular in high-performance databases: 3D XPoint.

That could be a huge market with no competitors in sight.

Nokia would have been fine - just switch the flagship from Symbian to Android, continue feature phones and Maemo. After all these years, and under a different manufacturer, the brand is still alive.

Stock market valuations are forward-looking, but they aren't always predicting the right future.

Personally I won't be betting on or against Intel - it wouldn't shock me if they follow the Nokia route, it also wouldn't shock me if they come out with a new generation that puts them back on top within the next few years.

> Intel is booming financially. Things have never been better for them in that respect. They have every opportunity to fix their mess.

So was RIM in 2010. Profits are a trailing indicator. The PC market is really small compared to the mobile market and declining. While Apple only has 10% of the overall market, it has a much higher percentage of high end personal computers and Intel is about to lose Apple as a customer.

PCs also are having longer refresh cycles. What does “recovery” look like? PC sales going up? That’s not going to happen.

They still have the server market, and while that is probably growing, Amazon is pushing its own ARM processors hard, and MS and Google can’t be too far behind.

Intel has a habit of snatching defeat from the jaws of victory.

Their decades of more or less monopoly status has made them complacent, their revenue is high still because of this inertia built into the market that simply will not vanish.

The datacenter for example is still dominated by Xeon, not because people like Xeon over Epyc but because there is not an easy migration path between the 2 platforms. If I was building a whole new server farm with all new VMs I would choose Epyc all day... but if I need to upgrade hosts in an existing farm with no downtime, well, that will need to be Xeon then...

When it comes to desktops/laptops though, Lenovo's AMD line is attractive and suffers no such problems

I find it humorous too, but I think it's easily explained: we get a constant stream of stories of the plucky underdogs with their fancy engineering achievements. HN loves both industry disruption and engineering achievements, so it's a sort of self-reinforcing reality distortion field. See Tesla for a very similar sort of story.

The innovators and underdogs are always great to see, and they fuel our collective imagination, so it's no surprise that they dominate the HN front page. Of course, that mind-share dominance is in stark contrast to the well-entrenched money-printing machines they're trying to disrupt, who are happy to keep dominating their respective industries year after year instead.

> Their sales are at an all-time high. Their latest quarterly report was stellar.

Past performance. In spite of their earnings their stock plummeted on the 7nm news. It’s not just HN that is bearish.

This is very true. They got this profitable though by gutting every long term investment in new forays: they sold off their ARM business, their modem business, they never bothered to make a serious GPU or mobile chip, etc.

The margins on those big Xeon chips have been so good that they ditched everything else and painted themselves into a corner, sitting on the sidelines for the past 20 years as new markets emerged.

Right, that's kinda my point, maybe not strongly enough stated. The "don't look that great" is "they'll probably have to buy fab resources from someone else" to keep up perf-wise (like their competitors, who are fabless), not "they're going bankrupt anytime soon".

Is this comparable to that situation? AMD & Intel were building chips basically to the same standard. ARM/Nvidia vs. Intel would be a lot more asymmetric.

Intel has something like 60% of the PC GPU market because of their built-in graphics.

Yes, but not the high-end GPU market, which is what Intel has been trying to break into for quite some time now.

From the outside it sounds more like Intel has been infighting about this for 20+ years...

The original i740 was theoretically a capable card, although fairly hampered by being forced to use main memory for textures. Intel eventually backed down from the graphics market back then, and instead continued to use the 740 as a basis for the integrated graphics in the i810/815 chipsets.

But, as GPUs became closer to what we saw as real GPUs, Intel continued to press on with the idea that keeping things done in the CPU was better for them (i.e. encouraging upgrades to higher end CPUs vs selling more lower margin graphics cards.)

You saw a similar pattern with the 845/855/865: Shaders were all done in software (Hey, it finally almost justified Netburst, right? ;)

And this pattern seems to continue with various forms of infighting between groups up to this day.

The other consistent problem they have had is driver compatibility/capability.

Also, the i740 drivers were really bad. I had one, and I remember all kinds of bugs and graphical glitches in games that worked fine on a GeForce 2 MX that I got later.

Heh... ever since the days of the Chips and Technologies acquisition, if not sooner, they have vacillated between wanting to be in the graphics business and not being in the graphics business...

ARM is not in the high end GPU market either.

No, but Nvidia is. So if Nvidia bought ARM, they could be a CPU and GPU maker.

Nvidia has had modern ARM chips (the Tegra series among others) for at least a decade now. Any company can license ARM tech.

My impression was they were intentionally not competing with Nvidia at one point?

Aren't AMD's APUs a better product for that market, and won't they replace Intel sooner or later?

One difference is that AMD only offers integrated graphics with weak CPUs, while Intel, unless something's changed recently, offers them on all their consumer models.

Intel has started offering "F" SKU processors without integrated graphics only very recently, with 9th and 10th gen Core CPUs.

But yes, otherwise, nearly every Intel CPU has integrated graphics, whereas only a few select AMD CPUs have integrated graphics (and AMD brands them as APUs, not CPUs).

I'd be okay with only a few select CPUs, if even one of them was a reasonably powerful one. Instead, it's only the bottom of the barrel CPUs performance-wise.

It seems that is changing somewhat with the 4000-series APUs, but guess what, those are only going to be sold to OEMs, not individuals.

It's all rather frustrating, since I'm still on an i7-4770k and wouldn't mind an upgrade.

AMD bet on APUs more than a decade ago, hoping to grab the market with better GPUs than Intel, and have been offering a better product about as long.

But the reality is that the vast majority of business desktops and laptops don't need anything better than what Intel offers.

If AMD gains share in that segment (and it looks like they are), it will IMO be because of finally having a better CPU, not because of a better GPU.

Do you not know Intel is close to releasing their own GPUs, or do you just think they will fail at it?

Even with the acquisition of ARM, I don't see Nvidia any better off than Intel at this moment as far as CPU/GPU stack goes. Frankly, I would think AMD would be the one to end up by the wayside since they still are weak on the software side.

I think there's a good chance they'll fail. This is true of any new venture, so it's a bit lame to say, but there are reasons:

Right now I have a fairly decent GPU in my Macbook which I've hardly used. Very little supports it, because it's not nVidia. I can't use it for AI training, for example. Sure, it might work ok for some games, but Macbooks aren't really for gaming, and nVidia has captured that market nicely anyway.

Things can change; maybe Intel's software stack will be incredible. I don't know. But they have quite a hill to climb before they reach that summit.

There's ROCm[1], though. It's just that almost every ML platform blindly bent to NVIDIA's vendor lock-in. CUDA is a disaster, like DirectX was back in the day. One day it will go away, hopefully soon.

[1] https://github.com/RadeonOpenCompute/ROCm

Intel has a better record than AMD of providing software, contributing to OSS, and supporting developers in supporting its hardware, so they could possibly do better than Radeon if they can deliver great hardware.

>Do you not know Intel is close to releasing their own GPUs, or do you just think they will fail at it?

Given this is about their third attempt at releasing a high performance gpu, I think skepticism is warranted until they’re actually selling something to the general public.

They will fail. If they release anything less than a card as fast as a standard nvidia card and with as good driver support, they have absolutely no chance.

nvidia has a huge market, people buy their stuff because it's fast and there's software support for everything from AAA games to scientific/engineering modelling to ML.

intel took far, far too long to come up with a viable graphics ecosystem to ever be successful.

Most GPU sales are for $200-$400 cards. AMD has always optimized for that instead of having the fastest card outright.

Doesn’t intel ship the most GPUs? Just not discrete. Their iGPUs are in so many low end systems compared to any systems with discrete graphics cards

Maybe I'm jaded, but my current hope is that they will fail at a discrete GPU, then dump that tech into making their onboard graphics much better and use that as a selling point for the CPUs, thus helping everyone in the long run.

There's a huge market for mobile devices that aren't Apple and are fast. Perhaps this would offer better SoC prospects than whatever slop Qualcomm is dishing out.

Where is this “huge” non Apple mobile market? The mobile market outside of Apple is a commoditized race to the bottom. The average selling price of an Android phone is around $270.


Hardware always gets cheaper and better. Why do you think Apple is so aggressively pushing services?

And yet Apple has still been charging a premium over competitors for over 40 years...

Apple also fully controls the hardware and software stack for both CPU and GPU.

While ARM can be great, I don’t think anyone is entirely writing off x86. Intel owns a lot of fab. And while I don’t get as hyped on it as other people, Intel going hard into RISCV could change the game for ARM (if it was handled in non-typical-Intel way)

I'm not writing off x86, but I think AMD is going to define the future of x86 (like they did before), instead of Intel.

> I don’t think anyone is entirely writing off x86.

Apple seems to be.

That’s fair.

I was about to write how that’s a special case, but then I remembered that Microsoft is also making their own chips for Surface products.

But Intel has already threatened MS with a lawsuit if it tries to emulate x86 for its ARM products. Meanwhile, when Apple switched from PowerPC to x86, part of the agreement was a patent cross-license, which gave Intel access to the multimedia extensions of the PowerPC (AltiVec?). If this sharing went both ways, then Apple might be able to provide x86 translation on their SoC. NOTE: there is no evidence of this anywhere other than MS being blocked while working on Windows 10 for ARM, and the Apple-Intel agreements.

If this does happen I think Intel will not sweat it because it will be only Apple. Apple has no interest in selling CPUs. They want to be able to make severe changes (cut the fat) between revisions and not have people crying about having to update their architecture to support it.

IANAL, but it's been argued here that the x86_64 patents will expire soon (the specification was available in 2000, the first processor in 2003). Probably neither Apple nor MS will be blocked; MS had a problem earlier since it decided to release while the patents were still in force, rather than wait.

Only if Nvidia stops licensing ARM to other companies like Apple, Qualcomm and … AMD, right?

As a founder of ARM, Apple is grandfathered into a "perpetual architecture license": clear sailing unless nVidia deprecated ARM in favor of something completely new (as I understand it), which seems unlikely to say the least.

> As a founder of ARM, Apple ...

Apple was a founding member of AIM (Apple, IBM, Motorola) for PowerPC. (As rumour has it, after DEC refused to go for a higher-volume lower-margin Alpha AXP derivative when Apple came knocking for an m68k replacement, Apple then asked IBM for a higher-volume lower-margin POWER derivative, leading to AIM and PowerPC.) As far as I know, only Acorn had a role in founding/spinning off ARM.

On the other hand, I would be very surprised if Apple wasn't smart enough to get a very-long-term/perpetual license on the ARM instruction set before investing heavily in custom core design.


"In the late 1980s, Apple Computer and VLSI Technology started working with Acorn on newer versions of the Arm core. In 1990, Acorn spun off the design team into a new company named Advanced RISC Machines Ltd."


" Apple has invested about $3 million (roughly 1.5 million pounds) for a 30% interest in the company, dubbed Advanced Risc Machines Ltd. (ARM), but the exact ownership stake of VLSI and Acorn was not disclosed. "

They needed it for the Newton!

Today I learned! Thanks, and upvoted.

Apple was part of the change from "Acorn RISC Machine" to "Advanced RISC Machines", so basically a founder of modern ARM. AIM was completely separate and later.

Apple and a bunch of others also have an "ARM architectural license," which IIRC cannot be taken back.

Nvidia hardware is great but my biggest concern is that this would eventually spell the end for Raspberry Pis and other good things that are ARM. That would not be cool.

Can you expand on that concern? The RPi uses a Broadcom SoC, so it isn't like they deal with ARM directly.

Designs of future ARM versions could end up skewed toward NVIDIA's needs, or worse, designed (or with information withheld) in ways that kill other embedded products and monopolize the embedded market.

They could theoretically simply stop licensing ARM to Broadcom, although that might invoke some anti-trust suits.

Broadcom is Acorn; there is a suggestion in another comment that Apple has a perpetual license to ARM, and maybe Broadcom has one as well.

> Broadcom is Acorn

What? As far as I know the only connection between Broadcom and Acorn is that they employed Sophie Wilson.

That could be in Intel’s favor if this becomes an example of “deworsification”. Something to be said about focusing purely on one use-case.

Intel's been moving in the direction of launching a discrete GPU line for years: https://www.theverge.com/2020/1/9/21058422/intel-dg1-discret...

And I'd hardly call Intel focused on one use-case. They seem to have their hands in all sorts of random-as-heck product lines like IoT devices and such, most of which rarely see much long-term support.

The random stuff at Intel is usually little experiments to see how complementary tech drives Xeon sales. When they pitched IoT stuff, the real "sale" was to get Intel-based edge devices in place to herd the IoT, and to get "intel" about what customers were planning around opportunities/threats like 5G.

I think that effort in particular (about 5 years ago for me) was a warning sign about Intel -- they didn't internalize that your smart water meter would very soon have a chip beefy enough to connect to a cellular or other long-range network and just hit a cloud endpoint.

I think they mean that Nvidia and ARM are currently each focused on one use case, and a merger could make them worse at both.

Isn't this a monopoly? It doesn't seem very legal. If the future is so dependent on GPU compute, and there are only 2 companies (AMD/NVIDIA) making them... isn't it insane to have only 2 companies the entire world depends upon?

Edit: ok Duopoly, but still... kinda insane that only 2 companies in the world do it.

Part of it is the software world's fault as well. Almost the entire open source machine learning ecosystem is written for NVIDIA's CUDA and nothing else.

And almost all competent non-CUDA platforms (e.g. Google TPUs, Tesla's secret in-house hardware) haven't been open-sourced, or even sold to consumers, which further enables the NVIDIA monopoly.

It's a duopoly, and unless you've got evidence that AMD and Nvidia are colluding to hurt potential competitors there's nothing illegal about it.

It’s really hard to call a duopoly much better than a monopoly. Sure you can probably identify points, but neither outcome is anywhere near efficient. I don’t know that it should necessarily be illegal. But it is market concentration.

A duopoly is absolutely much better than a monopoly. You've got competition pushing you to improve.

But isn't a duopoly, by definition, two companies not competing with each other? I might be wrong, but I remember it being about two companies making deals with each other to keep prices high and not innovate or improve, like ISPs in the US.

Anticonsumer practices really need a test of the impact on consumers, similar to how hiring practices are tested. There is a give and take in most broadband suppliers, for example, where there is no written agreement to not compete but they don't compete.

This would be bad. Not because of the CPU business - I think RISC V will eventually make that irrelevant. Once CPUs are open source commodities, the next big thing is GPUs. This merger will eliminate a GPU maker, and one that licenses the IP at that.

I think you are confused of what RISC-V is and how these things work.

RISC-V is just an ISA.

You will have open source RISC-V microarchitectures for processes that are a few generations old. You can use the same design for a long time when performance is not so important.

You will not get open source optimized high performance microarchitectures for the latest process and large volumes. These cost $100s of millions to design and the work is repeated every few years for a new process. Every design is closely optimized for the latest fab technology. They have patents.

Intel, AMD, Nvidia, ARM, all have to design new microarchitectures every few years. It's not just doing some VHDL design. It involves research, pathfinding, large scale design, simulation, verification and working closely with fab business. The software alone costs millions and part of it is redesigned. Timelines are tight or the design becomes outdated over time.

"Building" for the latest process and large volumes is another story, but as far as I can see, large scale logic design is something not _that_ far away from software. Large scale, open source, and performant software designs exist in the wild. (see Linux, llvm, ...)

Why wouldn't we get a logic netlist which could perform reasonably well when placed on silicon by people who know what they are doing? (Yeah, lots of handwave.) I'm asking this out of curiosity. Not an expert in the field by any means.

SonicBOOM is an open source RISC-V core that is clock-for-clock competitive with the best from ARM. There is still the issue of matching clock speed and tweaking for a given process.

Foundries actually have an interest in helping to optimize an open core for their process as a selling point since it can be reused by multiple customers.

Their own paper[0] shows it performing at similar levels to the A72, a four year old core. Those are obviously really impressive results for an open source core.

The best ARM core is Apple’s Lightning, which has an IPC rate about 4X that of the A72. [0]: https://carrv.github.io/2020/papers/CARRV2020_paper_15_Zhao....

Do you have a reference for Apple's IPC?

I was using AnandTech’s SPEC2006 benchmarks.

Agreed, this reminds me of AMD's acquisition of ATI back in 2006. It almost killed AMD, and now Nvidia is thinking about doing the same? Technically the reverse, since Nvidia is the GPU company acquiring a CPU company.

At the same time, Nvidia may be trying to hedge its future as its other competitors (Intel, AMD, even Apple) all have their own CPU and GPU designs. The animosity with Apple has shown the power dynamics and the high stakes.

I believe grandparent is referring to ARM's Mali GPU.

Apple's CPUs are still ARM64 cores at heart, even if heavily modified.

A thought I've been kicking around, though I fully understand it to be incredibly unlikely, would be NVIDIA simply terminating any license Apple has to use ARM at all. The move would arguably be done out of pure spite: "payback" for the GeForce 8600M issues that cost them ~$200 million and $3 billion in market cap, in part due to Apple pushing for a recall.

Apple also seemingly pushed Nvidia utterly out the door, going as far as to completely block NVIDIA from providing drivers that would let users run their products even in "eGPU" external enclosures on newer versions of macOS. Even if only a minority of Apple users ever bought Nvidia cards, being completely banned from an OEM's entire lineup would likely ruffle feathers.

Let’s ignore that founding ARM investor Apple has a perpetual license and their own CPU designs.

Do you really think NVidia is going to substantially overpay for ARM only to then nuke its remaining value by going to war on its licensees?

Apple has an "ARM Architecture License". They have a custom implementation that's very different from the reference design (which is why they're miles ahead of other ARM CPU makers). I'm sure the license has contractual obligations that protects Apple. In short, Apple is in no danger, most likely.

You're swallowing the poison pill with pride. The reason why no ARM customer actually wants to buy ARM is because there is almost no way to make money off of it. If you cancel licenses ARM is going to implode and $32 billion worth of value locked up in ARM will vanish into thin air. If you want to extract value out of ARM you'll have to play nice and that is definitively not worth $32 billion either.

Aren't you mixing ISA with implementation here? Or do Apple's CPUs really contain Arm's logic contrary to public info eg on Wikipedia?

You have to license the ARM ISA as well, you can't just freely implement it. IANAL, but as far as US law is concerned, I don't think you're violating copyright by cloning an ISA, but if that ISA is covered by any patents, you'd be fairly screwed without licensing. ARM definitely considers there to be enforceable patents on their ISA (just follow their aggressive assault on any open source HDL projects that are ARM compatible).

If Nvidia bought ARM and decided to find some legal way to terminate the contract with Apple out of spite, Apple would have to find another ISA for their "Apple Silicon".

It's not just patents.

ARM has previously shut down open source software emulation of parts of the ARM ISA.

For a while QEMU could not implement ARMv7, I think, until ARM changed their mind and started permitting it. There was an open source processor design on opencores.org that got pulled too.

The reasoning was something like "to implement these instructions you must have read the ARM documentation, which is only available under a license which prohibits certain things in exchange for reading it".

If Apple are worried about that prospect, couldn't they easily outbid Nvidia to buy ARM?

It would be like a gas station refusing to sell gas. Utter suicide for ARM.

To my knowledge, "Apple Silicon" or whatever you want to call it is an ARM64 ISA with Apple extensions.

Third parties have been able to get iOS running in an emulator for security research, and Android(!) has even been ported to the iOS devices that contain the "checkra1n" boot ROM exploit (though with things like GPU, Wi-Fi, etc. in varying states of completion).

An ISA is just the instruction set that the silicon is designed to interpret. ARM compatibility does not imply any shared heritage in the silicon between Apple and Arm Ltd cores, just like AMD and Intel share no silicon design even though they both sell processors implementing the AMD64 ISA.

(The silicon implementation of an isa is referred to as the microarchitecture btw)
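To make that split concrete, here's a toy sketch (purely illustrative Python, not real ARM or x86): two independently written "implementations" of the same two-instruction ISA share no code, yet execute the same program identically.

```python
# Toy ISA: a program is a list of (opcode, operand) pairs.
# opcode 0 = LOAD imm (set accumulator), opcode 1 = ADD imm.

def impl_a(program):
    """One 'microarchitecture': a plain if/elif interpreter."""
    acc = 0
    for op, imm in program:
        if op == 0:
            acc = imm
        elif op == 1:
            acc += imm
    return acc

def impl_b(program):
    """An independently structured one: dispatch-table interpreter."""
    state = {"acc": 0}
    ops = {
        0: lambda s, imm: s.__setitem__("acc", imm),
        1: lambda s, imm: s.__setitem__("acc", s["acc"] + imm),
    }
    for op, imm in program:
        ops[op](state, imm)
    return state["acc"]

program = [(0, 40), (1, 2)]  # LOAD 40; ADD 2
print(impl_a(program), impl_b(program))  # both compute 42
```

The two functions agree on every program because they implement the same instruction-set contract, despite having zero shared internals — which is all "ISA compatibility" means.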

Why wouldn't something similar to RISC V happen in the GPU space?

In a GPU the ISA isn't decoupled from the architecture in the way it is for a post-Pentium Pro CPU. Having a fixed ISA that you couldn't change later when you wanted to make architectural changes would be something of a millstone to be carrying around for a GPU.

I’m curious, why is this the case for GPUs and not CPUs?

It's much more advantageous to be able to respin/redesign parts of the GPU for a new architecture since the user interface is at a much much higher level compared to a CPU. They basically only have to certify that it'll be API compatible at CUDA/OpenCL/Vulkan/OpenGL/DirectX level and no more. All of those APIs specify that the drivers are responsible for turning it into the hardware language, so every program is already re-compiled for any new hardware. This does lead to tiny rendering differences in the end (it shouldn't but it frequently does, due to bug fixes and rounding changes). So because they aren't required to keep that architectural similarity anymore, they're free to change as they need new features or come up with better designs (frequently to allow more SIMD/MIMD style stuff, and greater memory bandwidth utilization). I doubt they really change all that much between two generations, but they change enough that exact compatibility isn't really worth working at.

If you want to look at some historical examples where this wasn't quite the case, look at the old 3DFX VooDoo series. They did add features but they kept compatibility to the point where even up to a VooDoo 5 would work with software that only supported the VooDoo 1. (n.b. This is based on my memory of the era, i could be wrong). They had other business problems, but it meant that adding completely new features and changes in Glide (their API) was more difficult.

RISC-V is an ISA rather than silicon, GPUs are generally black boxes that you throw code at. There's not much to standardize around

AMD document their ISA: https://llvm.org/docs/AMDGPUUsage.html#additional-documentat...

That's why third party open shader compilers like ACO could be made:


AMD document their ISAs, but each one maps pretty much one-to-one onto a particular implementation. Compatibility and standardization are not goals.

As long as they make an open source compiler, there is at least a reference implementation to compare to.

GPUs do have ISAs. It's just that they're typically hidden behind drivers that provide a more standardized API.

Of course they have ISAs; my point is that the economics of standardizing around a single ISA a la RISC-V aren't as good, by virtue of the way we use GPUs on today's computers. You could make a GPU-V, but why would a manufacturer use it?

> GPUs are generally black boxes that you throw code at.

umm... what? what does that even mean? lol

I could maybe begin to understand your argument from the graphics side, as users mostly interact with it at an API level. However, keep in mind that shader languages work the same way "CPU languages" do: it's all still compiled to assembly, and there's no reason you couldn't make an open instruction set for a GPU the same as for a CPU. This is especially obvious when it comes to compute workloads, as you're probably just writing "regular code".

Now, that said, would it be a good idea? I don't really see the benefit. A barebones GPU ISA would be too stripped back to do anything at all, and one with the specific accelerations needed to be useful will always want to be kept under wraps.

Just 'cause Nvidia might want to keep architectural access under wraps doesn't necessarily mean that everyone else is going to, or that they have to in order to maintain a competitive advantage. CPU architectures are public knowledge, because people need to write compilers for them, and there are still all sorts of other barriers to entry and patent protections that would allow maintaining competitive advantage through new architectural innovations. This smells less of a competitive risk and more of a cultural problem.

I'm reminded of the argument over low-level graphics APIs almost a decade ago. AMD had worked together with DICE to write a new API for their graphics cards called Mantle, while Nvidia was pushing "AZDO" techniques about how to get the best performance out of existing OpenGL 4. Low-level APIs were supposed to be too complicated for graphics programmers for too little benefit. Nvidia's idea was that we just needed to get developers onto the OpenGL happy path and then all the CPU overhead of the API would melt away.

Of course, AMD's idea won, and pretty much every modern graphics API (DX12, Metal, WebGPU) provides low-level abstractions similar to how the hardware actually works. Hell, SPIR-V is already halfway to being a GPU ISA. The reason why OpenGL became such a high-overhead API was specifically because of this idea of "oh no, we can't tell you how the magic works". Actually getting all the performance out of the hardware became harder and harder because you were programming for a device model that was obsolete 10 years ago. Hell, things like explicit multi-GPU were just flat-out impossible. "Here's the tools to be high performance on our hardware" will always beat out "stay on our magic compiler's happy path" any day of the week.

You could make a standardized GPU instruction set but why would anyone use it? We don't currently access GPUs at that level, like we do with the CPU.

It's technically possible, but the economics aren't there (that was my point). The cost of making a new GPU generally includes writing drivers and shader compilers anyway, so there's not much motivation to comply with a standard. It would be different if we did expose them at a lower level (i.e. if CPUs were programmed with a jitted bytecode, we wouldn't see as much focus on the ISA as long as the higher-level semantics were preserved).

SPIR-V looks like a promising standardization. It cannot be translated directly into silicon, but it doesn't have to be. Intel also essentially emulates x86 and runs RISC internally.
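As an illustration of how well-specified SPIR-V is as a binary IR, every module begins with a fixed five-word header defined by the spec (magic number 0x07230203, then version, generator, ID bound, and a reserved schema word). Here's a minimal sketch that parses one; the blob is handcrafted for the example, not produced by a real compiler.

```python
import struct

SPIRV_MAGIC = 0x07230203  # fixed magic number from the SPIR-V specification

def parse_spirv_header(blob):
    """Parse the five little-endian 32-bit words that open a SPIR-V module."""
    magic, version, generator, bound, schema = struct.unpack_from("<5I", blob)
    if magic != SPIRV_MAGIC:
        raise ValueError("not a SPIR-V module")
    # The version word packs major/minor as 0x00MMmm00.
    return {"major": (version >> 16) & 0xFF,
            "minor": (version >> 8) & 0xFF,
            "bound": bound}

# A handcrafted header claiming SPIR-V 1.5 with an ID bound of 100:
blob = struct.pack("<5I", SPIRV_MAGIC, 0x00010500, 0, 100, 0)
print(parse_spirv_header(blob))  # {'major': 1, 'minor': 5, 'bound': 100}
```

A stable, documented binary format like this is exactly what lets independent drivers consume the same shader modules without agreeing on hardware details.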

>Intel also essentially emulates x86 and runs RISC internally.

By that logic anything emulates its ISA because that is the definition of an ISA. An ISA is just the public interface of a processor. You are wrong about what x86 processors run internally. Several micro ops can be fused into a single complex one which is something that cannot be described with a term from the 60s. Come on, let the RISC corpse rot in peace. It's long overdue.

GPUs are also much simpler chips in comparison to CPUs.

90%+ of the core logic area (stuff that is not i/o, power, memory, or clock distribution) on the GPU are very basic matrix multipliers.

They are in essence linear algebra accelerators. Not much space for sophistication there.

All best possible arithmetic circuits, multipliers, dividers, etc. are public knowledge.
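For instance, the textbook shift-and-add multiplier is entirely public knowledge — here's a toy software model of that hardware algorithm (an illustration of the circuit's logic, not a claim about any vendor's actual datapath):

```python
def shift_add_multiply(a, b, width=8):
    """Textbook shift-and-add multiplication, as the hardware does it:
    for each set bit of the multiplier b, add a shifted copy of the
    multiplicand a into the running product."""
    product = 0
    for i in range(width):
        if (b >> i) & 1:           # bit i of the multiplier
            product += a << i      # add the shifted multiplicand
    # A width-bit-by-width-bit multiplier produces a 2*width-bit result.
    return product & ((1 << (2 * width)) - 1)

print(shift_add_multiply(13, 21))  # 273, i.e. 13 * 21
```

Real multiplier circuits (Wallace trees, Booth encoding, etc.) optimize how those partial products are summed, and those techniques are likewise published in any computer arithmetic textbook.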

I've been studying and blogging about GPU compute for a while, and can confidently assert that GPUs are in fact astonishingly complicated. As evidence, I cite Volume 7 of the Intel Kaby Lake GPU programmers manual:


That's almost 1000 pages, and one of 16 volumes, it just happens to be the one most relevant for programmers. If this is your idea of "simple," I'd really like to see your idea of a complex chip.

The most complex circuit on the GPU would be the thing that chops up the incoming command stream and turns it into something the matrix multipliers can work on.

I completely disagree with this comment.

Just because a big part of the chip is the shading units doesn't mean it's simple or that there's no space for sophistication. Have you even been following the recent advancements in GPUs?

There is a lot of space for absolutely everything to improve. Especially now that Ray Tracing is a possibility and it uses the GPU in a very different way compared to old rasterization. Expect to see a whole lot of new instructions in the next years.

> 90%+ of the core logic area (stuff that is not i/o, power, memory, or clock distribution) on the GPU are very basic matrix multipliers.

> All best possible arithmetic circuits, multipliers, dividers, etc. are public knowledge.

Combine these 2 statements and most GPUs would have roughly identical performance characteristics (performance/Watt, performance/mm2, etc)

And yet, you see that both AMD and Nvidia GPUs (but especially the latter) have seen massive changes in architecture and performance.

As for the 90% number itself: look at any modern GPU die shot and you'll see that 40% is dedicated just to moving data in and out of the chip. Memory controllers, L2 caches, raster functions, geometry handling, crossbars, ...

And within the remaining 60%, there are large amounts of caches, texture units, instruction decoders etc.

The pure math portions, the ALUs, are but a small part of the whole thing.

I don't know enough about the very low level details of CPUs and GPUs to judge which ones are more complex, but in claiming that there's no space for sophistication, I can at least confidently say that I know much more than you.

> GPUs are also much simpler chips in comparison to CPUs

Funny you say that. I've never heard a CPU architect coming to the GPU world and say "Gosh, how simple is this!".

I invite you to look at a GPU ISA and see for yourself, and that is only the visible programming interface.

Judging by your nickname, I think I have reasons to listen. You are that David who writes GPU drivers?

So, what do you think is the most complex thing on an Nvidia GPU?

Matrix multipliers? As in those tensor cores that are only used by convolutional neural networks? Aren't you forgetting something? Like the entire rest of the GPU? You're looking at this from an extremely narrow machine learning focused point of view.

Is that last sentence provable? If so, that's an impressively-strong statement (to state that the provably-most-efficient mathematical-computation circuit designs are known).

Well, at least for reasonably big integers, it is.

There might be faster algos for super long integers, or minute implementation differences that add/subtract a few kilogates.

RISC-V is in danger of imploding from its near-infinite flexibility.

It is driven largely by academics who lack a pragmatic drive in areas of time-to-market, and it is being explored by companies for profit motives only. NXP, NVIDIA, Western Digital see it as a way to cut costs due to Arm license fees.

RISC-V smells like Transmeta. I lived through that hype machine.

Transmeta as in the company that was strong-armed by Intel in a court case Transmeta could have won if they hadn't run out of cash and time (and they ran out of those things because of that court case)?

Transmeta was never a viable solution: it was a pet project billed as a disrupter on the basis of it being a dynamic architecture. Do I need to explain the mountain of issues with their primary objectives or do you want to google that?

You make a HUGE assumption that that is really going to happen.

Well sure, but... aren't the consequences of the possibility that it's not going to happen literally not worth discussing at all?

(Re: RISC-V) I hope so!!!

I hear a lot of people talking about RISC-V but I wonder why people don't think companies would move to other open ISAs like Power or MIPS.

"MIPS Open" came after the RISC-V announcement, and is still currently somewhat of a joke. Half the links on the MIPS Open site are dead.

I think one of the major points for RISC-V was to avoid the possibility of patent encumbrance of the ISA so that it can be freely used for educational purposes. My computer architecture courses 5-6 years ago used MIPS I heavily. MIPS was not open at the time, but any patents for the MIPS-I ISA had long since expired.

POWER is actually open, but it is tremendously more complicated. RISC-V by comparison feels like it borrows heavily from the early MIPS ISAs, just with a relaxation of the fixed-size instructions, no architectural delay slots, and a commitment to an extensible ISA (MIPS had the coprocessor interface, but I digress).

The following is my own experience - while obviously high-performance CPU cores are the product of intelligent multi-person teams and many resources, I believe RISC-V is simple enough that a college student or two could implement a compliant RV32I core in an HDL course if they knew anything about computer architecture. It wouldn't be a peak-performance design by any measure (if it were, they should be hired by a CPU company), but I think that's actually a point of RISC-V as an educational platform AND a platform for production CPU cores.
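To give a feel for that regularity, here's a minimal sketch (not from any real core; the field offsets follow the RV32I base encoding) that pulls the fields out of an R-type instruction word:

```python
def decode_rtype(word):
    """Extract the fields of an RV32I R-type instruction (add/sub/and/or/...)."""
    return {
        "opcode": word & 0x7F,          # bits 6:0
        "rd":     (word >> 7)  & 0x1F,  # bits 11:7
        "funct3": (word >> 12) & 0x7,   # bits 14:12
        "rs1":    (word >> 15) & 0x1F,  # bits 19:15
        "rs2":    (word >> 20) & 0x1F,  # bits 24:20
        "funct7": (word >> 25) & 0x7F,  # bits 31:25
    }

# 0x002081B3 is the standard encoding of `add x3, x1, x2`
fields = decode_rtype(0x002081B3)
assert fields["rd"] == 3 and fields["rs1"] == 1 and fields["rs2"] == 2
```

Every R-type instruction uses this one layout, which is a big part of why a student decoder fits in a page of HDL.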

As a teaching tool RISC-V is clearly great, as it is for companies that want to add custom instructions to their microcontrollers, like NVidia or WD. But if I were looking to design a core to run user applications, then to me it looks like everything is stacked in favor of Power. The complexity of the ISA is dwarfed by the complexity of a performant superscalar architecture. And to be performant in RISC-V you'd probably need extensive instruction fusion and variable-length instructions anyway, further equalizing things. And you really need the B extension, which hasn't been standardized yet. Plus binary compatibility is a big concern for application cores, and ISA extensions get in the way of that.

I totally agree.

> still currently somewhat of a joke. Half the links on the MIPS Open site are dead.

Unsurprising, given that Wave Computing shut down the project almost a year ago and subsequently declared bankruptcy.

Oh I missed that, thanks for the info.

The MIPS Open ISA project was actually shut down after only a year. [^1]

POWER has technically only been "open" for a little under a year. OpenPOWER was always a thing, but this used to mean that companies could participate in the ISA development process and then pay license fees to build a chip. This changed last year when POWER went royalty-free (like RISC-V).

The real definition of "open" is whether you can answer the following questions in the negative:

  - Do I need to pay someone for a license to implement the ISA?
  - Is the ISA patent-unencumbered?
RISC-V was the only game in town for a long time and thereby attracted large companies (including NVIDIA) and startups that were interested in building their own microprocessors, but didn't want to pay license fees or get sued into oblivion.

[^1]: https://www.hackster.io/news/wave-computing-closes-its-mips-...

Because they aren’t as open in reality as they sounded when announced.

The advantage of those two (and of ARM) is that there are actual implementations with decades of development behind them. Yes, some technical debt, but also many painfully-learned good decisions.

RISC-V, which I'm really excited about, you can think of as a PRD (a customer's-perspective product requirements document). That's what an ISA is. Each team builds to meet it using their own implementation, none of which is more than a few years old yet. But the teams incorporate decades of learning, and have a largely blank sheet to start with. I think it will be great... but it isn't yet.

I know it says “more” than $32B but isn’t that a little low if Softbank paid $31B?

Neither Softbank nor Nvidia care what I think, but I would feel better if the buyer of ARM weren't a company with an existing chip business.

No, if anything it's overvalued. Previous year's profit was just over $400m. At a 10x multiple that's a $4bn price tag. If you go the year before it was over $600m, which at a 10x multiple is $6bn. Even a crazy 30x multiple would only be $18bn. Softbank massively overpaid, which is why Softbank is SELLING Arm and other companies, because they racked up massive debt.
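The arithmetic above is just profit times an assumed earnings multiple; as a sanity check (the profit figures are the ones quoted in this comment, purely illustrative):

```python
# Back-of-envelope price tag: annual profit times an earnings multiple.
def price_tag(annual_profit, multiple):
    return annual_profit * multiple

assert price_tag(400e6, 10) == 4e9    # $400m at 10x -> $4bn
assert price_tag(600e6, 10) == 6e9    # $600m at 10x -> $6bn
assert price_tag(600e6, 30) == 18e9   # $600m at 30x -> $18bn
```

Even the generous 30x case lands well short of the reported $32B+ price.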

Interest rates are at a record low. AMD is at a P/E ratio of like 100; the average P/E ratio today is around 15, IIRC. With growth expectations increasing because of Apple's adoption and all, plus record-low interest rates, a 60x multiple is not insane.

ARM the company's growth rate has been tepid at best since SoftBank bought it. You need a high growth rate to justify a high multiple.

And interest rates can change quickly. It’s a value trap to overweight current interest rates in equity valuations.

> ARM the companies growth rate has been tepid at best since SoftBank bought it.

It's a "knock it out the ballpark" victory by Softbank's standards.

> the average pe ratio today is around 15 iirc

I'm not sure what population of companies you're taking your average from, but the S&P 500 p/e is a lot higher than that.

That was a mistype, thanks for catching that. I meant 25, but I haven't kept up with the last quarter so I have no idea what it looks like with all the recent insanity, so that might be pretty far off as well

Apple has a one-time ARM architecture license, so that's not going to be recurring revenue for ARM.

This is more about increasing ARM adoption industry-wide, as people begin writing code on, and optimizing code for, ARM machines. It could trickle over into increased adoption by other companies and other sectors.

They pay a royalty for every chip they ship.

60x might not be impossible, but it's absolutely insane, even if it's common.

Nvidia's own P/E ratio is around 80, it looks like. It's not actually that insane. Overvalued? Maybe. But all this says is that you think your required rate of return minus your expected growth rate for ARM is around 1.7%, and your required rate of return is going to be pretty low right now because interest rates are so low. You value future earnings more and you think it's going to grow a lot. You might be wrong, and for a company like AMD with a P/E ratio above a hundred you might be betting on an awful lot of growth happening, but in this environment these are not "insane" numbers.
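The "1.7%" figure comes from the Gordon growth perpetuity identity, P/E ≈ 1/(r − g). A quick sketch (the rates are made-up placeholders, not estimates for ARM):

```python
# Gordon growth perpetuity: price = E / (r - g), so P/E = 1 / (r - g),
# where r is the required rate of return and g the perpetual growth rate.
def implied_pe(r, g):
    return 1.0 / (r - g)

# An r - g spread of roughly 1.7% implies roughly a 60x earnings multiple.
assert 58 < implied_pe(0.05, 0.033) < 60
```

So the "60x is sane" argument is really a claim that low rates compress r toward g.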

P/E ratio is not the same as a purchase-valuation multiplier.

Revenue is the best indicator of value, not profit, which is artificially driven down for tax purposes.

Free cash flow is better than revenue - you can play around with accruals in unsavory ways that are harder to pull off with cash.

Neither is a great indicator for companies that are experiencing significant growth. ARM Holdings has nearly tripled revenue over the past decade. So you have to weight future growth, as well as potential future market opportunities.

ARM + nVidia can be a powerhouse combo, especially in the cloud/server market.

NVidia is already an ARM licensee. What does owning ARM holdings give them besides a massive amount of debt? They could license everything ARM owns for decades for less than this buyout would cost.

ARM's growth has petered out; its market is large and mostly saturated. PCs and servers won't ship a fraction of the CPUs mobile does.

That's just silly, Wal-Mart would be the most valuable company (while being far from the most profitable) if it held true.

I don't think it's that silly to consider Wal-Mart the most valuable company.

If I sell 100 lemonades per day and can't pay my rent, yet my neighbor sells 50 lemonades but manages the business in a way that pays his rent and a salary, how can my business possibly be more valuable? It's a silly example, but I don't think it's far fetched. Similarly, Apple is worth far more than 20% of the phone market despite holding only about 20% market share (give or take; I don't recall the exact number). There's a reason companies aren't valued based on revenue.

But it's definitely not by any financial metric. If you own Walmart for a year, you'll make around $4 billion, while if you own Google, you'll make around $7 billion. And Google is likely to keep growing faster than Walmart.

Revenue is a terrible indicator, EBITDA is a good indicator.

Look at the products these chips end up in and break down the costs: ARM's IP license fee, the cost to make the chip, the price the chip sells for within the product, and finally the product's retail price. ARM's margins are the thinnest in that chain; for each chip they sell, everyone further down the line has larger margins. I don't have TSMC's production costs and what they charge at hand, but I'd hazard that what TSMC makes per chip is possibly more than what ARM makes per chip.

That may be why SoftBank wants to raise those licence costs. But with alternatives now about, it's not as easy an equation as just raising prices: for many controllers RISC-V is already more than fine, and some HD manufacturers are transitioning for a few cents of savings at current prices, let alone if ARM licence fees increased.

Sure, ARM is here, there, and everywhere, but it got that way as much on the cheap cost of using that IP as on the array of IP packaged up. So yes, they make money, but if you break down how they make it, ARM's income is regular and reliable, with few spikes either way.

Logically, for ARM to increase revenue it would need to branch out into other markets. Nvidia honestly would be a good fit for that, as opposed to shafting licensees on costs. Fees may well see an increase, but ARM could not bear the level of increases SoftBank would want for its returns, and SoftBank knows this - hence a sale or IPO is best for SoftBank and also for ARM.

> I don't have TSMC's production costs and what they charge at hand, but I'd hazard that what TSMC makes per chip is possibly more than what ARM makes per chip.

Most ARM chips made aren’t pushing 7nm ultra expensive processes - and you are probably still correct.

That's a good take on ARM's margin per chip.

If ARM could raise licensing fees, it already would have.

Why do you think ARM is undervalued? While I agree that, considering the value the ARM architecture brought to the mobile world, it may seem undervalued compared to the insane valuations of some BS software companies, their business model of licensing IP doesn't allow them to make mega money. Apple and others even have perpetual licenses.

Sure, if they wanted to, they could change the licensing model to bleed their customers dry and increase the share price, but that would only work temporarily, as everyone would then accelerate the move to RISC-V.

ARM(the company, not the Arch) has peaked and will probably just stagnate for the foreseeable future.

I just find it hard to believe that Softbank would settle for $1B in profit for a highly successful business. They may have overpaid themselves, or ARM is losing them money - but then why would Nvidia want to pay $32B for a money-losing business?

Because Softbank itself is hemorrhaging money/losses and needs to start protecting itself.

Exactly. They have lost so much money - this will at least put some back in their coffers - how much did they lose on WeWork? $18B

They probably have no idea how to hype up ARM. It's as if I bought the Apple Store in NY in cash but had no idea what to do with it other than admire it.

Given the multiples ARM is trading at, and with the new Apple lines, I think they are doing a good job of it.

SoftBank is run by an idiot: he paid 10x revenues for ARM when its growth was already slowing substantially.

Also keep in mind that Softbank has needed money to pay off their large debt load, which probably reduces the amount of leverage they have in negotiations.

I wonder if it would be possible for a consortium of companies like Apple, Microsoft, and Google to swoop in and outbid Nvidia? All of them rely on customization agreements with ARM, and Nvidia, being a chip maker, would be competition. And a consortium like that would allow the consortium members to keep their changes to themselves, so whatever Qualcomm and Microsoft are doing with the SQ1, or Apple with their Silicon stuff, wouldn't have to be shared with all consortium members - or with a competing chip maker like Nvidia.

Apple has a perpetual license on the ARM instruction set, so they just pay a royalty per chip. Apple owning ARM would be a massive problem for regulators, as they'd be the sole source of chips to their competition.

MS and Google don't make chips, and MS doesn't even license any ARM tech (I don't think so; they just make software and buy CPUs). They're a bad fit. Google is dipping their toes in, but I don't think they're doing fully custom silicon. Most companies buy the rights and layouts, and tweak those layouts.

NVidia makes chips, but they don't make many mobile chips, and virtually no chips for phones compared to Samsung and Qualcomm. They're a better fit.

Microsoft has had an architectural license for quite some time:


> MS and Google don't make chips

MS designed and produced custom silicon for HoloLens 2 (the Holographic Processing Unit 2.0) [0][1]. The Microsoft SQ1 in Surface Pro X is probably also produced by them, though the design is in collaboration with Qualcomm.

[0] https://www.hotchips.org/hc31/HC31_2.14_Terry_Microsoft_Aug2...

[1] https://youtu.be/IjxpMZUqu6c?t=3814

The chipset in the Nintendo Switch is a mobile chipset by NVidia.

Yes yes, there are outliers, which is why I didn't make a blanket statement that NVidia doesn't make anything mobile.

Google has been making TPUs and similar high end chips, in addition to their Titan chips.

I wouldn't be surprised if they make their own networking chips.

Nvidia might be a chip maker, but they aren't really a competing chip maker. The only real competitor to Nvidia is AMD, which does not use ARM IP in any significant amount that I know of.

Nvidia might be tempted to block ARM customers if they had competing designs to push them to, but they don't. So they would be sacrificing revenue without any other sales to make up for it. It doesn't seem like a good idea to me.

This would be a different story if a company like Apple were buying ARM. They could definitely benefit by gouging Samsung and Google on license fees but they've got to balance that with the chance of getting fined for anti-competitive practices.

Sounds like a great candidate for a switch to RISC-V

why though? the PSP is basically a derivative of TrustZone, "switching to RISC-V" would basically be re-engineering the whole PSP stack from scratch.

So much like a shared patent pool, which in effect does partially describe ARM.

But when those companies can get access to those patents at more accountant-friendly costs, without balancing the risk/asset/management aspects of owning part of that slice, then the motivation and corporate culture that would be needed to drive such an investment just aren't there.

That's why having at least three "heavies" as part of the consortium will enforce good behavior or at least keep it from getting out of control. Well-armed (legal teams) companies are polite companies... (except Oracle, but they are mainly a litigation firm that also sells databases).

True, yet the incentive just isn't there business-wise over what they have now, and it would be a hard sell to the board: what advantages does it give them?

It actually makes more sense for some governments to buy ARM than for many companies, so at least we can be thankful (so far) that avenue has not transpired. It might even be best overall if ARM were partially IPO'd. If SoftBank sold a 50% stake via IPO, I dare say that would yield the best (and quickest) return - the best of both worlds - and I'd still not rule that option out. Indeed, mooted interest from many large companies sniffing around as a prelude to an IPO would not be unheard of.

I too share your aversion towards Oracle's business practices.

I think you'd have significant regulatory back-pressure preventing Microsoft, Google, and Apple - between them they effectively own all the operating systems and app stores of the world. There's a real problem here that will prevent competition from outside (QCOM & Samsung are two massive ARM chip designers for mobile, there's a bunch more in the embedded space). It's important to look at the risks and not just the upsides as there are both.

Apple has never shown much interest in B2B-type licensing businesses that aren't 100% Apple, and owning something their competitors need brings all sorts of legal complications.

Meanwhile Apple's home grown chip initiatives are mature, not sure they need much from ARM.

I think it would make even more sense for such a consortium to buyout AMD at this point.

Which would make the x86 licensing deal void IIRC.

Why is ARM selling itself? Maybe I am missing something here, but if Qualcomm can generate the bulk of their revenues through licensing fees, shouldn't ARM be bringing in more? It feels like going public would be a profitable route for their investors. Wonder why they aren't opting for that.

ARM is owned by SoftBank, who has had a very bad year and a half. I guess they just want cash, not to take it public.

The irony of WeWork causing a great product like ARM to need to be sold off...

Irony redux: even if WeWork wasn't a trainwreck, this pandemic would have killed it.

It wouldn't have been great in the very short term. But there's an argument that mid-long term, the pandemic will be good for WeWork.

And what would that argument be? That the increase in unemployment leads to more new businesses being founded that can somehow miraculously afford luxury(-priced) offices?

Wireless technology is a patent minefield. CPUs generally aren't, because the techniques for internals are old, and internal design, manufacturing, and especially validation (where Intel fell flat lately) methodologies dominate the costs and quality.

For much of ARM's recent volume use in MCUs and slightly larger embedded devices, there is credible threat from first party usage of RISC-V (see WD, Nvidia).

Access to the ISA itself can be of high value... see x86 and s390x for prime examples. Although I don't really see how ARM could pull that off outside of an acquisition like this, with the licensing process made onerous enough that people buy chips from Nvidia instead of doing their own designs. In such a scenario, RISC-V could become a credible threat in phones too, and the server thing pundits have kept pushing for the past 15 years never happens.

So there is a lot of value here, but it's pretty hard to grow as a pure licensing play as ARM has been since there are many risks and opportunities for price compression.

ARM is wholly owned by an investor called SoftBank, who is desperate for cash right now, because they messed up with WeWork.

ARM isn’t selling itself. SoftBank bought ARM several years ago. I would imagine all the cash they hemorrhaged on wework has finally forced their hand.

Interesting, Nvidia has been using their own RISCV designs in their GPUs. Perhaps they are more interested for their automotive line?

I think it's for the datacenter. They have already purchased Mellanox. In many HPC applications, having fast GPUs and fast interconnects is 99% of what you need. A reasonably competent CPU architecture would round this out, and I don't think RISC-V is there yet.

ARM have also made some HPC plays, e.g. buying Allinea (probably the best supercomputer debugging tool).

If that happens it may have wider consequences.

ARM would become owned by an American company. It has to comply with American restrictions on dealing with China but that would make it essentially no different from any other American company, either legally or in fact.

So in the current context this would be bound to raise eyebrows in Beijing, and China could only react by doubling-down on developing domestic alternatives.

Yep, the United States government gaining the ability to directly block the licensing of ARM reference designs from companies like Huawei's HiSilicon (and the fabless chip designers such as Rockchip) is a VERY big deal. It's a very different situation to the US government having to pressure Japan's SoftBank, UK's ARM Holdings (or their respective governments).

I'm kind of wishing this trend will push investment in something open like RISC V. Up and coming countries should definitely be thinking - if US targets China today, it could be them next.

> I'm kind of wishing this trend will push investment in something open like RISC V.

This is indeed happening.

The challenge is on the GPU side. Now if they could finalize the V (vector) extension, people could start using that for graphics.

There have to be startups already doing just that.

Can anyone explain what is going on between Apple and Nvidia? Seems Apple will not add Nvidia hardware to its machines no matter what. What’s the back story to that?

And where this is related is I wonder if Apple will have to relent (assuming the purchase goes through) and do business with Nvidia since it licenses some technology from ARM. Or, have I got it wrong here? Does Apple not rely on ARM?

NVIDIA really ratfucked them with the GPUs in MacBook Pros maybe a decade ago. They started failing like crazy, NVIDIA said they fixed it, they hadn’t, and then when things went to litigation NVIDIA blamed Apple for the GPU failures. Apple ran a giant recall and swore off using NVIDIA ever again.

A little tidbit, and I of course don't have any proof to back this up, but: for the eligible Mac models to qualify for a free MLB (main logic board) replacement from Apple, the computers HAD to fail a specific diagnostic (called VST - video systems test). The repair couldn't be classified as "covered" unless there was a record of this test failing. I'm also fairly certain that privately NVIDIA agreed to cover the cost (or some of it) of these repairs, but stipulated they would only cover the computers that failed the test. Most computers did fail the test, but I definitely saw some machines that absolutely were experiencing GPU issues but wouldn't have been eligible (most of the time Apple did the right thing and paid for it out of pocket).

Sorry for rambling! Just thought it was an interesting tidbit

Google "macbook faulty nvidia". e.g. [1] references to "arrogance and bluster" from Nvidia.

My personal experience: I had a 2011 Macbook Pro with an nvidia card. It started to fail randomly. Apple identified that certain nvidia GPUs were failing and created a test the "Geniuses" would run. My Macbook always passed, even though it kept throwing noise on the screen anywhere other than the Apple store. Eventually it finally failed their test: Four days after the (extended) warranty period. They refused to replace it. Bitterly, the best option for me was to pay the $800 for a new board.

[1] https://appleinsider.com/articles/19/01/18/apples-management...

Wow. I have had Apple do me right with free repairs out of warranty (and past AppleCare expiration), but several things applied: 1) it was in California, where retail culture is less combative about helping the customer; 2) the laptop did have AppleCare; 3) it was at the Palo Alto store, which tends to have very well-educated employees, with cross-flow between the employee pool and corporate; 4) the laptop was purchased at an Apple store, which inexplicably gives them more leeway (well, possibly due to their certainty it was always in Apple's hands prior to direct sale to the customer).

For what it's worth, I had almost exactly the same experience with the same 2011 MacBook as 'lowbloodsugar'. From my point of view, it was clearly starting to fail while still in AppleCare warranty (snow on the screen when doing anything graphics intensive), but somehow it passed their 'tests'.

Shortly after the warranty ended, the Nvidia card failed for good. On the bright side, it was a dual-video system, and I was able to keep it running a few years longer through complicated booting rituals that convinced it to boot with the lower power Intel graphics (although I lost the ability to recover from sleep).

I'm not sure why we have such different experiences with Apple customer service. I was also in the Bay Area, but not Palo Alto. It's possible that with a more aggressive approach I could have gotten them to fix it. Instead, I accepted their verdict, and now spread the word through forums like this that their warranties are not to be trusted.

Interesting, I have nearly the exact same situation except replace Nvidia with AMD. "Radeongate".

Also have miraculously saved the machine with a booting ritual/hack that bypasses the discrete gpu

Also 2011 MacBook.

I'm worried that you are right, and that I had an AMD card as well. I'm unsure, as the focus of my disappointment is Apple, rather than the manufacturer of the card.

That's a super, super shitty experience. I made a comment somewhere else in this thread outlining my theory that Apple was so strict with the test because part of the financial arrangement with NVIDIA stipulated the machine had to fail a diagnostic they provided or contributed to.

Still, as someone who used to work as a "genius": it would have been easy to cover your MLB repair under warranty for a million reasons, so the technician failed you there. And then to not do anything for you 4 days outside of eligibility, especially given your experience... that's fucked. Not all Genius Bar people are like that. If something similar happens in the future, call AppleCare and ask to speak to "tier 2". They sometimes have leeway to issue coverage in extenuating circumstances.

See related discussion (from last week): https://news.ycombinator.com/item?id=23929993

SoftBank purchased ARM for $32B in 2016. If this figure is correct it looks terrible for SoftBank. Another one in a string of bad investments.

Depends how much they've milked it in the meantime.

Its growth has slowed and profits are down. I doubt they will get $20B for it.

I think people are overlooking a worse outcome. Chinese companies have already bought the Asian subsidiary of ARM, if China bought ARM who knows how they would retaliate with it given the current attacks on Huawei. They might use it as leverage.

Realistically we need an open, unowned architecture like RISC-V, because whoever buys ARM will cause concern; given how hypercompetitive mobile is, the incentive to abuse the ownership is high.

We really want to avoid another Oracle/Java scenario as well.

Looking at how Qualcomm has a virtual monopoly on mobile processors, Nvidia buying ARM and getting back into the mobile space might be a good thing. Though I think this is more a play for desktop processors: Intel is getting into the discrete GPU market, and AMD is already there. Every year integrated GPUs get better at delivering good-enough performance, so the market for discrete GPUs will shrink; in the end Nvidia might need its own CPU to stay relevant.

> qualcomm has a virtual monopoly for mobile processors

Not even close. Qualcomm has about a third of the market: https://cntechpost.com/2020/03/24/samsung-surpasses-apple-to...

Oh no, look what you have done, WeWork. Who would have thought that rich kids burning VC money, while themselves running an elaborate Ponzi scheme, would end up as a threat to the democracy of computing.

Of course the U.S. Govt. wouldn't have problems with this, unlike the Broadcom-Qualcomm deal; on the contrary, this will put the American semiconductor industry in a more dominant position.

RISC-V is the only hope for the rest of the world now.

This would be good. I'm waiting to see high performance PCs running on ARM CPU instead of x86.

This isn't a bad move strategy-wise. If the Apple performance is as awesome as is rumoured and hyped, there are going to be a lot of people looking for new ARM chips, with no obviously-ahead players. nVidia has looked a couple of times at building their own x86 core, and ARM cores may be a better bet.

At this point - AMD, Intel, Apple are all looking at fully integrated APU/CPU/GPU stacks. That leaves NV out in the cold if they don't do something.

Looking further into the future, Nvidia buying ARM now means they could have their CPUs and GPUs in the next generation of consoles instead of AMD.

Except AMD has:

* better single threaded performance than ARM

* the same instruction set as developers' workstations

* the willingness to add Microsoft/Sony's hardware IP to the console chips

1) There are many domains that don’t care about single threaded CPU performance or CPU performance at all. I would say gaming is becoming one such domain

2) This might soon change if Microsoft and Apple succeed in their quest to push ARM machines to the consumer market. I can imagine that an ARM MacBook is all that's needed to start an avalanche in adoption

3) Willingness can change easily :)

Other than the same instruction sets, the other two factors can change in the 5 to 10 years that a console generation lasts.

Dunno what Softbank is doing here, I thought ARM is the kind of company they would want to keep.

Wonder how much of this is pushed by US policy to secure IC dominance like getting TSMC to build US fab. Barr was floating the idea of getting Cisco to acquire Ericsson for 5G, not a stretch to also ask Nvidia to buy ARM.

This might be a lucky break for Intel. If NVidia buys ARM, this might slow the ARM ecosystem's intrusion on Wintel. As others have noted, NVidia will probably go and develop a data center CPU to complement their other offerings. Qualcomm already tried that and failed though [0], so NVidia's effort may meet the same fate several years from now. Regardless, Intel would get a little more time to work through its process problems.

[0] https://www.theregister.com/2018/12/10/qualcomm_layoffs/

Such a good use of their inflated stock price for an all-stock transaction. Go, NVIDIA!

SoftBank would want cash; they need to reduce their debt load. That's pretty much why they are considering this price in the first place.

Fortunately there's a market for stocks.

There is no easy market for $32B of any stock. You cannot just put in a sell order for that quantity and expect the order to be filled.

The stock market is also there for ARM stock. There is no need to trade for Nvidia shares at an undesirable price if there is no immediate cash in it.

Nvidia could potentially raise money via an FPO or similar instruments and leverage their advantageous stock price, i.e. issue fewer new shares than they would have to at a lower price. But the only way SoftBank will consider a deal is if they get cash.

There actually is. For example - on the last annual Berkshire shareholders meeting Buffett was talking about how easy it was for them to unload billions worth of airlines, 100% of their holding.

On a typical single day, $4B worth of NVIDIA stock is traded.
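To put that daily-volume figure in context, here's a rough liquidity sketch (the 10% participation rate is an arbitrary assumption, and price impact is ignored entirely):

```python
# Trading days needed to unwind a position if you capture a given
# fraction of average daily volume (ADV). No market-impact modeling.
def days_to_liquidate(position, adv, participation):
    return position / (adv * participation)

# A $32B stake at 10% of a $4B/day ADV takes ~80 trading days.
assert days_to_liquidate(32e9, 4e9, 0.10) == 80.0
```

Whether 80-ish trading days counts as "easy" is exactly what this subthread is arguing about.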

PS. ARM is currently privately held (i.e. it is not on the stock market).

Sure you can, someone will buy that $32B in nVidia stock for $20B. Which is still more than ARM is worth.

The name is officially "Arm", not "ARM", and that's been the case for exactly 3 years now. Bloomberg has it correct in the article.

This makes no sense. nVidia is already transitioning away from ARM and shipping RISC-V controllers in their GPUs. They already gave up on Denver, their high perf ARM server chip. Why would they buy ARM?

Pure uneducated guesses from me: maybe the reason for giving up on Denver would no longer be a reason if they owned ARM. Or maybe they see a future use of ARM's tech that will either benefit them to own, or they fear competition from ARM (or from other companies working with ARM) in a current or future area. Or just that they believe ARM is a solid business that will make money whoever owns it.

But I don't have a clue if it's a good idea or not.

They're using RISC-V for their microcontrollers. I don't think we have any reason to believe they won't stick with ARM for application cores. And they've tended to ping-pong between Denver cores and standard ARM cores, so I wouldn't entirely write Denver off.

They would buy ARM because they believe it can result in long-term profits.

Controlling ARM Holdings the company is a very different value prop than a single ARM chip.

When they made those decisions, ARM wasn't available to buy; SoftBank was on top of the world and didn't have to sell anything. A year back, do you think SoftBank would have considered a proposal so close to their purchase price?

Because the investment bankers desperate for fees said they will to the press to try to gin up other buyers?

Super shocked anyone would pay more than $10B based on their financials.

This acquisition is in US interests and could be supported by the US government. They are trying to stop the silicon industry from leaking abroad in general (the famous CHIPS Act), and to block Huawei and its partners.

I can see why Apple wouldn't want to buy ARM, since that would put them in a very delicate position, but I'd be surprised if Intel or AMD weren't trying to buy ARM as well.

I hope that this ends up giving Nvidia enough leverage over Qualcomm that they can start putting Tegra chips in smartphones again without cellular radio patent disputes.

People in this thread discuss what Nvidia would do with ARM after they buy it, but I'm honestly not sure anyone even could buy ARM. The article barely mentions the anti-competitive nightmare it would probably become, and if Nvidia agreed to let everyone else keep buying licenses and designs as they do now, then you have to argue: is there any reason for them to buy ARM at all?

>then you have to argue is there any reason for them to buy ARM at all?

ARM provides them a stable, consistent, and regular revenue stream.

This will be killed or severely hampered by antitrust agencies. Definitely by the EU, maybe the US as well if there's anyone decent left there.

Antitrust nowadays is just a buzzword politicians throw around every couple years in reference to household names like FAANG in order to get reelected. Unless your average grandma knows the name Nvidia, don't count on those in power to even pretend to do something about this.

Why? From where I sit there will still be plenty of (or at least no less) competition in the CPU/GPU space--especially considering the fact that ARM itself is not a semiconductor manufacturer.

Why definitely? What if Nvidia is looking to get out of producing GPU hardware and move to an ARM model?

The AMD-ATI deal went through. If this got blocked, isn't that antitrust the other way around? You're not letting Nvidia compete with AMD.

I imagine Apple already negotiated a long-term (or in perpetuity) license for ARM so Nvidia won't be able to put the squeeze on them...

I really hope this doesn’t happen. Nvidia could hike prices for future designs, making devices more expensive as a result.

With Nvidia's nasty anti-competitive and lock-in attitudes, this doesn't bode anything good.

That's a lot of money and must be very tempting. Let's hope that ARM doesn't get bullied into biting the apple. Nvidia sees the writing on the wall with GPUs and knows it must acquire the future king of CPUs to survive.

If this goes through would that mean that Huawei can no longer use ARM chips?

Is Nvidia seriously going to get into the ARM business? Are they going to grok the business of embedded development and microcontrollers? I can't help but think that this would mean the end of ARM as we know it.

The Arm people would still be there running the company; it's not like they all get fired and overnight it's Nvidia people with no experience doing it.

Well, Nvidia needs to create an operating system now, and we could end up with three ecosystems that could be competitive with each other. Good for consumers, I say.

In ML GPUs they are a monopoly, but man, they do a great job providing support for TF and PyTorch where AMD hasn't invested. So in the cloud, Nvidia is the only way to go.

Well, I guess that's one way to keep Apple as a customer :P

Hmm, didn't SoftBank buy ARM for $32B?

That investment did not pay off, I guess.

Intel should start investing in RISC-V or opening up their own architecture. This is the only way they will keep market share in the next 10 years.

I wonder what this means for Tegra and/or mobile GPUs.

Just a thought but if big tech that controls entire markets is bad now (apparently some law makers seem to think so), maybe don’t let this happen.

Kinda curious if this would mean the end of Nvidia's CPU group and/or ARM's GPU group or if there would be a peaceful integration.

Apple seems to be against use of nvidia GPUs in their products - would they potentially abandon using ARM if it comes under nvidia ownership?

Apple is not fundamentally against the use of nVidia products. nVidia refused to implement the Metal APIs knowing full well that in a few years they'd be replaced by Apple's own silicon. Why spend 1-2 years adding a feature that will be legacy 1-2 years after it ships?

> Apple is not fundamentally against the use of nVidia products.

Are you sure about that? I don't know this story about Metal, but I believe Apple has refused to certify an nVidia driver on Mac since like 2018 (I think), effectively cutting Apple users off from using nVidia products on Apple platforms.

My theory on why Apple prefers AMD follows from Nvidia's main advantage over AMD being software, both their drivers and CUDA.

Apple needs/wants to be in full control of their driver stack. They don't necessarily want to write it themselves, but they want tighter control than Nvidia is willing to give them. AMD on the other hand is much happier to offload responsibility and accept help.

I don't think this set of circumstances would come up for an ARM processor.

Nvidia-provided macOS drivers had QC issues (major bugs) too, some of which never got fixed, which probably didn't help matters.

Apple has had an ARM architecture license from the very early days. I don't know if it's perpetual or what the terms are. Apple doesn't use any stock ARM cores and already develops their own compiler, libraries, and OS, so how much do they rely on ARM, Inc.?

ARM was originally Acorn RISC Machine. (I had an ARM desktop in 1988.) ARM was spun off as a separate company when Apple joined in, with a 43% stake. I imagine that when they sold it, they kept a perpetual license. [1]

Or maybe not: [2]

[1] https://en.wikipedia.org/wiki/Acorn_Computers#ARM_Ltd.

[2] https://www.cultofmac.com/97055/this-is-how-arm-saved-apple-...

Apple is currently against NVIDIA GPUs simply because they got screwed over with NVIDIA GPUs before in terms of hardware reliability and such, which prompted them to switch to AMD for GPUs. It doesn't have much to do with NVIDIA as a company and more to do with their hardware.

But when it comes to ARM chips, Apple designs their own ARM-compliant chips, so NVIDIA owning ARM would do nothing in that aspect imo. Especially since iPhone/iPad chips have been ARM-based for the past decade, so I don't think that Apple really has a good option or reason to switch away from ARM at this point.

> Apple is currently against NVIDIA GPUs simply because they got screwed over with NVIDIA GPUs before in terms of hardware reliability and such

That's the oft-mentioned and certainly plausible reason, but it's not a matter of fact that it's THE reason, is it?

In the absence of any other known reason, this seems to be the most likely explanation.

But yeah, you are correct, there is no factual officially stated reason, and I wouldn't expect Apple to publicly bring it out ever either.

The argument against this reason is that Apple used Nvidia GPUs in their MacBooks from 2012 to 2014(?). This was years after the broken GPU time.

Apple had a long patent and design mark infringement fight with Samsung all while using Samsung fabs for their SOCs, flash, and LCD panels.

Nothing personal, just business.

I'm not an expert, but I thought Apple's reason for ditching Nvidia was speculated to be that Nvidia's ecosystem was closed, and integrating with Nvidia slowed down development and led to failures. I don't think there's an absolute opposition to using Nvidia-owned stuff, especially for ARM, where Apple licenses IP and custom-designs the hardware on their own timeline.

Apple's customers don't care about NVidia -- they don't do serious computation or scientific computing. Our Hollywood clients all use PC-architecture machines with NVidia for video editing and rendering. The writers, etc., use Macs.

So Apple can save the cost and get higher profit margins by not having powerful GPUs in their products.

I know a lot of VFX and animation people that do serious rendering / ray-tracing computation and do wish they could use nVidia GPUs in their Macs. It's worth considering how, unlike writers, rendering needs machines not just for all the animators, but for the render farms too.

Of course not.

They might be against nVidia GPUs. They won't throw their investment in Apple Silicon out just because of that.

Apple has a perpetual license, no? So it wouldn't matter too much for leverage.

What's the available alternative for mobile? x86 has never matched ARM on the power efficiency vs. compute trade-off for mobile.

I think the commenter is referring to Apple's switch to ARM for laptops.

Apple typically has forward-looking supply chain contracts. I imagine ARM is under contract for the licensing for long enough that Apple will build out their own design team.

Already, TSMC is the manufacturer for the cores, so Apple really doesn't have too much of a dependency on ARM in the short term.

I think what we'll see over the next 3-5 years is a divergence from the ARM-licensed design to an in-house-designed architecture: "Apple Silicon II"

Apple already has a perpetual license for the Arm ISA (analogous to an API). The actual chip design has been Apple custom for several years now. Apple does not get their chip designs from Arm.

So in that case they really need nothing from ARM at this point

Slightly unrelated, but I did a quick look on Wikipedia and both Intel and Nvidia are based in Santa Clara, CA? Is there history there?

This 1.7 mile walk starts at Intel's HQ, goes past AMD's HQ and stops at Nvidia's HQ:


How much history do you want?

Start here - https://en.wikipedia.org/wiki/Fairchild_Semiconductor

This might also be of geographical value - https://en.wikipedia.org/wiki/San_Jose,_California

The Santa Clara valley is also known as Silicon Valley.

What’s the word for that thing you get when one part of a duopoly acquires the other? I think there’s a board game about it ...

1) Nvidia may produce good-performing GPUs, but it doesn't provide open source drivers and is not so cooperative with the Linux desktop. It's also quite buggy.

2) Due to (1), I believe that this acquisition might promote locked-down devices. I don't want this situation to occur.

3) This acquisition may have some effect on Apple, since Apple is transitioning to ARM hardware. If so, Apple may even transition to some other hardware architecture like RISC-V or OpenPOWER once again.

This seems stupid and pointless. Although I would probably buy a Nvidia mobile phone :D

Doesn’t sound legit. ARM was purchased by SoftBank in 2016 for $32B, and since then it might have grown more than 20% per year. Selling a growing company with bright prospects today for the same price you purchased it for 4 years ago makes no sense, even for a struggling SoftBank.

What if Intel bought ARM? Could that work for Intel? Do they have the resources?

I do wonder what this means for Arm's Mali GPUs if this goes through.

This will be a boon for RISC-V

Best thing for ARM to do is IPO.

As far as I understand, this is all rumour still.

What if Apple acquires it first! :p

Not looking forward to this at all. This is one of the clear-cut cases that should be shut down by antitrust regulators.

Wow, way to stick it to Apple!

To me, and probably a lot other people, this means the death of ARM. Not literally, but practically.

I wouldn't be surprised if they bought it to extinguish it, probably a proxy buy by Intel.

I bet Apple's really banging their heads against a wall right about now.

At first my mind read this as AMD; reaction: ‘meh’.

Then I realized it was ARM, ‘wow’

Is there a way to get rid of these paywalls? So annoying to click on a popular post with plenty of upvotes only to find it is paywall-restricted. Incognito mode works but is inefficient.

Could hopefully get Google to invest millions in RISC-V.

I hope Huawei beat them to it.

Wow this is really not good.

wtf! x_x

apple is going to be pissed.

Wouldn't Alphabet be able to buy Arm? I think as long as it is separate from Google, it should be fine. It would be the company starting with "A" in their portfolio.
