Nvidia is a pretty hostile company to others in the market. They have a track record of vigorously pushing their market dominance and their own way of doing things. They view making custom designs as beneath them. Their custom console GPU designs - in the original Xbox, in the PlayStation 3 - were considered failures because of terrible cooperation with Nvidia. Apple is probably more demanding than other PC builders and has completely fallen out with them. Nvidia has famously failed to cooperate with the Linux community on the standardized graphics stack supported by Intel and AMD and keeps pushing proprietary stuff. There are more examples.
It's hard not to make "hostile" too much of a value judgement. Nvidia has been an extremely successful company because of it, too. It's alright if working well with others is not in their corporate culture. Clearly it's working, and Nvidia, for all their faults, is still innovating.
But this culture won't fly if your core business is developing chip designs for others. It's also a problem if you are the gatekeeper of a CPU instruction set that a metric ton of other infrastructure increasingly depends on. I really, really hope ARM's current business will be allowed to run independently, because ARM knows how to do this and Nvidia has shown time and time again that it does not understand it at all. But I'm pessimistic about that. I'm afraid Nvidia will gut ARM the company, the ARM architectures, and the ARM instruction set in the long run.
An interesting counterpoint would be the Nintendo Switch running on Nvidia Tegra hardware, but all the evidence points to this chip being a 100% vanilla Nvidia Tegra X1 that Nvidia was already selling themselves (to the point that its bootloader could be unlocked like a standard Tegra, leading to the Switch Fusée Gelée exploit).
For example, you paint it as if Nvidia is the only company Apple has had problems with, yet Apple has parted ways with Intel, IBM (PowerPC), and many other companies in the past.
The claim that Nintendo is the only company nvidia successfully collaborates with is just wrong:
- nvidia manufactures GPU chips, collaborates with dozens of OEMs to ship graphics cards
- nvidia collaborates with IBM, which ships POWER8, 9, and 10 processors, all with nvidia technology
- nvidia collaborates with OS vendors like microsoft very successfully
- nvidia collaborated with mellanox successfully and acquired it
- nvidia collaborates with ARM today...
The claim that nvidia is bad at open source because it does not open source its Linux driver is also quite wrong, since NVIDIA contributes many, many hours of paid developer time to open source, has many open source products, donates money to many open source organizations, and contributes paid manpower to many open source organizations as well...
I mean, this is not nvidia specific.
You can take any big company, e.g., Apple, and paint a horrible case by cherry picking things (no Vulkan support on MacOSX forcing everyone to use Metal, they don't open source their C++ toolchain, etc.), yet Apple does many good things too (open sourced parts of their toolchain like LLVM, open source swift, etc.).
I mean, you even try to paint this as if Nvidia is the only company that Apple has parted ways with, yet Apple has a long track record of parting ways with other companies (IBM PowerPC processors, Intel, ...). I'm pretty sure that the moment Apple is able to produce a competitive GFX card, they will part ways with AMD as well.
Hey! Wait a second, there. Nvidia isn't bad because it has a proprietary Linux driver. Nvidia is bad because it actively undermines open source.
Quoting Linus Torvalds (2012):
> I'm also happy to very publicly point out that Nvidia has been one of the worst trouble spots we've had with hardware manufacturers, and that is really sad because Nvidia tries to sell chips - a lot of chips - into the Android market. Nvidia has been the single worst company we've ever dealt with.
> [Lifts middle finger] So Nvidia, fuck you.
Nvidia managed to push some PR blurbs about how it was improving the open-source driver in 2014, but six years later, Nouveau is still crap compared to their proprietary driver.
Drew DeVault, on Nvidia support in Sway:
> Nvidia, on the other hand, have been fucking assholes and have treated Linux like utter shit for our entire relationship. About a year ago they announced “Wayland support” for their proprietary driver. This included KMS and DRM support (years late, I might add), but not GBM support. They shipped something called EGLStreams instead, a concept that had been discussed and shot down by the Linux graphics development community before. They did this because it makes it easier for them to keep their driver proprietary without having to work with Linux developers on it. Without GBM, Nvidia does not support Wayland, and they were real pricks for making some announcement like they actually did.
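For those unfamiliar, GBM is the small buffer-allocation API that Mesa-based drivers expose and that compositors like Sway are built on. A minimal sketch of the allocation path a compositor expects to have available (a hypothetical standalone example; the render node path varies per system; build with cc gbm_demo.c -lgbm):

    #include <fcntl.h>
    #include <gbm.h>
    #include <stdio.h>

    int main(void) {
        /* Open a DRM render node; the exact path is system-specific. */
        int fd = open("/dev/dri/renderD128", O_RDWR);
        if (fd < 0) { perror("open render node"); return 1; }

        /* GBM hands out GPU buffers that can be rendered to and scanned out. */
        struct gbm_device *gbm = gbm_create_device(fd);
        if (!gbm) { fprintf(stderr, "driver has no GBM support\n"); return 1; }

        struct gbm_bo *bo = gbm_bo_create(gbm, 1920, 1080, GBM_FORMAT_XRGB8888,
                                          GBM_BO_USE_SCANOUT | GBM_BO_USE_RENDERING);
        if (!bo) { fprintf(stderr, "buffer allocation failed\n"); return 1; }

        printf("allocated %ux%u buffer, stride %u\n", gbm_bo_get_width(bo),
               gbm_bo_get_height(bo), gbm_bo_get_stride(bo));

        gbm_bo_destroy(bo);
        gbm_device_destroy(gbm);
        return 0;
    }

This is the allocation path the quote is talking about: with EGLStreams, compositors would need to maintain a second, Nvidia-specific implementation of it instead.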
Maybe something will change on this soon. There was speculation about this: https://www.phoronix.com/scan.php?page=news_item&px=NVIDIA-O.... But I'm not holding my breath, and it would be nice if the solution wasn't "wait and hope until Nvidia releases the software necessary to control their GPUs".
Do you have more info on this? There is a big difference between not supporting open source and actively sabotaging it. What are they doing, exactly?
CUDA just works.
The 5000 series has been out for an entire year without ROCm support at this point.
Even there, though, only Intel has a buttery smooth experience. Ryzen for laptops is half-baked (terrible USB-C docking performance, occasional crashing on Windows and Linux with the 2xxxU series CPU/GPU chips) and AMD GPUs still require manual intervention to load proprietary firmware.
AMD does make some performant mobile GPUs, though; they work well in Debian!
I am not sure if I am missing some update, need to set some undocumented kernel flag and/or BIOS setting, if it's a software issue, or if I just made a mistake somewhere. Debian 10/11.
Meanwhile, as much as I wanted to get away from Intel, their drivers have never posed any issue at all.
Still no driver for compute 1 year later. I'm so happy I decided to return it and switch to Intel instead of waiting for AMD, or some random Joe in their free time, to add support for it to their open source driver.
So yeah. I'd take a working proprietary driver over no driver any day.
The open source driver is kind of ok if the only thing we expect from it is getting a working X session.
Now if one wants to do some complex OpenGL stuff, then it might work, or not.
Even on Windows, AMD drivers are the most unstable, bugged software that's ever been shipped. It's been a long-standing joke that AMD “has no drivers”.
Microsoft's own previous-gen Xbox emulator on the next-gen Xbox (I think it was the original Xbox emulated on the 360, but I might be wrong) was impacted by the team having to reverse-engineer the GPU, because Nvidia refused to let the emulator people have access to the documentation provided to the original team.
Is this an ad hominem? Linus does not mention a single thing there that they are actually doing wrong.
> Drew DeVault, on Nvidia support in Sway:
Nvidia has added Wayland support to both KDE and GNOME. Drew just does not want to support the nvidia way in wlroots, which is a super, super niche WM toolkit whose "major" user is sway, another super, super niche WM.
Drew is angry for two reasons. First, sway users complain to them that sway does not work with nvidia hardware, which, as a user of a WM, is a rightful thing to complain about. Second, Drew does not want to support the nvidia way, and is angry at nvidia because they do not support the way that wlroots has chosen.
It is 100% ok for Drew to say that they don't want to maintain 2 code paths, and for wlroots and sway not to support nvidia. It is also 100% ok for nvidia to consider wlroots too niche to be worth the effort.
What's IMO not ok is for Drew to feel entitled to nvidia supporting wlroots. Nvidia does not owe wlroots anything.
IMO, when it comes to drivers and open source, a lot of the anger and conflict seems to stem from a sentiment of entitlement.
I read online comments _every day_ from people who have bought some hardware that's advertised as not supporting Linux (or MacOS, or whatever) and are angry at the hardware manufacturer (why doesn't your hardware support the platform it says it does not support? I'm entitled to support!!!), or at the dozens of volunteers who reverse engineer and develop open source drivers for free (why doesn't the open source driver that you develop in your free time work correctly? I'm entitled to you working for free for me so that I can watch Netflix!), etc. etc. etc.
The truth of the matter is that, for people using nvidia hardware on Linux for machine learning, CAD, rendering, visualization, games, etc., the hardware works just fine if they use the only driver nvidia supports, on the platforms nvidia says it supports.
The only complaints I hear are from people buying nvidia to do something they know is not supported and then lashing out at everybody else out of entitlement.
You're now somehow arguing that people should stop complaining about Nvidia's business practices. I would agree with that in the sense that Nvidia can do whatever they want: nobody is obliged to buy Nvidia, and Nvidia is not obliged to cater to everyone's needs. It's a free enough market. But even if you don't agree with some/most of the complaints, surely you must agree that Nvidia's track record of pissing off both other companies and people is problematic when they take control of a company with an ecosystem-driven business model like ARM's?
I'd agree with you that this is OP's argument; however, its main flaw is in omitting the fact that NVidia is not the only party that's "free" to do things.
We're not obliged to buy their cards and we aren't obliged to stay silent regarding its treatment of the open-source community and why we think it would be bad for them to acquire ARM.
I am always amazed at the amount of pro-corporate spin from (presumably) regular people who are little more than occasional customers.
We still are. I asked "which specific business practices are these", and was only pointed to ad hominems, entitlement, and one-sided arguments.
Feel free to continue discussing that on the other parent thread. I'm interested in multiple views on this.
> You're now somehow arguing that people should stop complaining about Nvidia's business practices
No. I couldn't care less about nvidia, but when somebody acts like an entitled choosing beggar, I point that out. And there is a lot of entitlement in the arguments that people are making about why nvidia is bad at working with others.
Nvidia has some of the best drivers for Linux there are. This driver is not open source and is distributed as a binary blob. Nvidia is very clear that this is the only driver they support on Linux, and if you are not fine with that, they are fine with you not buying their products. This driver supports all of their products very well (as opposed to AMD's, for example), it is developed by people paid full time to work on it (as opposed to most of their competitors, which also rely on people helping with their drivers in their free time - this is not necessarily bad, but it is what it is), and some of their developments are contributed back to open source, for free.
People are angry about this. Why? The only thing that comes to mind is entitlement. Somebody wants to use an nvidia card on Linux without using their proprietary driver. They know this is not supported. Yet they buy the card anyway, and then they complain. They do not only complain about nvidia. They also complain about, e.g., nouveau being bad, the Linux kernel being bad, and many other things. As if nvidia, or the people working on nouveau or the Linux kernel for free in their free time, owe them anything.
I respect people not wanting to use closed source software. Don't use Windows, don't use MacOSX, use alternatives. Want to use Linux? Don't use nvidia if you don't want to.
Except that they stop supporting older HW at some point. That, together with occasional crashes, taught me not to buy nVidia HW again.
NVIDIA insisted on pushing its own EGL streams even as the wider community was moving in a different direction.
They suffer from a major NIH syndrome and do not know how to work with others at all.
An NVidia purchase of ARM would also create a lot of conflicts of interest.
That's how this year's myspace, which people will have trouble remembering 5 years from now, can get a higher "valuation" than a large semiconductor company with a 30 year track record.
Investors and the financial sector are proving time and time again that they're unable to learn from their mistakes, through no "fault" of their own, because apparently it's human nature to just be horribly bad at this.
It amazes me that people think investors somehow learned anything from the dot-com bubble, given they've been repeating all of their other major mistakes every odd year or so.
I think it is because people are now so used to Apple's and Amazon's trillion-dollar valuations, with Apple closing in on 2 trillion, that $32B seems low or (relatively) cheap.
The reality is ARM was quite overvalued when it was purchased by SoftBank.
This might also be part of SoftBank's fire sale; they bought ARM for $32B just a few years ago (2016).
I always just assumed that interfacing proprietary IP with the GPL is a tricky legal business. One slip, and all your IP becomes open source.
Do you have a source explaining what licensing changes they would have to make and what impact that would have for Linux and Nvidia? I'd like to read that.
Hence AMD's push for Mantle and then Vulkan. The console-like API is the carrot to get people to use an API that has a validation layer, so that third parties can easily say "wow, what a broken game" rather than "wow, this new game runs on Nvidia and not AMD, what broken AMD drivers".
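To make the validation layer point concrete: Vulkan ships a standard, vendor-neutral validation layer that anyone can enable to check whether a game violates the spec, independently of whose driver is running. A minimal sketch of turning it on in C (assumes the layer is installed on the system):

    #include <vulkan/vulkan.h>
    #include <stdio.h>

    int main(void) {
        /* The Khronos validation layer intercepts every call and reports
           incorrect API usage - so "broken game" vs "broken driver" becomes
           an answerable question. */
        const char *layers[] = { "VK_LAYER_KHRONOS_validation" };

        VkApplicationInfo app = {
            .sType = VK_STRUCTURE_TYPE_APPLICATION_INFO,
            .apiVersion = VK_API_VERSION_1_1,
        };
        VkInstanceCreateInfo info = {
            .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
            .pApplicationInfo = &app,
            .enabledLayerCount = 1,
            .ppEnabledLayerNames = layers,
        };

        VkInstance instance;
        if (vkCreateInstance(&info, NULL, &instance) != VK_SUCCESS) {
            fprintf(stderr, "validation layer probably not installed\n");
            return 1;
        }
        /* ... run the app; validation messages go to the debug output ... */
        vkDestroyInstance(instance, NULL);
        return 0;
    }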
Nvidia open sourcing their drivers would completely destroy a large chunk of their competitive advantage, and the drivers are so intertwined with the IP of all the games they have hacks for that I'd be surprised if they would ever want to open source them, or even could if they wanted to.
More docs would be nice though.
- all kinds of problems wrt. the integration of the driver into the Linux ecosystem, including the proprietary driver having quality issues for anything but headless CUDA.
- nvidia getting in the way of the implementation of an open source alternative to their driver
AMD does the exact same thing and always has. When you see shaders come down the wire you can replace them with better-optimized or more performant versions. It's almost always fixing driver "bugs" in the game rather than actual game bugs. And the distinction is important.
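As a rough illustration of how that kind of shader replacement can work: the driver hashes incoming shader source and, if it recognizes it, compiles a hand-tuned variant instead. Everything below is a hypothetical sketch, not any vendor's actual mechanism:

    #include <stddef.h>
    #include <stdint.h>

    /* FNV-1a: a simple, stable hash over the shader text. */
    static uint64_t fnv1a(const char *s) {
        uint64_t h = 1469598103934665603ULL;
        while (*s) { h ^= (uint8_t)*s++; h *= 1099511628211ULL; }
        return h;
    }

    struct override { uint64_t hash; const char *replacement; };

    /* Table shipped inside the driver, keyed per game/shader
       (values here are placeholders). */
    static const struct override overrides[] = {
        { 0x9e3779b97f4a7c15ULL, "/* hand-optimized shader variant */" },
    };

    const char *select_shader(const char *app_source) {
        uint64_t h = fnv1a(app_source);
        for (size_t i = 0; i < sizeof overrides / sizeof overrides[0]; i++)
            if (overrides[i].hash == h)
                return overrides[i].replacement; /* use the tuned version */
        return app_source; /* otherwise compile what the game sent */
    }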
I do agree with you, but that element is something everyone has to do to remain competitive in games. Developers will only optimize for one platform (because they're crunching), and 9 times out of 10 that's an RTX 2080 Ti.
This persistent bit of FUD really needs to die. Yes, you have to be careful, but at this point it's ridiculously well known what is obviously correct and what is obviously incorrect when dealing with the GPL. I'm sure there are some grey areas that haven't been worked out, but avoiding those is fairly simple.
Nvidia is already in a weird grey area, releasing binary blobs with an "open source" shim that adapts its interfaces to the GPL kernel. As much as the Linux kernel's pragmatic approach toward licensing helps make it easier on some hardware manufacturers, sometimes I wish they'd take a hard line and refuse to carve out exceptions for binary drivers, inasmuch as those can sometimes/always be considered derived works.
Way to contradict yourself.
> but avoiding those is fairly simple.
> As much as the Linux kernel's pragmatic approach toward licensing helps make it easier on some hardware manufacturers, sometimes I wish they'd take a hard line and refuse to carve out exceptions for binary drivers, inasmuch as those can sometimes/always be considered derived works.
Maybe this is what needs to happen to force companies to change their mindset, but where I work, lawyers tell us to (1) never contribute to any GPL'ed code, (2) never distribute GPL'ed code to anybody (e.g. not even in a docker container), etc.
Their argument is: a single slip could require us to publish all of our code and make all of our IP open, and to make sure this doesn't happen, an army of lawyers and software engineers and managers would need to review every single code contribution that has something to do with the GPL. So the risks are very high, the cost of doing this right is very high as well, and the reward is... what, exactly? So in practice this means that we can't touch GPL'ed code with a 10-foot pole; it is not worth the hassle. If I were to ask my manager, they would tell me that it is not worth it. If they ask their manager, they will be told the same. Etc.
BSD code? No problem, we contribute to hundreds of BSD, MIT, Apache, ... licensed open source projects. Management tells us to just focus on those.
For some older card generations (e.g. GTX 600 series) it was competitive with the official driver. But in every hardware generation since then, the GPU requires signed firmware in order to run at any decent clock speed.
The necessary signed firmware is present inside the proprietary driver, but nouveau can't load it because it's against the ToS to redistribute it.
Most GPU features are available but run at 0.1x speed or slower because of this single reason. Nvidia could absolutely fix this "tomorrow" if they were motivated.
At this point I've gotten so bloody tired of the games people play with IP that I'm arriving at the point where I think I wouldn't even mind being part of the collateral damage of our industry being burned to the ground through the complete dissolution of any software delivery or related contract. If you sell me hardware and play shenanigans to keep me from being able to use it to its fullest capability, you're violating the intent of the right of first sale.
To be honest, I think every graphics card should have to be sold bundled with enough information for a layperson (or, I'll throw out a bone, a reasonably adept engineer) to write their own hardware driver/firmware. Without that requirement, this industry will never change.
The point is it's not about open sourcing your proprietary driver, but about not getting in the way of an alternative open source driver - maybe even helping it a bit, even if just unofficially.
I think if I were nvidia I might go in the direction of having a fully or at least partially open source driver for graphics and a not-so-open-source driver for headless CUDA (potentially running alongside an Intel integrated graphics based head/GUI).
Though I don't know what they plan wrt. ARM desktops/servers, so this might conflict with their strategies there.
The only tricky things involve blatantly betraying the spirit of the agreement while trying to pretend to follow the letter and hoping a judge supports your interesting reading of the law.
Even so, there is no provision in law wherein someone can sue you and magically come into possession of your IP.
It would literally require magical thinking.
Take a look at a recent snapshot of changesets and lines of code to the Linux kernel contributed by various employers: https://lwn.net/Articles/816162/
Arm itself is listed at 1.8% by changesets; but Linaro is a software development shop funded by Arm and other Arm licensees to work on Arm support in various free software, and they contributed 4% of changesets and 8.8% by lines of code. And Code Aurora Forum is an effort to help various hardware vendors, many of whom are Arm vendors, get drivers upstreamed, and they contributed 1.8% by changesets and 10.1% by lines changed. A number of other top companies listed are also Arm licensees, though their support may be for random drivers or other CPU architectures as well.
However, Arm and companies in the Arm ecosystem do make up a fairly large amount of the code contributed to Linux, even if much of it is just drivers for random hardware.
And Arm and Linaro developers also contribute to GCC, LLVM, Rust, and more.
You are not required to do that. Use nouveau, buy an AMD or intel GFX card.
You are not entitled to it either. People developing nouveau on their free time don't owe you anything, and nvidia does not owe you an open source driver either.
I don't really understand the entitlement here. None of the drivers on my windows and macosx machines are open source. They are all binary blobs.
I don't use nvidia GFX cards on linux anymore (intel suffices for my needs), but when I did, I was happy to have a working driver at all. That was a huge upgrade from my previous ATI card, which had no driver at all. Hell, I even tried using AMD's ROCm recently on Linux with a 5700 card, and it wasn't supported at all... I would have been very happy to hear that AMD had a binary driver that made it work, but unfortunately it doesn't.
And that was very disappointing because I thought AMD had good open source driver support. At least when buying Nvidia for Linux, you know beforehand that you are going to have to use a proprietary driver, and if that makes you uncomfortable, you can buy just something else.
Has internet discussion really fallen this low that all needs to be spelled out and no context can ever be implied?
We're in a thread about NVidia, so of course OP's talking about NVidia hardware here. Yeah, they can get AMD, but that does not change their (valid) criticisms of NVidia one bit.
> I don't really understand the entitlement here. None of the drivers on my windows and macosx machines are open source. They are all binary blobs.
Windows and macOS have different standards for drivers than many Linux users do. Is it really that surprising that users who went with an open-source operating system find open-source drivers desirable too?
I find it really weird to assume that because something is happening somewhere, it's some kind of an "objective fact of reality" that has to be true for everyone, everywhere.
When you shop for things, are you looking for certain features in a product? Would you perhaps suggest in a review that you'd be happier if a product had a certain feature or that you'd be more likely to recommend it?
It's the same thing. NVidia is not some lonely developer on GitHub hacking during their lunch break on free software.
Do you also assume that the kind of music you find interesting is objectively interesting for everyone?
This has nothing to do with entitlement. It's listing reasons for why someone thinks NVidia buying ARM is a bad idea.
It is to me. When I buy a car, I do not leave a 1 star review stating "This car is not a motorcycle; extremely disappointed.".
That's exactly how these comments being made sound to me. Nvidia is very clear that they only support their proprietary driver, and they deliver on that.
I have had many GFX cards from all vendors over the years, and I've had to send one back because the vendor wasn't honest about things like that.
Do I wish nvidia had good open source drivers? Sure. Do I blame nvidia for these not existing? Not really. That would be like blaming Microsoft or Apple for not making all their software open source.
I do however blame vendors that do advertise good open source driver support that ends up being crap.
What does any of this have to do with nvidia buying or not buying ARM? Probably nothing.
What nvidia does with their GFX driver and what they would do with ARM can be as different as what Microsoft does with Windows and what it does with GitHub.
And you can argue that that still is all fine, and that if you're making a choice to run Linux, then you have to accept trade offs. And I'm sympathetic to that argument.
But you're also trying to say that we're not allowed to be angry at a company that's been hostile to our interests. And that's not a fair thing to require of us. If nvidia perhaps simply didn't care about supporting Linux at all, and just said, with equanimity, "sorry, we're not interested; please use one of our competitors or rely on a possibly-unreliable community-supported, reverse-engineered solution", then maybe it would be sorta ok. But they don't do that. They foist binary blobs on us, provide poor support, promise big things, never deliver, and actively try to force their programming model on the community as a whole, or require that the community do twice the amount of work to support their hardware. That's an abusive relationship.
Open source graphics stack developers have tried their hardest to fit nvidia into the game not because they care about nvidia, but because they care about their users, who may have nvidia hardware for a vast variety of reasons not entirely under their control, and developers want their stuff to work for their users. Open source developers have been treated so poorly by nvidia that they're finally starting to take the extreme step of deciding not to support people with nvidia hardware. I don't think you appreciate what a big deal that is, to be so fed up that you make a conscious choice to leave a double-digit percentage of your users and potential users out in the cold.
> None of the drivers on my windows and macosx machines are open source. They are all binary blobs.
Not sure how that's relevant. Windows and macOS are proprietary platforms. Linux is not, and should not be required to conform to the practices and norms of other platforms.
This company in no way, shape, or form is obligated to cater to your interests. In this case it would likely be counter to their interests.
But for entirely different reasons. Apple switched from PowerPC to Intel because the PowerPC processors IBM was offering weren't competitive. They switched from Intel for some combination of the same reason (Intel's performance advantage has eroded) and to bring production in-house, not because Intel was quarrelsome to do business with.
Meanwhile, Apple refused to do business with nVidia even at a time when they unambiguously had the most performant GPUs.
Nvidia introduced a set of laptop GPUs that had a high rate of failure. Instead of working with their customers and eating some of the cost of repairing these laptops, they told them to deal with it. Apple, being one of their customers, got upset at being left holding the bag of shit and hasn't worked with them since.
Intel and AMD have used their x86/AMD64 patents to block Nvidia from entering the x86 CPU market.
Nvidia purchasing ARM will hurt not the large ARM licensees like Apple and Samsung, but the ones that need to use the CPU in a device that does not need any of the multimedia extensions NVidia will be pushing.
But yeah I don't think it's about collaboration either.
Out of curiosity, is there any large open source product from NVidia? I can't think of any.
NVIDIA contributes mostly to existing open source projects (LLVM, Linux kernel, Spark, etc.), see https://developer.nvidia.com/open-source
It is a bit like WebKit. It was based on KHTML which was an open source HTML renderer. But Apple expanded that so greatly on their own payroll that it is hard to call WebKit anything but an Apple product.
EDIT - for added detail:
> - nvidia manufactures GPU chips, collaborates with dozens of OEMs to ship graphics cards
Most (all?) of which bend to Nvidia's demands because Nvidia's been extremely successful in getting end users to want their chips, making the Nvidia chip a selling point.
> - nvidia collaborates with IBM, which ships POWER8, 9, and 10 processors, all with nvidia technology
IBM bends to Nvidia's demands so POWER can remain a relevant HPC platform.
> - nvidia collaborates with OS vendors like microsoft very successfully
Microsoft is the only significant OS vendor with which Nvidia collaborates successfully. It's true - but for the longest time Nvidia would have been out of business if they didn't. I will concede this point, but I don't find this is enough to paint a different picture.
> - nvidia collaborated with mellanox successfully and acquired it
Mellanox bent to Nvidia's demands to such an extent that they were acquired.
> - nvidia collaborates with ARM today...
Collaboration in what sense? My impression is that Nvidia and ARM have a plain passive customer/supplier relationship today.
> The claim that nvidia is bad at open source because it does not open source its Linux driver is also quite wrong, since NVIDIA contributes many, many hours of paid developer time to open source, has many open source products, donates money to many open source organizations, and contributes paid manpower to many open source organizations as well...
Nvidia is humongously behind their competitors Intel and AMD in open source contribution while having far more R&D in graphics. They are terrible at open source compared to the "industry standard" of their market, and only partake as far as it serves their short-term needs.
They are perfectly entitled to behave this way, by the way. But Nvidia's open source track record is only more evidence that they don't understand how to work in an open ecosystem, not less.
> You can take any big company, e.g., Apple, and paint a horrible case by cherry picking things (no Vulkan support on MacOSX forcing everyone to use Metal, they don't open source their C++ toolchain, etc.), yet Apple does many good things too (open sourced parts of their toolchain like LLVM, open source swift, etc.).
The "whataboutism" is valid but completely irrelevant here. I would also not appreciate Apple buying ARM.
> For example, you paint it as if Nvidia is the only company Apple has had problems with, yet Apple has parted ways with Intel, IBM (PowerPC), and many other companies in the past.
Apple has parted ways with Intel, IBM, Motorola, Samsung (SoCs) and PowerVR for technology strategy reasons, not relationship reasons. Apple had no technical reason to part ways with Nvidia (especially considering they went to AMD instead), but did so because of the terrible relationship between the two.
I'm typing this on a MacBook with an Nvidia GPU that was created in 2012, many years after the failing laptop GPU debacle. AFAIK, Apple used that GPU until 2015?
I'd wager that Apple has been using AMD for something as mundane as better pricing, rather than a disagreement 12 years ago. (Again: despite all the lawsuits, Apple is still a major Samsung customer.)
This used to be true, as Apple swapped between AMD and Nvidia chips several times in 2000-2015. Then Nvidia and Apple fell out, and Apple has not used Nvidia chips in new designs in 5 years - a timeframe in which Nvidia coincidentally achieved its largest technical advantages over AMD. Apple goes as far as to actively prevent Nvidia's own macOS eGPU driver from working on modern macOS. A simple pricing dispute does not appear to be a good explanation here.
It is the same issue that caused the Xbox 360 red ring of death, and caused "baking your graphics card" to become a thing (including for AMD cards). It basically affected everyone in the industry at the time, and Apple would not have gotten any different outcome from AMD had AMD been in the hot seat at the time. They were just throwing a tantrum because they're Apple, damn it, and they can't have failures! Must be the supplier's fault.
That one has always struck me as a "bridezilla" story where Apple thinks they're big enough to push their problems onto their suppliers and NVIDIA said no.
And as far as the Xbox thing... Microsoft was demanding a discount and NVIDIA is free to say no. If they wanted a price break partway through the generation, it probably should have been negotiated in the purchase agreements in the first place. NVIDIA needs to turn a profit too and likely structured their financial expectations of the deal in a particular way based on the deal that was signed.
Those are always the go-to "OMG NVIDIA so terrible!" stories and neither of them really strike me as something where NVIDIA did anything particularly wrong.
Canonical, which ships nvidia's proprietary driver with Ubuntu, is another quite major OS vendor that collaborates with nvidia successfully. Recently, Ubuntu's wayland-based desktop environment was also the first to work with nvidia's driver (the results of this work are open source).
You will find ARM Macs are cheaper than Intel Macs, and even if not as fast, they consume less power thanks to mobile technology.
Microsoft had the Surface tablet with ARM chips and an ARM version of Windows, which didn't sell as well, but then Apple is not Microsoft and won't make the same mistakes.
This contradicts the claim from the OP that suggests that all the projects from one of these companies are all bad.
Xbox: The Xbox's security was broken, and Nvidia apparently took the high road: they claimed a loss on all existing chips in the supply chain (declaring a loss for the quarter out of nowhere and tanking their stock for a bit) and allowed Microsoft to ship a new initial boot ROM as quickly as possible at a minimum of cost to Microsoft. When that new mask ROM was cracked within a week of release, Microsoft went back to Nvidia looking for the same deal, and Nvidia apparently told them to pound sand - in fact, they said they would be doing no additional work on these chips, not even die shrinks (hence why there was no OG Xbox Slim). There are other reasons why Microsoft felt like Nvidia still owed them, but it was a bit of a toxic relationship for everyone involved.
PS3: Nvidia was never supposed to provide the GPU until the eleventh hour. The Cell was originally supposed to crank up to 5GHz (one of the first casualties of the end of Dennard scaling, and of how it affected Moore's law as we conceived it), and there were supposed to be two Cell processors in the original design, with no dedicated GPU. When that fell through and they could only crank them up to 3.2GHz, Sony made a deal with Nvidia at the last second to create a core with a new bus interconnect to attach to the Cell. And that chip was very close to the state of the art from Nvidia. Most of its problems centered around duct-taping a discrete PC GPU into the console with time running out on the clock, and I don't think anyone else would have been able to deliver a better solution under those circumstances.
Like I said, Nvidia is a scummy company in a lot of respects, but I don't think the Xbox/PS3 issues are necessarily their fault.
If working with everyone meant stepping back and relenting in every possible way, then Nvidia would not be profitable. I am not sure why Microsoft felt they were entitled to anything from Nvidia. And Nvidia just said no. It was that simple.
Nvidia wants to protect their business interests, and that is what business is all about. And yet everyone on the internet seems to think companies should do open source or throw resources into it, etc.
I am just pointing out that Nvidia's evident opinion on how to run a business (their corporate culture) is not in line with cultivating an open ecosystem like the one ARM is running. And the cultivation of this ecosystem is ARM's key to success here. Nvidia is entitled to run their business how they want, but I'm very much hoping that that way of working does not translate to how they will run ARM.
People everywhere in this thread are having huge difficulty separating the point "Nvidia's way of doing business does not match ARM's" from "I have personal beef with Nvidia's way of doing business". I'm trying to make the former argument.
> That is just not true.
Out of curiosity - what isn't true here? Am I missing facts, or are you expressing disagreement with my reading of the business situation? If the latter is based on some notion that I have personal beef with Nvidia, then please reconsider.
Which is exactly why SPARC is dying and, with one exception, Power is too, and why RISC-V is yet to deliver. Nobody (bar IBM's POWER line) is building good processors with those ISAs that make it worth the effort to use them. Nothing to do with the ISA - you just need chips people are interested in using.
About the only thing that could force that to change would be another company buying up ARM and changing the licensing mechanisms (e.g. pricing, or even removing some license options) going forward... or just wrecking the product utterly.
I do think RISC-V has an opportunity here, but only if ARM sells out to NV and NV screws this up as hard as they're likely to in that situation.
The way I see it, this may actually generate an incentive for someone to do that. One of the reasons it isn't happening yet is that there's no real need while ARM vendors are supplying chips, and no real chance with ARM vendors as competition. This could, in theory, clear the way.
This is particularly true for Android because basically the entire thing is written in portable languages and the apps even run on a bytecode VM already, so switching to another architecture or even supporting multiple architectures at the same time wouldn't be that hard.
Google could easily afford to design their own RISC-V CPUs and port Android to it, if they thought it was in their strategic interests to do so.
I think it really depends on how nVidia-owned Arm behaves. If it behaves the same as Softbank-owned Arm, I don't think Google would bother. If it starts to behave differently, in a way which upsets the Android ecosystem, Google might do something like this. (I imagine they'll give it some time to see whether Arm's behaviour changes post-acquisition.)
And given that Nvidia is a US company, that makes them quite toxic for a Chinese company to source from.
“Arm revealed that an investigation had uncovered undisclosed conflicts of interest as well as violations of employee rules.”
The story you posted is incredible. Does this happen anywhere else in the world?
It's not like he was dismissed and he just didn't leave his office. He's challenging the legality of his dismissal.
a•verse (ə vûrs′), (adj.): Having a strong feeling of opposition, antipathy, repugnance, etc.; opposed: He is not averse to having a drink now and then.
I didn't mean to bother you; I was being pedantic. Thanks for pointing it out.
It is very hard to make an anti-competition case against someone who is consistently 2nd and 3rd in the market.
On a related note: with PCs now definitely heading towards ARM, this is a sensible move by NVIDIA: they could now sell GeForce-integrated ARM chips for future Windows and Linux boxes - and then they would be the ones with the dominant marketshare.
Nvidia's GPP would require manufacturers such as ASUS, Gigabyte, MSI, HP, Dell, etc. to have their gaming brands use only GeForce GPUs. So all the well-known gaming brands such as Alienware, Voodoo, ROG, Aorus and Omen would only be allowed to have GeForce. nVidia already has aggressive marketing plastering their brand across every esports competition, which is fair game, but the GPP would be a contractual obligation to not use AMD products.
Nike is the exclusive clothing brand of every NBA team. American Airlines is the exclusive airline of the Dallas Cowboys. UFC fighters can only wear Reebok apparel the entire week leading up to a fight.
Heck, I worked for a company that signed an exclusive deal to only use a single server vendor.
How did that work out? Did your company secure a good rate - and/or did the vendor become complacent once they realised they didn’t have to compete anymore? Did the contract require minimum levels of improvement in server reliability and performance with each future product generation?
The only reason nvidia has 20% of the GPU market at all is because their products are better, but without volume, there is very little separating you from losing the market.
If NVIDIA slips over AMD and Intel perf wise during one generation, the competition will have cheaper and better products, so it's pretty much game over.
It has happened many times.
It's ok for nvidia to release an architecture that does compute very well but barely improves graphics, and vice versa.
But I don't recall any compute generation where there was a better product from the competition.
Which product? It can't possibly be their GPUs you mean because that would be hilariously wrong. That is like saying a Lamborghini is a better car than a VW because it has a higher top speed.
To me - and to many, many buyers - AMD is the superior product. To most, Intel has the best product by far (business laptops, Chromebooks, etc.)
AMD is the only option for a performant GPU with reasonable open source drivers. Intel has the drivers but they don't currently offer discrete GPUs at all. nVidia doesn't have the drivers.
AMD makes it a lot easier to do GPU virtualization.
AMD GPUs are used in basically all modern game consoles, so games that run on both are often better optimized for them.
They also have the best price/performance in the ~$300 range, which is the sweet spot for discrete GPUs.
Qualcomm, however, has been shipping rebranded or tweaked (which of the two is unclear) standard ARM CPU core designs since 2017. They very much depend on ARM doing a lot of the heavy lifting.
Apple is probably the safest of the bunch given how they helped build ARM.
ARM announced it would cut ties with Huawei after the US ban but reconsidered the decision less than half a year later so I assume that the architecture license is either usually iron clad or simply too valuable to both sides to give up.
An architectural license is not necessarily perpetual.
> Drew declined to comment on whether the deal was multi-generational
So they may have license for ARMv8 but not future ISAs like ARMv9.
And again, the terms of the license may vary. I have the impression that Apple has a far more permissive license than anyone else out there for example.
Qualcomm has shown in the past that it can build great custom ARM CPUs not based on an ARM standard design. But it seems they decided the investment was not worth it after their custom Kryo design (which was not a complete failure, but definitely not better than what ARM was producing at the time). I think they'll need to go back to their own silicon at some point if this acquisition happens, though.
For sure, Huawei and Samsung (and smaller manufacturers like Rockchip, Mediatek, Allwinner) don't have an impressive track record designing custom CPU IP, and definitely not custom GPU IP. These guys should be terribly alarmed if this were to happen.
You should mind both of these things. The more oligopolistic technology is, the worse.
Other than that - fully agree with your concern. As a GPU developer I'm often frustrated with NVIDIA's attitude towards OpenCL, for example.
But anti-trust is so diluted and toothless these days, that the deal will probably be simply rubber stamped. If they aren't stopping existing anti-competitive behavior, why wouldn't they allow such bullies to gain even more power?
With this buy Nvidia has GPUs, CPUs, networking, what else do they need to be a vertically integrated shop?
Would Arm stakeholders (i.e. much of the computer industry) prefer an IPO?
In 2017, Softbank's Vision Fund owned 25% of Arm and 4.9% of Nvidia, i.e. these are not historically neutral parties, https://techcrunch.com/2017/08/07/softbank-nvidia-vision-fun...
After WeWork imploded, https://www.bloomberg.com/opinion/articles/2019-10-23/how-do...
> Neumann created a company that destroyed value at a blistering pace and nonetheless extracted a billion dollars for himself. He lit $10 billion of SoftBank’s money on fire and then went back to them and demanded a 10% commission. What an absolute legend.
Is the global industry (cloud, PC, peripheral, mobile, embedded, IoT, wearable, automotive, robotics, broadband, camera/VR/TV, energy, medical, aerospace and military) loss of Arm independence our only societal solution to a failed experiment in real-estate financial engineering?
Is Arm not profitable as a standalone business? They recently raised some license fees by 4X.
I’m skeptical that will work, but Son was dumb enough to pay $31B with no strategic value.
At the time I thought Son had clever telco synergies in mind, but I gave him far too much credit.
The problem is that Son needs cash, so he's flogging off everything he can to get it.
Grossly so. They paid like 45 percent above what the stock was trading at the time.
They'd be paying a premium for a path to an all-nvidia datacenter & supercomputer.
Consider HPC applications like Oak Ridge's Frontier supercomputer. They went with an all AMD approach in part due to AMD's CPUs & GPUs being able to talk directly over the high-speed Infinity Fabric bus. Nvidia's HPC GPUs can't really compete with that, since neither Intel nor AMD are exactly in a hurry to help integrate Nvidia GPUs into their CPUs.
This makes ARM potentially uniquely valuable to Nvidia - they can then do custom server CPUs to get that tight CPU & GPU integration for HPC applications.
There is https://en.wikipedia.org/wiki/NVLink
which is supported by https://en.wikipedia.org/wiki/POWER9
those two combined give you https://en.wikipedia.org/wiki/Summit_(supercomputer)
currently the world's number 2 supercomputer (only very recently dethroned), according to the article.
Installed at Oak Ridge.
So they are already there, just needing some premium POWER?
Amazon paid $350M for Annapurna, ~1/100th of $32B.
For embedded devices, Nvidia already ship Jetson boards with Arm CPUs and Nvidia GPUs.
To buy themselves back from owner Softbank, who can return money to investor Saudi Arabia? https://www.cnbc.com/2018/10/23/softbank-faces-decision-on-w...
According to some comments in this thread, the alternative is the slow destruction of the neutral Arm ecosystem. While some new baseline could be established in a few years, many Arm customers could face a material disruption in their supply chain.
With the US Fed supporting public markets, including corporate bond purchases of companies that include automakers with a supply chain dependent on Arm, there is no shortage of entities who have a vested interest in Arm's success.
If existing Arm management can't write a compelling S1 in the era of IoT, satellites, robots, edge compute, power-efficient clouds, self-driving cars and Arm-powered Apple computers, watches, and glasses, there will be no shortage of applicants.
Publicly traded companies that rely on income from "licensing" peak in revenue then stagnate because innovation becomes harder to come by.
Regarding innovation, ARM's been at it since 1990. I'm sure it's not the same now as it was 30 years ago, but we're well past the point where one can reasonably fear it to be an unsustainable business. Last time I heard numbers, they were talking about more than 50 billion devices shipped with ARM IP in them. That is a massive market.
You don't answer my question. Why wouldn't licensing businesses work as publicly traded companies? What's the fundamental difference, especially in an increasingly fabless market, between a company licensing IP to other companies and a company selling productized IP to consumers?
At the moment ARM lives or dies by the success of the ecosystem as a whole.
When it's owned by a customer, this may no longer be the case, and there are huge potential conflicts of interest. For example, would an Nvidia-owned ARM offer a new license to a firm that would be a significant competitor to an existing Nvidia product (e.g. Tegra)? Will Nvidia hinder the development efforts of other competitors? Will Nvidia give itself access to new designs first? How will it maintain appropriate barriers to the flow of information about competitors' new designs to its own design teams?
I can see this getting very significant regulatory scrutiny and rightly so.
I think Trump started something unintentionally that might put other countries in a better position to deal with the American semiconductor hegemony 5-10 years from now.
IMO, Nvidia should be allowed to buy ARM just for things to get bad enough that people want to buy non-NVIDIA products. For years NVIDIA has had shitty business practices, but I bet most people on HN (and the rest of the world) don't give 2 shits about competition and market leadership. They just buy NVIDIA because it's the standard.
Things have to get worse before they get better. It's really the only way humans and the public seem to be able to learn.
Source: job interview
I’m holding a reference copy of aarch64 in my hands and I’ve written half an assembler from it.
Granted SoCs often are just as bad as nvidia, but that’s not ARM’s fault or problem, really.
They do, but they don't provide documentation for them.
That's a heavy bet on ML and crypto(-currency? -graphy?). Has ML, so far, really made any industry-changing inroads in any industry? I'm not entirely discounting the value of ML or crypto, just questioning the premature hype train that exists in tech circles (especially HN).
Well, yes that is the point. My theory is that the gaming market for GPUs is well understood. I don't think there are any lurking surprises on the number of new gamers buying high-end PCs (or mobile devices with hefty graphics capabilities) in the foreseeable future.
However, if one or more of the multitude of new start-ups entering the ML and crypto (-currency) space end up being the next Amazon/Google/Facebook then that would be both unforeseeable and unbelievably transformative. Maybe it won't happen (that is the risk) but my intuition suggests something is going to come out of that work.
I mean, it didn't work out for Sony when they threw a bunch of SPUs in the PS3. They went back to a traditional design for their next two consoles. So not every risk pans out!
Same with machine vision. It's going to be everywhere — not just self-driving trucks (which, unlike cars, are going to be big soon), but also security devices, warehouse automation, etc.
All this is normally run on vector / tensor processors, both in huge datacenters and on local beefy devices (where a stock built-in GPU alongside ARM cores is not cutting it).
This is a growing market with a lot of potential.
Licensing CUDA could be quite a hassle, though. OpenCL is open but less widely used.
Does the tech industry not count, or are you only considering industries that are typically slower moving?
It is (IIRC) a pretty fundamental part of self driving tech. I honestly think this is what drives a lot of Nvidia's valuation.
TPUs are massively parallel Float16 engines - not really applicable to anything outside of ML.
Hopefully some of those cryptocurrencies (until they get proof-of-stake fully worked out) move to memory-hard proof-of-work using Curve25519, Ring Learning With Errors (New Hope), and ChaCha20-Poly1305, so cryptocurrency accelerators can pull double duty as quantum-resistant TLS accelerators.
I'm not necessarily meaning dedicated instructions, but things like vectorized add, xor, and shift/rotate instructions, at least 53-bit x 53-bit -> 106-bit integer multiplies (more likely 64 x 64 -> 128), and other somewhat generic operations that tend to be useful in modern cryptography.
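To illustrate how generic those operations are - the ChaCha20 quarter round is nothing but 32-bit adds, xors and rotates, and the wide multiply is a single expression on most 64-bit compilers. A rough sketch in C (unsigned __int128 is a GCC/Clang extension; MSVC would use _umul128 instead):

    #include <stdint.h>

    /* Rotate-left on 32-bit words: one of the generic primitives mentioned above. */
    static uint32_t rotl32(uint32_t x, int n) {
        return (x << n) | (x >> (32 - n));
    }

    /* The ChaCha20 quarter round (RFC 8439): only adds, xors and rotates,
       so any ISA with fast vector add/xor/rotate accelerates it. */
    static void quarter_round(uint32_t *a, uint32_t *b, uint32_t *c, uint32_t *d) {
        *a += *b; *d ^= *a; *d = rotl32(*d, 16);
        *c += *d; *b ^= *c; *b = rotl32(*b, 12);
        *a += *b; *d ^= *a; *d = rotl32(*d, 8);
        *c += *d; *b ^= *c; *b = rotl32(*b, 7);
    }

    /* Widening 64 x 64 -> 128-bit multiply, the other primitive mentioned. */
    static void mul64x64(uint64_t a, uint64_t b, uint64_t *hi, uint64_t *lo) {
        unsigned __int128 p = (unsigned __int128)a * b;
        *hi = (uint64_t)(p >> 64);
        *lo = (uint64_t)p;
    }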
It is also hardware that is similar to ML acceleration: it needs better integer and boolean (algebra, not branching) support, and has a stronger focus on memory access (which ML acceleration also needs, but gains less from). So how come nobody even speaks about this?
ML would benefit from both too, as would highly complex graphics and physics simulation. The cache pre-population is probably at odds with low latency graphics.
If Intel can survive their current CPU manufacturing issues, manage to innovate on their design again, and manage to improve the Xe design in a couple generations, they might be in a good position in several years. I (as a layman) give them a 50/50 shot at recovering or just abandoning the desktop CPU and GPU market.
Being only a year behind market leaders with your first product actually seems pretty impressive to me. Especially if that's at the (unreasonably priced) top of the line, and they have something competitive in the higher-volume but less sexy down-market segments.
The >= KNC products have all been "one generation behind" the competition. When the Intel Xe is released, Intel will have been trying to enter this market for about 30 years.
This market is more important now than ever before. I hope that they keep pushing and do not axe the Intel Xe after the first couple of generations only to realize 10 years later that they want to try again.
That is the thing with a lot of these side projects Intel is always working on. It would be great if they actually delivered good products, but they often spend billions acquiring these companies and developing these products only to turn out one or two broken products and then dump the whole project.
I think this time is different with Xe, but I can't blame anyone for looking at the past history and being dubious that Intel is in it for the long haul.
I know that these probably are solvable problems, but they left a pretty bad taste...
I always get a kick out of the sentiment toward Intel on HN.
Intel is booming financially. Things have never been better for them in that respect. They have every opportunity to fix their mess.
Intel has eight times (!) the operating income of Nvidia, with a smaller market cap.
Intel is one of the world's most profitable corporations. $26 billion in operating income the last four quarters. Their margins are extreme. Their sales are at an all-time high. Their latest quarterly report was stellar.
In just 2 1/2 years Intel has added a business the size of Nvidia and AMD combined.
If they can't utilize their current profit-printing position to recover, then they certainly deserve their tombstone. Nobody has ever had an easier opportunity to find their footing.
And, yet, the writing was on the wall. Nokia was doomed once the smartphone era came. That's where Intel is today: AMD crushes them on the high-end general purpose CPUs. ARM crushes them on I/O performance and the low-end for general purpose CPUs. GPUs crush Intel in the middle, for special-purpose (mainly single-precision floating point) computing.
Right now, large portions of new computer sales, and an even larger portion of the high-margin cpu sales, come from cloud computing. AMD and ARM are stealing huge market share from Intel on that front. I don't see that momentum changing any time soon.
There's a reason that Intel has 8x the operating income of NVidia while having a smaller market cap. It's not because of where they are currently--it's where they are going. Stock market valuations are forward-looking, and the future doesn't look so bright for Intel.
This sounds difficult to believe. Do you know a benchmark that shows this?
The thing is, that's not "ARM" it's "ARM as implemented by AWS", they get to choose how many memory channels to add etc.
That could be a huge market with no competitors in sight.
Personally I won't be betting on or against Intel - it wouldn't shock me if they follow the Nokia route, it also wouldn't shock me if they come out with a new generation that puts them back on top within the next few years.
So was RIM in 2010. Profits are a trailing indicator. The PC market is really small compared to the mobile market and declining. While Apple only has 10% of the overall market, it has a much higher percentage of high end personal computers and Intel is about to lose Apple as a customer.
PCs also are having longer refresh cycles. What does “recovery” look like? PC sales going up? That's not going to happen.
They still have the server market, and while that is probably growing, Amazon is pushing its own ARM processors hard, and MS and Google can't be too far behind.
Their decades of more-or-less monopoly status have made them complacent; their revenue is still high because of the inertia built into the market, which simply will not vanish.
The datacenter, for example, is still dominated by Xeon not because people like Xeon over Epyc, but because there is not an easy migration path between the two platforms. If I were building a whole new server farm with all new VMs, I would choose Epyc all day... but if I need to upgrade hosts in an existing farm with no downtime, well, that will need to be Xeon then...
When it comes to desktops/laptops, though, Lenovo's AMD line is attractive and suffers no such problems.
The innovators and underdogs are always great to see, and they fuel our collective imagination, so it's no surprise that they dominate the HN front page. Of course, that mind-share dominance is in stark contrast to the well-entrenched money-printing machines they're trying to disrupt, who are happy to keep dominating their respective industries year after year instead.
Past performance. In spite of their earnings their stock plummeted on the 7nm news. It’s not just HN that is bearish.
The margins on those big Xeon chips have been so good that they ditched everything else and painted themselves into a corner, sitting on the sidelines for the past 20 years as new markets emerged.
The original i740 was theoretically a capable card, although fairly hampered by being forced to use main memory for textures. Intel eventually backed out of the graphics market back then and instead continued to use the 740 as the basis for the integrated graphics in the i810/815 chipsets.
But as GPUs became closer to what we now see as real GPUs, Intel continued to press on with the idea that keeping things done on the CPU was better for them (i.e. encouraging upgrades to higher-end CPUs vs. selling more lower-margin graphics cards).
You saw a similar pattern with the 845/855/865: shaders were all done in software. (Hey, it finally almost justified NetBurst, right? ;)
And this pattern seems to continue with various forms of infighting between groups up to this day.
The other consistent problem they have had is driver compatibility/capability.
But yes, otherwise, nearly every Intel CPU has integrated graphics, whereas only a few select AMD CPUs have integrated graphics (and AMD brands them as APUs, not CPUs).
It seems that is changing somewhat with the 4000-series APUs, but guess what, those are only going to be sold to OEMs, not individuals.
It's all rather frustrating, since I'm still on an i7-4770k and wouldn't mind an upgrade.
But the reality is that the vast majority of business desktops and laptops don't need anything better than what Intel offers.
If AMD gains share in that segment (and it looks like they are), it will IMO be because of finally having a better CPU, not because of a better GPU.
Even with the acquisition of ARM, I don't see Nvidia any better off than Intel at this moment as far as the CPU/GPU stack goes. Frankly, I would think AMD would be the one to end up by the wayside, since they are still weak on the software side.
Right now I have a fairly decent GPU in my MacBook which I've hardly used. Very little supports it, because it's not nVidia. I can't use it for AI training, for example. Sure, it might work OK for some games, but MacBooks aren't really for gaming, and nVidia has captured that market nicely anyway.
Things can change; maybe Intel's software stack will be incredible. I don't know. But they have quite a hill to climb before they reach that summit.
Given this is about their third attempt at releasing a high-performance GPU, I think skepticism is warranted until they’re actually selling something to the general public.
nvidia has a huge market, people buy their stuff because it's fast and there's software support for everything from AAA games to scientific/engineering modelling to ML.
intel took far, far too long to come up with a viable graphics ecosystem to ever be successful.
Apple seems to be.
I was about to write how that’s a special case, but then I remembered that Microsoft is also making their own chips for Surface products.
If this does happen, I think Intel will not sweat it because it will only be Apple. Apple has no interest in selling CPUs. They want to be able to make severe changes (cut the fat) between revisions and not have people crying about having to update their architecture to support it.
Apple was a founding member of AIM (Apple, IBM, Motorola) for PowerPC. (As rumour has it, after DEC refused to go for a higher-volume lower-margin Alpha AXP derivative when Apple came knocking for an m68k replacement, Apple then asked IBM for a higher-volume lower-margin POWER derivative, leading to AIM and PowerPC.) As far as I know, only Acorn had a role in founding/spinning off ARM.
On the other hand, I would be very surprised if Apple wasn't smart enough to get a very-long-term/perpetual license on the ARM instruction set before investing heavily in custom core design.
"In the late 1980s, Apple Computer and VLSI Technology started working with Acorn on newer versions of the Arm core. In 1990, Acorn spun off the design team into a new company named Advanced RISC Machines Ltd."
" Apple has invested about $3 million (roughly 1.5 million pounds) for a 30% interest in the company, dubbed Advanced Risc Machines Ltd. (ARM), but the exact ownership stake of VLSI and Acorn was not disclosed. "
Apple and a bunch of others also have an "Arm architectural licence", which IIRC cannot be taken back.
They could theoretically simply stop licensing ARM to Broadcom, although that might invite some antitrust suits.
What? As far as I know the only connection between Broadcom and Acorn is that they employed Sophie Wilson.
And I'd hardly call Intel focused on one use-case. They seem to have their hands in all sorts of random-as-heck product lines like IoT devices and such, most of which rarely see much long-term support.
I think that effort in particular (about 5 years ago for me) was a warning sign about Intel: they didn't internalize that your smart water meter would very soon have a chip beefy enough to connect to a cellular or other long-range network and just hit a cloud endpoint.
Edit: OK, duopoly, but still... kinda insane that only 2 companies in the world do it.
And almost all competent non-CUDA platforms (e.g. Google TPUs, Tesla's secret in-house hardware) haven't been open-sourced, or even sold to consumers, which further enables the NVIDIA monopoly.
RISC-V is just an ISA.
You will have open-source RISC-V microarchitectures for processes that are a few generations old. You can use the same design for a long time when performance is not so important.
You will not get open-source, optimized, high-performance microarchitectures for the latest process at large volumes. These cost hundreds of millions of dollars to design, and the work is repeated every few years for a new process. Every design is closely optimized for the latest fab technology. And they have patents.
Intel, AMD, Nvidia, ARM, all have to design new microarchitectures every few years.
It's not just doing some VHDL design. It involves research, pathfinding, large-scale design, simulation, verification, and working closely with the fab business. The software alone costs millions, and part of it is redesigned each cycle. Timelines are tight, or the design becomes outdated.
Why wouldn't we get a logic netlist which could perform reasonably well when placed on silicon by people who know what they are doing? (Yeah, lots of handwave.) I'm asking this out of curiosity. Not an expert in the field by any means.
Foundries actually have an interest in helping to optimize an open core for their process as a selling point since it can be reused by multiple customers.
The best ARM core is Apple’s Lightning, which has an IPC rate about 4X that of the A72.
At the same time, Nvidia may be trying to hedge its future as its other competitors (Intel, AMD, even Apple) all have their own CPU and GPU designs. The animosity with Apple has shown the power dynamics and the high stakes.
A thought I've been kicking about, though I fully understand it to be incredibly unlikely: NVIDIA could simply terminate any license Apple has to use ARM at all. The move would arguably be done out of pure spite, "payback" for the GeForce 8600M issues that cost them ~$200 million (and $3 billion in market cap) in part due to Apple pushing for a recall.
Apple also seemingly pushed Nvidia utterly out the door, going as far as to completely block NVIDIA from providing drivers for users to use their products even in "eGPU" external enclosures on newer versions of macOS. Even if only a minority of Apple users ever bought Nvidia cards, being completely banned from an OEM's entire lineup would likely ruffle feathers.
Do you really think NVidia is going to substantially overpay for ARM and then nuke its remaining value by going to war on its licensees?
If Nvidia bought ARM and decided to find some legal way to terminate the contract with Apple out of spite, Apple would have to find another ISA for their "Apple Silicon".
ARM has previously shut down open source software emulation of parts of the ARM ISA.
For a while QEMU could not implement ARMv7, I think, until ARM changed their mind and started permitting it. There was an open-source processor design on opencores.org that got pulled too.
The reasoning was something like "to implement these instructions you must have read the ARM documentation, which is only available under a license which prohibits certain things in exchange for reading it".
Third parties have been able to get iOS running in an emulator for security research, and Android(!) has even been ported to the iOS devices vulnerable to the Boot ROM exploit used by "checkra1n" (though with things like GPU, Wi-Fi, etc. in varying states of completion).
(The silicon implementation of an ISA is referred to as the microarchitecture, btw.)
If you want to look at some historical examples where this wasn't quite the case, look at the old 3dfx Voodoo series. They did add features, but they kept compatibility to the point where even a Voodoo 5 would work with software that only supported the Voodoo 1. (n.b. This is based on my memory of the era; I could be wrong.) They had other business problems, but it meant that adding completely new features and changes in Glide (their API) was more difficult.
That's why third party open shader compilers like ACO could be made:
umm... what? what does that even mean? lol
I could kind of maybe begin to understand your argument from the graphics side, as users mostly interact with it at an API level; however, keep in mind that shader languages work the same way "CPU languages" do. It's all still compiled to assembly, and there's no reason you couldn't make an open instruction set for a GPU the same as for a CPU. This is especially obvious when it comes to compute workloads, as you're probably just writing "regular code".
Now, that said, would it be a good idea? I don't really see the benefit. A barebones GPU ISA would be too stripped back to do anything at all, and one with the specific accelerations needed to be useful will always want to be kept under wraps.
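To make that "regular code" point concrete, here's a minimal sketch: SAXPY written the way GPU compute models (CUDA, OpenCL, compute shaders) express it, as one scalar per-element function. Illustrative Python only; the kernel/launch names are made up, not any real API, and the real thing is lowered by the vendor's compiler to its (usually undocumented) ISA.

    # SAXPY as a per-element "kernel", the shape of a GPU compute program.
    # Names are invented for illustration, not a real GPU API.
    def saxpy_kernel(i, a, x, y, out):
        out[i] = a * x[i] + y[i]        # the per-thread body

    def launch(kernel, n, *args):
        for i in range(n):              # a GPU runs these "threads" in parallel
            kernel(i, *args)

    x, y = [1.0, 2.0, 3.0], [10.0, 20.0, 30.0]
    out = [0.0] * 3
    launch(saxpy_kernel, 3, 2.0, x, y, out)
    print(out)                          # [12.0, 24.0, 36.0]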
I'm reminded of the argument over low-level graphics APIs almost a decade ago. AMD had worked together with DICE to write a new API for their graphics cards called Mantle, while Nvidia was pushing "AZDO" techniques about how to get the best performance out of existing OpenGL 4. Low-level APIs were supposed to be too complicated for graphics programmers for too little benefit. Nvidia's idea was that we just needed to get developers onto the OpenGL happy path and then all the CPU overhead of the API would melt away.
Of course, AMD's idea won, and pretty much every modern graphics API (DX12, Metal, WebGPU) provides low-level abstractions similar to how the hardware actually works. Hell, SPIR-V is already halfway to being a GPU ISA. The reason why OpenGL became such a high-overhead API was specifically because of this idea of "oh no, we can't tell you how the magic works". Actually getting all the performance out of the hardware became harder and harder because you were programming for a device model that was obsolete 10 years ago. Hell, things like explicit multi-GPU were just flat-out impossible. "Here's the tools to be high performance on our hardware" will always beat out "stay on our magic compiler's happy path" any day of the week.
It's technically possible, but the economics isn't there (that was my point). The cost of making a new GPU generally includes writing drivers and shader compilers anyway, so there's not much motivation to bother complying with a standard. It would be different if we did expose them at a lower level (i.e. if CPUs were programmed with a JITted bytecode, we wouldn't see as much focus on the ISA as long as the higher-level semantics were preserved).
By that logic anything emulates its ISA, because that is the definition of an ISA: an ISA is just the public interface of a processor. You are wrong about what x86 processors run internally. Several micro-ops can be fused into a single complex one, which is something that cannot be described with a term from the '60s. Come on, let the RISC corpse rot in peace. It's long overdue.
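For what it's worth, the fusion being described works roughly like this toy decoder sketch. Real x86 cores do both macro-fusion of instruction pairs like cmp+jcc and micro-op fusion; the mnemonics and the fused-op name below are invented for illustration.

    # Toy model of decoder fusion: spot an adjacent cmp + jne pair and emit
    # one fused compare-and-branch micro-op. Illustrative only.
    def decode(instructions):
        uops, i = [], 0
        while i < len(instructions):
            op = instructions[i]
            nxt = instructions[i + 1] if i + 1 < len(instructions) else None
            if op[0] == "cmp" and nxt and nxt[0] == "jne":
                uops.append(("fused_cmp_jne", op[1], op[2], nxt[1]))
                i += 2                  # two ISA instructions -> one micro-op
            else:
                uops.append(op)
                i += 1
        return uops

    print(decode([("cmp", "eax", "ebx"), ("jne", "loop"), ("add", "eax", 1)]))
    # [('fused_cmp_jne', 'eax', 'ebx', 'loop'), ('add', 'eax', 1)]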
90%+ of the core logic area (the stuff that is not I/O, power, memory, or clock distribution) on a GPU is very basic matrix multipliers.
They are, in essence, linear algebra accelerators. Not much space for sophistication there.
All the best possible arithmetic circuits, multipliers, dividers, etc. are public knowledge.
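To be fair to this claim, the "basic" operation meant here is the multiply-accumulate loop at the heart of a matrix multiply; a minimal sketch is below. (The rebuttals that follow are about everything around this loop.)

    # The multiply-accumulate (MAC) loop that GPU ALUs/tensor units implement
    # in hardware. A real GPU tiles this across thousands of lanes, caches,
    # and schedulers, which is where much of the complexity actually lives.
    def matmul(a, b):
        n, k, m = len(a), len(b), len(b[0])
        c = [[0.0] * m for _ in range(n)]
        for i in range(n):
            for j in range(m):
                acc = 0.0
                for p in range(k):
                    acc += a[i][p] * b[p][j]   # one fused multiply-add per step
                c[i][j] = acc
        return c

    print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19.0, 22.0], [43.0, 50.0]]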
That's almost 1000 pages, and one of 16 volumes; it just happens to be the one most relevant for programmers. If this is your idea of "simple," I'd really like to see your idea of a complex chip.
Just because a big part of the chip is the shading units doesn't mean it's simple or that there's no space for sophistication. Have you even been following the recent advancements in GPUs?
There is a lot of space for absolutely everything to improve, especially now that ray tracing is a possibility, and it uses the GPU in a very different way compared to old rasterization. Expect to see a whole lot of new instructions in the coming years.
Combine these two statements and most GPUs would have roughly identical performance characteristics (performance/watt, performance/mm², etc.).
And yet, you see that both AMD and Nvidia GPUs (but especially the latter) have seen massive changes in architecture and performance.
As for the 90% number itself: look at any modern GPU die shot and you'll see that 40% is dedicated just to moving data in and out of the chip. Memory controllers, L2 caches, raster functions, geometry handling, crossbars, ...
And within the remaining 60%, there are large amounts of caches, texture units, instruction decoders etc.
The pure math portions, the ALUs, are but a small part of the whole thing.
I don't know enough about the very low-level details of CPUs and GPUs to judge which ones are more complex, but given your claim that there's no space for sophistication, I can at least confidently say that I know much more than you.
Funny you say that. I've never heard of a CPU architect coming to the GPU world and saying "Gosh, how simple this is!".
I invite you to look at a GPU ISA and see for yourself, and that is only the visible programming interface.
So, what do you think is the most complex thing on an Nvidia GPU?
There might be faster algorithms for super-long integers, or minute implementation differences that add or subtract a few kilogates.
It is driven largely by academics who lack a pragmatic drive in areas of time-to-market, and it is being explored by companies for profit motives only. NXP, NVIDIA, Western Digital see it as a way to cut costs due to Arm license fees.
RISC-V smells like Transmeta. I lived through that hype machine.
I think one of the major points for RISC-V was to avoid the possibility of patent encumbrance of the ISA so that it can be freely used for educational purposes. My computer architecture courses 5-6 years ago used MIPS I heavily. MIPS was not open at the time, but any patents for the MIPS-I ISA had long since expired.
POWER is actually open, but it is tremendously more complicated. RISC-V by comparison feels like it borrows heavily from the early MIPS ISAs, just with a relaxation of the fixed-size instructions, no architectural delay slots, and a commitment to an extensible ISA (MIPS had the coprocessor interface, but I digress).
The following is my own experience: while obviously high-performance CPU cores are the product of intelligent multi-person teams and many resources, I believe RISC-V is simple enough that a college student or two could implement a compliant RV32I core in an HDL course if they knew anything about computer architecture. It wouldn't be a peak-performance design by any measure (if it were, they should be hired by a CPU company), but I think that's actually a point of RISC-V as an educational platform AND a platform for production CPU cores.
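As a taste of that simplicity, here's a sketch, in Python rather than an HDL and covering only two RV32I instructions, of the decode/execute step such a student core would implement. The instruction encodings are real RV32I; everything else is deliberately simplified.

    # Toy RV32I decode/execute for just ADDI and ADD, to show how regular
    # the base encoding is. A student core would add fetch, a register
    # file, memory, and the rest of the ~40 base instructions.
    MASK32 = 0xFFFFFFFF

    def sext(value, bits):
        # sign-extend `value` from `bits` bits
        sign = 1 << (bits - 1)
        return (value & (sign - 1)) - (value & sign)

    def step(regs, insn):
        opcode = insn & 0x7F
        rd  = (insn >> 7) & 0x1F
        rs1 = (insn >> 15) & 0x1F
        if opcode == 0x13:                  # OP-IMM (funct3=000 -> ADDI)
            regs[rd] = (regs[rs1] + sext(insn >> 20, 12)) & MASK32
        elif opcode == 0x33:                # OP (funct3=funct7=0 -> ADD)
            rs2 = (insn >> 20) & 0x1F
            regs[rd] = (regs[rs1] + regs[rs2]) & MASK32
        regs[0] = 0                         # x0 is hardwired to zero

    regs = [0] * 32
    step(regs, 0x00500093)                  # addi x1, x0, 5
    step(regs, 0x00700113)                  # addi x2, x0, 7
    step(regs, 0x002081B3)                  # add  x3, x1, x2
    print(regs[1], regs[2], regs[3])        # 5 7 12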
Unsurprising, given that Wave Computing shut down the project almost a year ago and subsequently declared bankruptcy.
POWER has technically only been "open" for a little under a year. OpenPOWER was always a thing, but this used to mean that companies could participate in the ISA development process and then pay license fees to build a chip. This changed last year when POWER went royalty-free (like RISC-V).
The real definition of "open" is whether you can answer the following questions in the negative:
- Do I need to pay someone for a license to implement the ISA?
- Is the ISA patent-encumbered?
The advantage of those two (and of ARM) is that there are actual implementations with decades of development behind them. Yes, some technical debt, but also many painfully-learned good decisions.
RISC-V, which I'm really excited about, you can think of as a PRD (a customer's-perspective product requirements document). That's what an ISA is. Each team builds to meet it using their own implementation, none of which is more than a few years old yet. But the teams incorporate decades of learning, and have largely a blank sheet to start with. I think it will be great... but it isn't yet.
Neither Softbank nor Nvidia care what I think, but I would feel better if the buyer of ARM wasn't a company with an existing chip business.
And interest rates can change quickly. It’s a value trap to overweight current interest rates in equity valuations.
It's a "knock it out the ballpark" victory by Softbank's standards.
I'm not sure what population of companies you're taking your average from, but the S&P 500 p/e is a lot higher than that.
ARM + nVidia can be a powerhouse combo, especially in the cloud/server market.
Maybe that's why Softbank wants to raise those licence costs. But with alternatives now about, it's not as easy an equation as just raising costs: for many controllers RISC-V has become more than fine, and some HD manufacturers are already transitioning for a few cents of extra savings today, let alone if the ARM licence cost increased.
Sure, ARM is here, there and everywhere, but it got that way as much on the cheap cost of using that IP as on the array of IP packaged up. So yes, they make money, but if you break down how they make money, ARM is regular/reliable, hence fewer spikes in income either way.
Logically, for ARM to increase revenue, it would need to branch out into other markets. Nvidia honestly would be a good fit for that, rather than shafting licensees on costs, which may well still see an increase. But for the returns Softbank wants, ARM could not bear the level of increases they would need, and Softbank knows this; hence a sale or IPO is best for Softbank and also for ARM.
Most ARM chips made aren’t pushing ultra-expensive 7nm processes - and you are probably still correct.
That’s a good take on ARM’s margin per chip.
Sure, if they wanted to, they could change the licensing model to bleed their customers dry and increase the share price, but that would only work temporarily, as everyone would then accelerate the move to RISC-V.
ARM (the company, not the architecture) has peaked and will probably just stagnate for the foreseeable future.
MS and Google don't make chips, and MS doesn't even license any ARM tech (I don't think; they just make software and buy CPUs). They're a bad fit. Google is dipping their toes in it, but I don't think they're doing fully custom silicon. Most companies buy the rights and layouts, and tweak those layouts.
NVidia makes chips, but they don't make many mobile chips, and virtually no chips for phones compared to Samsung and Qualcomm. They're a better fit.
MS designed and produced custom silicon for HoloLens 2 (the Holographic Processing Unit 2.0). The Microsoft SQ1 in the Surface Pro X is probably also produced by them, though the design is a collaboration with Qualcomm.
I wouldn't be surprised if they make their own networking chips.
Nvidia might be tempted to block ARM customers if they had competing designs to push them to, but they don't. So they would be sacrificing revenue without any other sales to make up for it. It doesn't seem like a good idea to me.
This would be a different story if a company like Apple were buying ARM. They could definitely benefit by gouging Samsung and Google on license fees but they've got to balance that with the chance of getting fined for anti-competitive practices.
But when those companies can get access to those patents at more accountant-friendly costs, without balancing the risk/asset/management aspects of owning part of that slice, then the motivation and corporate culture that would be needed to drive such an investment are just not there.
It actually makes more sense for some governments to buy ARM than for many companies, so at least we can be thankful (so far) that that avenue has not transpired. It might even be best overall if ARM were partially IPO'd. If Softbank sold a 50% stake via IPO, I dare say that way of selling the company would yield the best and quickest return, the best of both worlds, and I'd still not rule that option out. Indeed, mooted interest from many large companies sniffing around as a prelude to an IPO would not be unheard of.
I too share your aversion towards Oracle's business practices.
Meanwhile, Apple's home-grown chip initiatives are mature; I'm not sure they need much from ARM.
For much of ARM's recent volume use in MCUs and slightly larger embedded devices, there is a credible threat from first-party usage of RISC-V (see WD, Nvidia).
Access to the ISA itself can be of high value; see x86 and s390x for prime examples. Although I don't really see how ARM could pull that off outside of being an acquisition like this, and making the licensing process onerous enough that people move to buying chips from nvidia instead of doing their own designs. In such a scenario, RISC-V could become a credible threat in phones too, and the server thing pundits have kept pushing for the past 15 years never happens.
So there is a lot of value here, but it's pretty hard to grow as a pure licensing play as ARM has been since there are many risks and opportunities for price compression.
ARM have also made some HPC plays, e.g. buying Allinea (makers of probably the best supercomputer debugging tool).
ARM would become owned by an American company. It has to comply with American restrictions on dealing with China but that would make it essentially no different from any other American company, either legally or in fact.
So in the current context this would be bound to raise eyebrows in Beijing, and China could only react by doubling-down on developing domestic alternatives.
This is indeed happening.
And here's where this is related: I wonder if Apple will have to relent (assuming the purchase goes through) and do business with Nvidia, since Apple licenses some technology from ARM. Or have I got it wrong here? Does Apple not rely on ARM?
Sorry for rambling! Just thought it was an interesting tidbit
My personal experience: I had a 2011 MacBook Pro with an Nvidia card. It started to fail randomly. Apple identified that certain Nvidia GPUs were failing and created a test the "Geniuses" would run. My MacBook always passed, even though it kept throwing noise on the screen anywhere other than the Apple store. Eventually it finally failed their test: four days after the (extended) warranty period. They refused to replace it. Bitterly, the best option for me was to pay the $800 for a new board.
Shortly after the warranty ended, the Nvidia card failed for good. On the bright side, it was a dual-video system, and I was able to keep it running a few years longer through complicated booting rituals that convinced it to boot with the lower power Intel graphics (although I lost the ability to recover from sleep).
I'm not sure why we have such different experiences with Apple customer service. I was also in the Bay Area, but not Palo Alto. It's possible that with a more aggressive approach I could have gotten them to fix it. Instead, I accepted their verdict, and now spread the word through forums like this that their warranties are not to be trusted.
I also miraculously saved the machine with a booting ritual/hack that bypasses the discrete GPU.
Also a 2011 MacBook.
Still, as someone who used to work as a “genius”, it would have been easy to cover your MLB (main logic board) repair under warranty for a million reasons, so the technician failed you there. And then to not do anything for you 4 days outside of eligibility, especially given your experience... that’s fucked. Not all “Genius Bar” people are like that. If something similar happens in the future, call AppleCare and ask to speak to “tier 2”. They sometimes have leeway to issue coverage in extenuating circumstances.
Realistically, we need an open, unowned architecture like RISC-V, because whoever buys ARM will cause concern; given how hypercompetitive mobile is, the incentive to abuse the ownership is high.
We really want to avoid another Oracle/Java scenario as well.
Not even close. Qualcomm has about a third of the market: https://cntechpost.com/2020/03/24/samsung-surpasses-apple-to...
Of course the U.S. government wouldn't have problems with this, unlike the Broadcom-Qualcomm deal; on the contrary, this will put the American semiconductor industry in a more dominant position.
RISC-V is the only hope for the rest of the world now.
At this point, AMD, Intel, and Apple are all looking at fully integrated APU/CPU/GPU stacks. That leaves NV out in the cold if they don't do something.
* better single threaded performance than ARM
* the same instruction set as developers' workstations
* the willingness to add Microsoft/Sony's hardware IP to the console chips
2) This might soon change if Microsoft and Apple succeed in their quest to push ARM machines to the consumer market. I can imagine that an ARM MacBook is all that's needed to start an avalanche of adoption.
3) Willingness can change easily :)
The stock market is also there for ARM stock. There is no need to trade for Nvidia shares at an undesirable price if there is no immediate cash in it.
Nvidia could potentially raise money from an FPO or similar instruments and leverage the advantageous stock price they have, i.e., issue fewer new shares than they would have to at a lower price. But the only way SoftBank will consider a deal is if they get cash.
On a typical single day, $4B worth of NVIDIA stock is traded.
But I don't have a clue if it's a good idea or not.
When they made those decisions, ARM wasn’t available for buying; SoftBank was on top of the world and didn’t have to sell anything. A year back, do you think SoftBank would have considered a proposal so close to their purchase price?
ARM provides them a stable, consistent, and regular revenue stream.
That investment did not pay off, I guess...
Are you sure about that? I don't know this story about Metal, but I believe Apple has refused to certify an nVidia driver on Mac since like 2018 (I think), effectively cutting Apple users off from using nVidia products on Apple platforms.
Apple needs/wants to be in full control of their driver stack. They don't necessarily want to write it themselves, but they want tighter control than Nvidia is willing to give them. AMD on the other hand is much happier to offload responsibility and accept help.
I don't think this set of circumstances would come up for an ARM processor.
Or maybe not: 
But when it comes to ARM chips, Apple designs their own ARM-compliant chips, so NVIDIA owning ARM would do nothing in that aspect imo. Especially since iPhone/iPad chips have been ARM-based for the past decade, so I don't think that Apple really has a good option or reason to switch away from ARM at this point.
That's the oft-mentioned and certainly plausible reason, but it's not a matter of fact that it's THE reason, is it?
But yeah, you are correct, there is no factual officially stated reason, and I wouldn't expect Apple to publicly bring it out ever either.
Nothing personal, just business.
So Apple can save the cost and get higher profit margins by not having powerful GPUs in their products.
They might be against nVidia GPUs. They won't throw their investment in Apple Silicon out just because of that.
Already, TSMC is the manufacturer for the cores, so Apple really doesn't have too much of a dependency on ARM in the short term.
I think what we'll see over the next 3-5 years is a divergence from the ARM-licensed design to an in-house-designed architecture: "Apple Silicon II"
Start here - https://en.wikipedia.org/wiki/Fairchild_Semiconductor
This might also be of geographical value - https://en.wikipedia.org/wiki/San_Jose,_California
2) Due to (1), I believe that this acquisition might promote locked-down devices. I don't want this situation to occur.
3) This acquisition may have some effect on Apple, since Apple is transitioning to ARM hardware. If so, Apple may even transition to some other hardware architecture like RISC-V or OpenPOWER once again.
As far as I understand this is all rumour still.
Then I realized it was ARM, ‘wow’