Is there something inherently complicated about adding a SATA/M.2 port to a board like this?
The RaspberryPi is also "disk-less", which to me is one of the major limitations.
It's a super interesting little board, and I love that it's a RISC-V, that could really help getting the CPU in the hands of people. I just don't know enough about these things to understand why there are no storage connectors (other than an SD card slot).
On the systems I can control, I do all work "disk-less". I strongly prefer it. I like to keep programs and data segregated. The basic setup on NetBSD is as follows, with endless variations possible therefrom. The kernel(s) and userland(s), along with bootloader(s), are on external media, like an SD card or USB stick, marked read-only. There's a RAM-disk in the kernel with a multi-call binary-based userland (custom-made using BusyBox or crunchgen). After boot, a full userland from external media or the network is mounted on a tmpfs-based or mfs-based overlay and then we chroot into the full userland. From there, the work space is all tmpfs (RAM).^1

I can experiment to my heart's content without jeopardising the ability to reboot into a clean slate. The "user experience" is also much faster than with disk-based. Any new "data" that is to be retained for future use across reboots is periodically saved to some USB- or Ethernet-connected storage media. I do not use "cloud" for personal data. Sorry. This forces me to think carefully about what needs to be saved and what is just temporary. It helps prevent the accumulation of cruft.

The number one benefit of this for me though is that I can recover from a crash instantly and without an internet connection. No dependencies on any corporation.
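For the curious, the post-boot sequence is roughly this (a sketch only; device names, paths and the tmpfs-vs-mfs choice are illustrative, not my exact setup):

    # running from the kernel's RAM-disk userland at this point
    mount -t tmpfs tmpfs /altroot            # RAM-backed overlay for the full userland
    mount -r /dev/sd0e /media                # read-only external media (USB stick)
    tar -xpf /media/base.tgz -C /altroot     # unpack the full userland into RAM
    mount -t tmpfs tmpfs /altroot/tmp        # work space is RAM too
    chroot /altroot /bin/sh                  # continue working in the full userland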
This BeagleV looks like it would work well for me as it has Ethernet and USB ports.
1. With NetBSD, I can run out of memory with no major problems. With Linux, this has not been the case. I have to be more careful.
I disable swap. If I run out of RAM, thrashing may occur but it is not fatal. Some process that needs to write to "disk" might fail, but the system does not. I just delete some file(s) to free up the needed RAM, restart the process and continue working. I don't think NetBSD has anything like "OOM killer".
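In practice, disabling swap just means never configuring any (a sketch; the device name is illustrative):

    swapctl -l            # verify no swap devices are configured
    # /etc/fstab: simply omit the usual "sw" entry, i.e. do NOT add:
    #   /dev/wd0b  none  swap  sw  0 0
    swapctl -d /dev/wd0b  # or remove one at runtime if an installer added it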
Note that one drawback with "disk-less" via read-only SD card or USB stick is how to have a good source of entropy at boot time.
I actually came up with this myself just playing around with NetBSD. You will probably not find anyone advocating disabling swap. I do not run X11 anymore. I stay in VGA textmode. Doubt anyone would want to do exactly what I do.
NetBSD users tend toward DIY and each has their own preferences. If you study how NetBSD's install media are created that will teach you almost everything you need to know. Happy to walk you through it though if you want to try NetBSD.
Chromebooks are claimed to be 100% safe from certain types of attacks. The developers came up with this silly "whitewash" gimmick. Their motivation for a disk-less-like system is to force users to store personal data in the cloud. Ugh. Well, the disk-less systems I have been creating for recreational use long before Chromebooks existed are just as safe. Probably safer, because I do not use a graphics layer or a web browser with a Javascript interpreter.
>I actually came up with this myself just playing around with NetBSD. You will probably not find anyone advocating disabling swap.
Actually, I recently saw some discussion about exactly that. I think many people would be glad to do it if they knew how. It is not easy when you are relatively new to the area. It is not too complicated, but there is a lot of information and it is not obvious what is important and what is not, so you read it all and easily get overwhelmed.
>I do not run X11 anymore. I stay in VGA textmode. Doubt anyone would want to do exactly what I do.
I do exactly what you do, simply because X11 does not work on the Pi with the TV I have, and I have nothing else since my MBPro simply died. (If you wish details on the "how", see here: https://news.ycombinator.com/item?id=25778247 )
>If you study how NetBSD's install media are created that will teach you almost everything you need to know. Happy to walk you through it though if you want to try NetBSD.
Thank you very much. This advice is already precious, because it helps to orient yourself in tons of info, and now I know where to start digging. I would love to try once I have the equipment and time. I really wish to encourage you, though, to write this process down; I am sure there are people who are desperately looking for this setup and how to achieve it properly, bothered by many questions. This guide could also be a good practical introduction to NetBSD, by the way.
>Chromebooks are claimed to be 100% safe from certain types of attacks. The developers came up with this silly "whitewash" gimmick. Their motivation for a disk-less-like system is to force users to store personal data in the cloud. Ugh.
It's a shame really, and done for the wrong purpose. I never considered Chromebooks seriously for that reason.
>Well, the disk-less systems I have been creating for recreational use long before Chromebooks existed are just as safe. Probably safer because I do not use a graphics layer or a web browser with Javascript interpreter.
What I really like about it is the ability of fast recovery, which gives more freedom for experiments. I love it.
Note I did all the experiments many years ago, on i386, and using NetBSD's venerable bootloader. For the Pi, some things will differ obviously.
Not only does this type of setup give freedom for experiments (which NetBSD really does in general) but it makes experiments easier. You can put multiple different kernels on the USB stick and reboot into each of them to test the pros and cons of different release versions and/or configurations. Or you might boot one computer with kernel A, pull out the stick, insert it into another computer and boot kernel B, and run them simultaneously. No HDDs needed.
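With the NetBSD bootloader, the multiple-kernels part is just a /boot.cfg menu on the stick (entries illustrative):

    menu=Boot release kernel:boot netbsd
    menu=Boot -current test kernel:boot netbsd-current
    menu=Boot single user:boot netbsd -s
    timeout=10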
I might do a writeup; some users have done writeups on "disk-less" in the distant past. I have some simplification tricks no one has ever written about. However NetBSD has such great documentation relative to almost every other project and the developers tend to be "quietly competent" in the best way. It was never a culture of "HOW-TO's" as you will find in Linux. Studying the source and following source-changes and other mailing lists is better than any writeup, IMO.
It is like the old saw about teaching a man to fish. It is worth learning. Things do not change in NetBSD so fast or dramatically that what you have learnt will be later considered "obsolete".
Sensible spec. The price isn't outlandish for what is basically a desktop with a relatively exotic CPU. 16GB RAM is the minimum for development in my view, even though you can do a lot in 8GB.
I'm sure people would start comparing it price-wise to an i7 or something made at significant economies of scale. I think that is an unfair comparison due to the exotic CPU. Exotic in the sense that these aren't commodity CPUs.
16GB DDR4 2400 DIMMs run about $67 retail, so about 1/10 of the total cost of that board. It's a little apples and oranges, but since it seems like they just plunked down the same 8 chips you'd find on a DIMM right onto the board, maybe not that much.
I note that it has a PCIe x8 slot, but they don't have drivers for a video card (or any card) yet so it's kinda useless.
People have been running Radeon graphics cards with Linux desktop and the open source driver on the predecessor HiFive Unleashed board for literally three years already.
You would have to integrate a storage controller into your design and verify that it works. It's not insurmountable, but it's engineering time that has to go toward something that a lot of people aren't ever going to use, along with an increase in BOM cost.
My pet peeve about all these small single board computers is that none of them have multiple ethernet ports, which severely limits their usefulness as networking hardware. They'd otherwise be very well suited to being various kinds of packet routing appliances.
You can sometimes hang a USB Ethernet dongle off them, but performance on those tends to be somewhat limited.
Check out solidrun, they have a variety of boards with multiple Ethernet ports.
I used to wonder why embedded boards didn't have multiple Ethernet ports and why it's not common to use Ethernet to connect to peripherals instead of "old interfaces". Until I tried it for a project. It turns out Ethernet uses roughly 0.5 to 2W per port depending on speed (that's 1-4W per connection).
Perhaps what you want is a range of boards with many combinations of features. Some with 4xethernet, some with 2xethernet, some with PCIE and 1xethernet, some with M.2 and PCIE, some with M.2 and 2xethernet, etc. to fill out that big matrix of combinations. But is the market big enough for such a huge range to be economical?
Some of them have PCIE which you could connect a network card to. That seems like a more practical way to allow flexibility than having a lot of special purpose boards.
The support for PCIe cards is very limited on that board. I've spent too much time trying to get a disk controller to work, and I'm not alone. I ended up buying old server boards with low power xeon processors to accomplish my goal.
No, not too complicated, though connectors of all kinds take lots of space (both physical space and routing). Not to mention the amount of power that SATA drives might take.
You can certainly use the PCIe support on the RPi 4 CM with a SATA controller. Either with a PCIe SATA card on the IO board you linked, or by building a custom board.
Why would you need SATA when you have SD, also USB3 and Gigabit Ethernet? I have no problem booting from the SD then accessing data on my SATA drives using a USB-attached controller, also plan to get a NAS.
Sure, it would be great to have connectors for everything and support for every cool standard, but these guys have to give up everything non-essential to keep it small, cheap and possible to engineer by a small team.
I wish folks would stop conflating boot media and root media. There's no reason your SD card has to consist of anything more than u-boot.
SD is fantastically simple, so the boot rom can get at a bootloader without much effort (generally the bootrom just looks at a memory offset on the mmc device). Once you start speaking newer, faster protocols, this simplicity is lost. You're not likely to find a bootrom that implements all the bits needed to get u-boot from a SATA device.
In a perfect world, once these are no longer developer devices, the mmc would be replaced with some spi flash (or even an emmc) with just u-boot.
On 90% of embedded dev systems, you're better off thinking of the SD card as a "bios chip" than a hard drive. The fact that you can also use it as a block device to store a rw filesystem is almost incidental, and should probably be avoided.
SD cards speak SPI. In fact SPI is all SiFive supported on their first 2018 HiFive Unleashed. As you say, it's good enough to bootstrap. That old board has gigE. Once you're up enough to TFTP you're away. I've never actually bothered to set that up on mine -- I boot a full kernel with NFS support and then switch to that.
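For anyone curious, the netboot step is only a handful of commands once the NIC is up (a sketch assuming a u-boot-style loader; addresses and paths are illustrative, and a device tree is assumed to already be at ${fdt_addr_r}):

    # at the u-boot prompt: fetch the kernel over TFTP, use an NFS root
    setenv serverip 192.168.1.10
    setenv bootargs root=/dev/nfs nfsroot=192.168.1.10:/export/rv64 rw ip=dhcp
    tftpboot ${kernel_addr_r} Image
    booti ${kernel_addr_r} - ${fdt_addr_r}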
The existence of the Raspberry Pi as it is, the Raspberry Pi 4 in particular, is already a huge leap forward for humanity. The next leap is going to be the same kind of board but with no connectors besides an increasing number of full Thunderbolt 4 ports, letting you connect anything you can imagine (including as many GPIO connectors as you need). There is no need for a zoo of connectors like M.2, U.2, SATA, HDMI etc. when we can carry PCIe, DisplayPort and USB over a unified standard wire.
By the way, switching to a single super-fast (for the time) connector for everything (including internal hard drives) was anticipated even before the invention of SATA and USB2: FireWire (IEEE 1394) was meant for that.
There's value in having GPIO pins not behind any bus or controller at all. Don't Thunderbolt controllers need firmware uploaded and such before they start working? How complex is Thunderbolt device and bus enumeration?
Perhaps you're right. In fact I was going to write about connectors of 3 kinds: GPIO, dedicated fan connectors and Thunderbolt-enabled USB-C, but then I came to the conclusion that you can also put a GPIO controller or a fan on Thunderbolt, and Occam's razor suggests we should leave only Thunderbolt - because we can. Apparently Apple thinks the same, as it left only USB-C connectors on its recent laptops.
CFast is CompactFlash, but with a SATA-based interface (up to ~600 MByte/sec) instead of IDE. It was designed because video cameras were making too much data for CF to handle. The downside is its size: 43x36x5 mm vs 15x11x1 mm for microSD.
Also, CFast cards aren't as ubiquitous as microSD. I can go to Target or Walmart and get a wide "selection" of microSD, but I'd be hard pressed to find a CFast card. So the chances of an SBC using CFast over eMMC or microSD are low.
You can get an SD card with 0.5TB capacity for $30 USD; what storage needs do you foresee? Also, these tend to be used where weight is an issue, so why would you want to lug around a huge disk?
Using f2fs, setting up the software on my SBCs not to write data needlessly to the SD card all the time (some programs are really bad at this), and having a very high quality power supply and cabling was enough to make my boards work for years on a single SD card. Some are at 4 years currently and still work fine. I've been using Sandisk only for a few years now, because it's the only manufacturer that both allows me to verify online whether the card I bought is genuine and provides A1-rated cards. Experience with about 20 boards.
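Concretely, the "not writing needlessly" part is mostly mount options plus keeping chatty paths in RAM. A sketch (assuming Linux; devices, mount points and sizes are illustrative, not my exact config):

    # /etc/fstab
    /dev/mmcblk0p2  /         f2fs   noatime                     0  0
    tmpfs           /var/log  tmpfs  defaults,noatime,size=64m   0  0
    tmpfs           /tmp      tmpfs  defaults,noatime            0  0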
Now that's interesting. I have a few dead Samsung uSD cards that I'd check for genuineness, if possible. I suspect they might have been cheap clones.
I guess too little too late. None of the 4 dead Samsung cards I could find have the V mark (too old for that), required for validation, and they probably started adding support for regular EVO cards just a year ago or so.
I have a special SD card made up that has all of the correct boot configs set and a small script to update everything and set up the Pi for PXE boot. After everything is configured, I take out the SD card and let it just pull boot images over the network. My main home server serves everything up over tftp. The only downside is that you can't get this to work over wifi. The wifi Pis get read only SD cards that are all configured the same so it's easy to reimage a new one if the card dies.
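If anyone wants to replicate this, the server side can be as small as a dnsmasq proxy-DHCP plus TFTP setup (a sketch; addresses and paths are illustrative, not my exact config):

    # /etc/dnsmasq.conf on the boot server
    dhcp-range=192.168.1.0,proxy         # leave leases to the existing router
    enable-tftp
    tftp-root=/srv/tftpboot              # one subdirectory per Pi serial number
    pxe-service=0,"Raspberry Pi Boot"    # magic string the Pi firmware looks for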
This seemed to be such a frequent problem that I gave up on ARM SBCs (despite how cool they are).
Bought a cheap NUC with a disk and a BIOS and it is working like a champ - boots every time. I probably paid a little more but it's enough for my needs - a simple login and (occasional) DLNA server.
I bought a cheap used ultra small form factor PC with an i5, 8GB of RAM, all for less than 200 USD. It even included a licensed copy of Windows 7 Pro, which I don't use.
It costs more upfront, but it's worth it because I don't have to spend my time investigating why it's not working.
Also, when comparing costs you should take into account things like the enclosure, power supply, SD card(s), etc.
I'd say I probably paid around $250 or so for the NUC and a SODIMM and I had a SATA boot disk already. I had previously owned ODROID, libre computer (le potato), and Pi(s). These were probably in the ballpark of $80-$120 each, when including all the necessary equipment.
I suspect they don't necessarily mean spinning platters. A SATA port also allows the connection of an SSD which is almost always a huge boost in usability over SD slots. The fact that you can add a large platter for a small NAS solution is just a nice option.
I'd far rather see a 2x PCIe port that could be used for whatever you wanted.
I think it's just that most people want to use the Raspberry Pi for toy projects. They're trying to keep the cost down, and to keep the board small. If you want something like micro ATX ARM motherboard, that's a different use case. The large majority of Raspberry Pi users are fine with USB 3.0 ports for I/O.
I would love to see a Micro-ATX or Mini-ITX board with an ARM or RISC-V CPU, with PCIe ports and storage connectors. There are ARM boards, but they're a little pricey.
SD cards are too slow to be of practical use as a general purpose disk. They're fine when you have a very predictable workload that does not involve a lot of disk IO.
SD in itself isn't technically unreliable. Lots of smartphones do just fine with eMMC, which is the same thing as a microSD card except in a nonstandard IC chip form, and thus can't carry SD branding.
Practically they end up being really different. SD cards end up with bottom-of-the-barrel flash chips (even from the 'good' brands), whereas eMMC uses flash good enough that you can count on it, since it's non-replaceable.
You want "industrial" cards from a vendor tailored to the space. I wouldn't trust even those new industrial sandisk cards.
Yeah as someone who has deployed SD-based and eMMC-based systems in industrial spaces, SD cards are hot garbage compared to eMMC and I would strongly recommend not using them in anything that needs to be reliable.
Compiling on SD is so slow. I once compiled an rpi kernel on the rpi just for fun and it took forever. Cross-compiling is the only way to go for now, unless you only have a small project.
Cheap, slow (as an operating system medium at least), and unreliable. They're the modern day floppy disk. The one upside is that the space is cheap and quite plentiful.
I'm certainly no expert, but I've never been able to run a spinning disk HDD off of a raspberry pi without a powered USB hub. I imagine the thermals and form factor would be hurt pretty badly by including a plausible power supply.
RPI 4 -- the original revision. Whenever I ran it off the "official" power supply, I got disk errors. As soon as I moved over to a powered USB hub, I had no other issues. This was also with a keyboard and mouse plugged in, so maybe that had something to do with it.
I was also trying to run it as a NAS, so it had a decent amount of IO.
Maybe it could have been fixed with a better power supply, but switching over to the powered hub was easier to figure out.
Honestly even with a high quality supply I'd tend to prefer if the drive were externally powered. I have flashbacks to the first generation Pis that would brownout and reboot if you plugged in a mouse or keyboard while it was running.
If I had to guess (and this is just a guess, so take it with a grain of salt), I'd bet most storage devices these days are just thin wrappers around PCI-Express lanes; most of the hardware in the silicon running that stuff is on-die for AMD/Intel CPUs, and you likely run into cost/power/board-space limitations in a device like this.
I've been unable to use Beagle boards in the past as they ship with an old kernel and uboot without the sources to update or config them (this was specifically with the black variant). It probably had something to do with vendor NDAs with chipsets or something but it made them entirely unusable to me and more expensive than competitors by almost 2x to boot.
I would love a RISC-V board to play with that is a bit more stable and about the size of a Raspberry Pi within a reasonable price range. The SiFive development boards are pretty pricey, definitely showcase hardware (look at all these peripherals or this is basically a desktop computer!).
I'm hoping with the explicit call out of open hardware and open software that this board won't have the same issues as the Beaglebone Black...
The ubuntu on bbb team keeps up to date with the latest LTS kernels. There is 5.4 support and 5.10 support is in progress.
The price wasn't so bad if you're in it for the feature set (mainly the IO features). Using the embedded PRU controllers can replace the Arduinos one typically connects to their RPi.
However, BBB is very old now. A single core cortex-A8 is abysmally slow.
Getting the GPU working was always a pain. But I always used them headless so it wasn't an issue for me.
Robert's image builder is pretty easy to use to build an image for Debian Buster/Stretch or Ubuntu Bionic Beaver; he has various configurations that cover IoT, console-only/headless, GUI and a few other combos. It's pretty easy to create your own config with the kernel and packages that you want.
The images can be used for flashing to the eMMC via an SD card (or via USB).
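For reference, the flashing flow is just the usual write-image-then-boot pattern (a sketch; the image and device names are illustrative - triple-check the dd target):

    xzcat bbb-eMMC-flasher.img.xz | sudo dd of=/dev/sdX bs=4M status=progress
    sync
    # boot the BBB from this SD card: the flasher copies itself to eMMC,
    # then powers the board off when finished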
I've found images built this way to be very up to date and absolutely rock solid, thanks to Robert's curation.
I've never had this problem with Beagle Boards and sources. Sure the kernel or u-boot that ship with them might be slightly older but sources have always been available.
And TI are quite decent at contributing to the upstream kernel and u-boot trees. Generally only a few months after a new TI SoC is announced there's enough support in the upstream u-boot and kernel to boot the board and do some useful things. Generally by the time silicon is buyable by mere mortals mainline is in pretty decent shape.
The biggest reason I used them in production devices (and still use them at home) was the eMMC. Which made it well worth the price, even with the slow processor.
We had to work with the Balena.io (formerly resin.io) team to get full hardware support on the BeagleBone Green Wireless (wifi drivers were the biggest hangup, iirc) a few years ago, but they were incredibly responsive and have done a fantastic job maintaining a stable distro for these boards.
If you want a straightforward out-of-the-box experience, I highly recommend Balena.io.
The board I have has only 4GB of MMC on board which wasn't big enough for the later versions of the OS, but you could boot off of the SD instead if you held down a button on the board while powering it up.
There was a way to tweak it so you didn't have to hold the button down, but it was kinda involved IIRC so I never got around to it.
> Although the first hardware run will be entirely $140 / 8GiB systems, lower-cost variants with less RAM are expected in following releases.
> The initial pilot run of BeagleV will use the Vision DSP hardware as a graphics processor, allowing a full graphical desktop environment under Fedora. Following hardware runs will include an unspecified model of Imagine GPU as well.
Sounds like a direct competitor to the Raspberry Pi. I don't know if the Imagine GPU planned for the next iteration is playing catch-up or leapfrog. The Arstechnica article links to SiFive creates global network of RISC-V startups [1] which I think demonstrates that SiFive is strategically leveraging or responding to the geopolitics surrounding Chinese technology.
Imagination GPU :( Notorious for being hard to support in open source. I'm not even sure there was a single free driver for those.
That likely means those devices are going to be stuck on an outdated kernel, unless Imagination steps in and provides ongoing binary support for newer kernels for their GPUs like x86 GPU manufacturers do. However, this being RISC-V with 2 existing devices total, I don't count on it.
Except for cost... which has been a problem for the BeagleBoard line of SBCs since the beginning. They actually predated the original Raspberry Pi by a couple of years, but when the Pi came in at ~25% of the cost, it caught up with and overtook the BeagleBoard in popularity fast. The BeagleV looks interesting from an early-adopter standpoint, but the hobbyist market will probably standardize around whatever decent RISC-V board comes in at sub-$50 first.
To me, they seem to serve different markets. The various BeagleBoards have more industrial specs like a wider operating temperature range, on-board EMMC, etc. Also, the pair of PRU's make them useful for things where more precise timing is important.
Regarding the BBB/BBG, in the last 3-5 years the RPis have gotten significantly faster (RPi3 & 4) and gone 64-bit whereas the BBB & BBG haven't changed much (aside from a bit more eMMC and a very minor CPU bump) since they were launched. These days the 1GHZ 32-bit AM3358 (BBB RevC) is comparatively much slower and with only 512MB RAM, that's a lot less than a stock RPi 4.
Having said that, the BBBs are a great device! They're rock solid and have far better I/O options than the RPi: 4 UARTS, multiple I2C, SPI & CAN buses, EHRPWM, a ton of GPIO, 2x PRU processors, LCD driver, both USB and USB Gadget, oh and of course, the onboard eMMC is great compared to booting from an SD.
> SiFive is strategically leveraging or responding to the geopolitics surrounding Chinese technology.
interesting subtext.
We should all be debating HW with respect to the fact that you can't name a single device (aside from weapons) that does not contain a single Chinese-manufactured component...
Every phone or machine is almost 100% Chinese-built.
"designed by apple in cupertino california" (but made with slave labor from congo, china and other countries)
And we already know about all the backdoors both China and the US put in...
FFS, we have known about Echelon since the 70s - Carnivore, Room 641A, etc... etc....
Indeed, the geopolitics works both ways. I think the Chinese are looking at RISC-V as a safe-guard against American embargoes of the kind that killed/maimed HiSilicon, the non-Chinese nation-states are looking for full transparency of silicon design, and the manufacturers want full access to a truly global market that includes China. I'm not sure that SiFive RISC-V designs can be competitive with ARM/x64 in the short-term but the geopolitics creates a potential niche.
I think you are conflating assembly and manufacturing. TSMC is Taiwanese and Samsung is South Korean. Personally I'd prefer that all nation-states and their security organizations followed the Golden Rule and promoted free trade rather than protectionism.
> made with slave labor from congo, china and other countries
I don't equate low-wage manufacturing/assembly with exploitation and certainly not slavery but I understand that this is a common metaphor. Contemporary slavery [1] is a real thing and, until I see contrary evidence, I'm assuming it makes zero contribution to high-tech assembly or manufacturing.
We have a min wage in the US, and even that is not equal across all states - we do not have UBI or universal health care, we have shitty industries such as insurance (forced hedge funds), and in general we brainwash people into accepting it as normal.
fuck that.
YC even wants you to think that all VC is altruistic. NOPE.
If it has driver/firmware blobs, I'll say no thanks and stick to my ARM Rockchips. If they deliver this on linux with full source code, I will definitely buy their boards.
What kind of performance is expected from this compared to say the RPi 4B? Any noteworthy changes in performance characteristics (e.g better/worse I/O or something)?
Highly unlikely to be very competitive. At this stage it's about getting RISC-V hardware into developers' hands, and previous boards either cost $1000 or were limited in ways where they could not run Linux well. (I am one of the Fedora/RISC-V maintainers.)
Just saw that this SiFive board, the HiFive, is retailing for $679 - can you please comment on whether it's any good for toying around? Specs look more than decent - almost desktop-class computing.
I have a couple on order, and I've talked to one of the developers. It looks nice - PCIe, NVMe SSDs, mini ITX format, 16GB RAM, more cores, etc - but not in the same price point or market segment as an SBC. We will likely buy a pile of them to do Fedora builds.
Fedora have been shipping on RISC-V for about three years already. Last I saw, around 95% of packages work. The main exceptions have been things that need some JIT that hasn't been ported yet -- gcc and llvm have been working for years.
Why would you want to use these as build machines? It seems more efficient to just cross-compile on your fastest build machines. You get much faster CI feedback that way. Obviously you want to validate RISC-V code on native devices, but using them as builders seems wasteful.
I'm genuinely curious why that's desirable. Maybe I just misunderstand your comment and you want the RISC-V boards for validation i.e. make sure you can self-host the distro even if in general releases are cross-compiled on your (I assume AMD64) build fleet.
I'm sure the Fedora builds aren't designed for cross compilation (which is far from trivial for most packages not designed for it). Also, man-power is the most precious resource so it would be a waste to spend time trying to cross-compile what could be built natively.
Other commenters have pointed out how tricky cross-compiling can be. Debian has been making a lot of progress here, mostly thanks to Helmut Grohne it seems. See http://crossqa.debian.net/ and https://wiki.debian.org/DebianBootstrap (slightly out of date, but still useful).
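For a single self-contained program the mechanics are trivial either way (illustrative; for a distro the hard part is the package dependency graph, not the compiler invocation):

    gcc -O2 -o hello hello.c                      # native, on a RISC-V board
    riscv64-linux-gnu-gcc -O2 -o hello hello.c    # cross, from an x86_64 host
    file hello    # => ELF 64-bit LSB executable, UCB RISC-V, ...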
The CPU cores are comparable to the ARM A55, rather than the A72.
It's more MHz and a better uarch than the cores in a Pi 3, so should outperform even the newer Pi 3+ on tasks that don't use NEON and don't use more than 2 cores.
> All GPIOs can be configured to different functions including but not limited to SDIO, Audio, SPI, I2C, UART and PWM
Does supporting audio mean that these GPIOs can be used as analog-to-digital converters? There are home-automation applications where reading a voltage in an Ethernet-connected device is a good fit, but from what I've seen, on a Raspberry Pi that requires extra hardware connected to the IO ports.
As an aside, I really wish that these boards would include more than two PWM outputs. The Raspberry Pi has two as well, and it feels really limiting. Analog control instead gets farmed out to microcontrollers, when you could probably make it work with a single board if there were more pins to work with.
I used to love the beaglebone for that. Run application logic on the main arm core, farm the microcontroller stuff out to the embedded PRU microcontrollers (which could access all the IO functions). Still a single board solution.
Honestly, I'm still mostly getting started in the area as well, but I'm looking to use them for motor control. An example of this would be this paper [1], which I'm hoping to replicate to some degree, but requires at least 4 PWM outputs. I'm currently planning on using an Odroid-C4, which has 6 PWM outputs.
But the work I've needed to put in to get a motor controller working was way more than for an RPi4b, since libraries for it already exist for the Pi vs needing to rewrite them for the Odroid board. It would have been cool to try it with an open-hardware board as well. But it's not just quadcopters - a lot of other projects like rovers, or position control with multiple steppers on different axes, benefit from extra PWM, and doing it in software can lead to too much jitter, so the hardware timers are necessary.
Eh, a decent audio DAC pretty much requires another chip since it takes up a ton of die space, so you'd be an absolute fool to use the same process node as your logic. Since this looks to be beagleboard compatible, you should be able to use an audio cape like this https://www.element14.com/community/docs/DOC-67906/l/beagleb...
If you don't care about it being decent, you can just use the PWM channels like the RPi does.
Many cheap and easy to interface I2C and SPI chips for that, which is the way the industry is going generally. Or throw in a tiny Arduino with stock Firmata firmware, and use that as an I/O extender.
Looks like their Hard IP is proprietary but based on open high-level designs such as Rocket and BOOM. The peripherals situation is mixed, but they've stated in the past that they're quite OK with using open designs whenever feasible.
They have source code files for their 'Freedom' core designs up on GitHub under an Apache 2.0 license. These can be directly used to evaluate their designs as FPGA soft-cores.
SiFive has E-series (aka "Freedom Everywhere") and U-series (aka "Freedom Unleashed") cores, both of which seem to be based on Rocket. And they do provide high-level designs for both on their GitHub, under a free license.
I wrote SiFive CPUs, not RISC-V there. (because this BeagleV board uses SiFive)
There's a _huge_ gap between academic implementations and ones that are actually worth shipping. I can see RISC-V winning quite handily for microcontrollers. But on anything bigger than that...
Western Digital's open sourced SweRV cores are approximately the same performance as the closed-source SiFive 7-series ones.
The main difference is the WD cores are 32 bit, no MMU, probably DTIM rather than cache (not sure).
The guts of the dual-issue decode, register file and pipeline are there, which is the hard part. The other bits could be relatively easily added by the open-source community, probably cribbing components from RocketChip.
They aren't for the customers of SiFive. What is your greater point with this comment? That SiFive isn't Open Source or that they are equivalent to Arm?
This video from SiFive clearly explains the RISC-V ecosystem and how SiFive is involved.
Open source refers solely to the software side of things in these groups and types of projects. Thus, all of the drivers and software you need to use the SiFive are open source, and so you can say it is an open-source design. However, it is not an "open hardware" design in that the IP used to design the chip is not released.
> However, it is not an "open hardware" design in that the IP used to design the chip is not released.
The marketing page for the BeagleV specifically says "Open Hardware Design", but I agree that they probably didn't mean Open Hardware beyond just the PCB layout or something similar.
It would be very surprising if they released the detailed schematics for the SiFive cores. Until there's some kind of common micro-fab standard where every major university can have a legit semiconductor fab for small scale operations, giving people the design doesn't really do much.
I really wish that someone would work on the problem of making affordable, small-scale semiconductor fabrication possible on a reasonably modern node (<= 32nm). It's a hard problem... but everyone in the world being dependent on a few large fabs is also a hard problem.
Speaking as someone at Beagle, we see this board as an important step to more openness in the ecosystem, especially helping software developers improve the state of open source for RISC-V. It is also just a really cool board. Beagle will do more to try to get more openness at the RTL-level moving forward, perhaps even with FPGA boards at an interim step. The shuttle services are starting to make releasing a new chip design in reasonably modern nodes more possible.
The processor maybe, but what about e.g. the USB3 controller, WIFI, etc., which can also freely snoop around your memory? Are there even open USB3 implementations?
Does it have a GPU? I can't quite decipher the specs.
Random googling suggests that SiFive have partnered with PowerVR for GPU, which might even enable vulkan support, but I suppose this SoC is not one of those?
> The initial pilot run of BeagleV will use the Vision DSP hardware as a graphics processor, allowing a full graphical desktop environment under Fedora. Following hardware runs will include an unspecified model of Imagine GPU as well.
Also, ImgTec are planning on writing and upstreaming open drivers for Linux and mesa for another RISC-V based board, so probably those drivers will work here too.
I don't think that necessarily says that ImgTec will be upstreaming the open drivers, more that they don't have a better option at the moment and will be replacing closed source components with each revision.
I hope they will, but I'll believe it when I see it. They've been extremely allergic to open source in the past.
The post specifically says upstreaming open drivers, here is a quote:
> Imagination is also creating a new open-source GPU driver to provide a complete, up-streamed open-source kernel and user-mode driver stack to support Vulkan® and OpenGL® ES within the Mesa framework. It will be openly developed with intermediate milestones visible to the open-source community and a complete open-source Linux driver will be delivered by Q2 2022. Imagination will work with RIOS to run the open-source GPU driver on the PicoRio open-source platform.
I agree it seems like quite a change of heart and I definitely won't be holding my breath.
That'll be interesting to see how much is actually new. I'm pretty sure they licensed the core of their shader compiler stack, so there was no way it was going to just be opened up, but their GPU ukernel looks totally homegrown and would be a shame to throw away.
They've been delivering that vacuous promise every few years. Being bought by Canyon Bridge, a private equity fund owned by the Chinese government, a few years ago has unfortunately not changed anything.
Having reverse engineered a bit of the drivers, I think it's because they culturally think that all of their value add is in the software. Patents have expired on the TBDR fixed function hardware blocks. The rest is just a combo of a little RISC core that does job dispatch (Programmable Data Sequencer in their parlance), and a cluster of SMT barrel scheduled cores (used to be called USSE in the SGX days, not sure now) that do the heavy lifting wrt shaders that don't really have any secret sauce AFAICT.
The value add is all in the software stack where they run a full little ukernel on the main GPU cores, and optimizing the shit out of the software that runs on those cores from their pretty clever compiler.
I bet they think that if they open source the drivers, that's giving away the one thing that makes PowerVR GPUs special in the first place.
If an IMG person reads this: y'all are wrong about that last piece. Your company is dying without opening the drivers, and you'll be able to control the hardware/software co-design in a way that nobody else can even if you give away the software. You'll have to keep doing work to have new hardware available and stay ahead of the curve, but that's true anyway and is the sign of a healthy business. Sure beats withering away as your patents expire.
Can someone explain the technical benefits of this architecture over the competition? That is should I be excited if I don't care about e.g. openness? Or is it simply an effort to create something that is a half-decent cpu alternative but open?
Is there anything about RISC-V that is "better" simply because it is a later design than others? Is it likely to evolve faster because it is open or more modern?
> Is there anything about RISC-V that is "better" simply because it is a later design than others?
A lot of it is because it is newer, and the designers have learned from previous architectures. It is a relatively clean and straightforward instruction set, designed to be easily and efficiently implemented.
There's not anything that is super crazy revolutionary, in contrast to the (still vaporware) Mill CPU architecture.
> Is it likely to evolve faster because it is open or more modern?
They have a good extension mechanism that allows relatively clean additions to the instruction set. Some of the recent ones like the vector extension aren't finalized yet. Anyone can propose their own extension. Historically, ARM might work with their most important customers to implement an extension, but good luck getting their attention if you're not already paying them millions per year.
The Mill has been in development for... ~18 years now! Soon they will be able to hire engineers who are actually younger than the company. I wonder if there has ever been a tech company that survived so long without bringing a product to market. Duke Nukem Forever took ~14 years.
According to Ivan on their forums (so take this with a grain of salt, as it's from the horse's mouth rather than an external assessment), they were apparently supposed to be levelling-up in the summer of 2020.
They have at least secured a decent patent portfolio, particularly on the belt.
From what I understand, all the developers have day jobs or are independently wealthy and can afford to work on it without (much?) pay. They haven't accepted VC money, even though that would likely have sped up development considerably.
That's patently wrong. :) I worked for the Mill for a while. The Mill is as real as it gets. EDIT: No, there is no actual CPU, but the software, the compiler, the simulator, etc. exist.
I think in the real world "No percentage of each sale payments to ARM" is what will drive RISC-V. An "open" ISA doesn't force anything else to be open.
So, use cases like Western Digital, where they can quit paying ARM a percentage of every hard drive they sell, for example.
As for technical advantages, each RISCV vendor has their own choice of how to implement, so it's hard to say anything broad that applies to all RISCV implementations. The Berkeley BOOM project is hitting really good DMIPS/MHz numbers. LowRISC has some interesting memory tagging and "minion core" ideas, etc.
Edit: I left out perhaps the most important reason RISCV has a lot of hype. They've been successful getting first class support from the Linux kernel maintainers.
The percentage is very small, though. So this argument only works for very high volume use cases, which is why the RISC-V eval boards are currently far more expensive than comparable ARMs.
Do WD do their own silicon yet, or do they just buy the parts?
How many ARM microcontrollers are low volume? Pretty much all of the examples I can think of are incredibly high volume. STM32, SAMD21/51, i.MX, etc.
WD said they were finishing taping out their first production design about a year and a half ago, so I assume they have started shipping them at this point, but it's hard to find info.
Hasn't WD been doing their own silicon (at least from a design standpoint, they still use someone else's fab) precisely because the 'small percentage' ARM charges matters for their margins? In a world where we have ESP-01 boards which retail for $2, even a couple of percent matters.
The ISA is pretty nice, simple, and well documented. And since it's "open", people can create their own implementations. Like this guy, who is creating a RISC-V processor from scratch, without using an FPGA.
One of the other replies points to the RISC-V extensions feature. I think even someone who "doesn't care about openness" would at least benefit from that in the architecture. It means the same compiler can be used to bootstrap things, and simple steps can be added to greatly optimize specific types of code, like AI stuff. This board really stands out in AI performance.
Also, having things open means that the supply-chain can be more stable, with less chances of a single glitch in the system halting deliveries for any time. This is driving a lot of interest in RISC-V right now.
I'm quite excited about vector instructions. The approach used in RISC-V is very refreshing coming from SIMD. But I would not expect an instant impact from a user point of view.
The main difference is that RISC-V is a lot more modular, so it's going to be more difficult to distribute binaries for, but more flexible if you're doing something completely vertical. Also a lot of the modules bundle relatively common/easy instructions with niche/difficult ones, e.g. multiply with divide.
> The main difference is that RISC-V is a lot more modular, so it's going to be more difficult to distribute binaries for, but more flexible if you're doing something completely vertical. Also a lot of the modules bundle relatively common/easy instructions with niche/difficult ones, e.g. multiply with divide.
I don't think it'll be worse than ARM and it's decidedly better than x86.
There are SEVEN major revisions of ARMv8. Then there's v8-R, v8-M, and additional 32-bit variants of each instruction set in addition to both ARMv7 and ARMv6 which also still ship billions of chips per year. Oh, and under pressure from companies, ARM also allows custom instructions now. Those aren't just theoretical either -- Apple at least added a ton of custom matrix instructions to the M1.
For x86, supporting only semi-recent processors (2006 Core or greater) leaves you still checking for support for: SSE3, SSE4, SSE4.1, SSE4a, SSE4.2, SSE5, AVX, AVX2, AVX512, XOP, AES, SHA, TBM, ABM, BMI1, BMI2, F16C, ADX, CLMUL, FMA3, FMA4, LWP, SMX, TSX, RdRand, MPX, SGX, SME, and TME. That's 29 instruction sets and not all of them have use on both Intel and AMD chips.
RISC-V seems at least that cohesive. If you're shipping a general purpose CPU, you'll always have mul/div, compression, fusion (not actually instructions), privilege, single precision, double precision, bit manipulation, and probably a few others.
Where you'll run into missing mul/div or no floats is microcontrollers or "Larrabee"-style GPU cores. In all of those cases, you'll be coding to a very specific core, so that won't really matter.
Thankfully, we've had ways to specify and/or check these kinds of things for decades.
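On RISC-V Linux, for instance, the ISA string is right there to check, and the compiler accepts the same string (a sketch; the extension letters shown are just an example):

    grep isa /proc/cpuinfo                        # e.g. "isa : rv64imafdc"
    gcc -march=rv64imafdc -mabi=lp64d -O2 app.c   # build for exactly that set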
> leaves you still checking for support for: SSE3, SSE4...
Find me a processor that supports SSE4 but not SSE3. That's the problem. With x86 you pretty much can say "we're targeting processors made after 2010" or whatever and that's that. You make one binary and it works.
RISC-V allows a combinatorial explosion of possible CPUs. You can have a CPU that supports extension X and not Y, but another one that supports Y and not X.
If you're in an embedded situation where you're building all the software yourself then that's fine.
If you're on a general purpose PC/smartphone with packaged software then the OS vendor specifies a base set of extensions that everything must implement -- for Linux at the moment that is RV64IMAFDC aka RV64GC.
All of those extensions (except maybe A) are very generally useful and pervasive in code.
Some other extensions, such as the Vector extension, will provide significant benefits to applications that don't even know whether the system they are running on has them -- you'll just get dynamically linked to a library version that uses V or doesn't, as appropriate.
To take a very trivial example, on a system with V, every application will automatically use highly efficient (and also very short) V versions of memcpy, memcmp, memset, bzero, strlen, strcpy, strcmp and similar.
The same will apply to libraries for bignums, BLAS, jpeg and other media types, and many others.
If you're doing something embedded nothing prevents you implementing multiply but not divide. RISC-V gcc has an option to use an instruction for multiply but runtime library call for divide.
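Something like this, if I have the flags right (the toolchain triplet and file name are illustrative):

    # M extension for multiply, but integer division via a libgcc call
    riscv64-unknown-elf-gcc -march=rv32im -mabi=ilp32 -mno-div -O2 -c ctrl.c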
In fact, even if you claim to implement the M extension (both multiply and divide), all that is necessary is that programs using those opcodes work -- but that can be via trap and emulate. If your overall system can run binaries with multiply and divide instructions in them, then you can claim the M extension. Whether the performance is adequate is between you and your customers. Note that there are also vast differences in performance between different hardware implementations of multiply and divide, with 32-64 cycle latencies not unheard of.
The same applies for implementing a subset of other extensions in hardware. You can implement the uncommon ones in the trap handler if that will meet your customer's performance needs.
Yeah if you're willing to do something completely non-standard of course you can do whatever you want.
> Note that there are also vast differences in performance between different hardware implementations of multiply and divide, with 32-64 cycle latencies not unheard of.
This problem exists even among different CPUs that all implement the M extension in hardware -- or different CPUs in ISAs where multiply and divide are not optional, such as x86 or aarch64.
The worst (but conforming) hardware implementations are barely better than the best software emulations -- and maybe worse if the software emulation is running on a wide OoO CPU.
Interesting, I might get myself one of these to play with.
But the board has an HDMI output; however, the description doesn't specify the display processor / GPU functionality, or even whether it's just a simple framebuffer, etc. There are specifications for video processing, but I get the impression this is for camera / video input, not output.
"The initial pilot run of BeagleV will use the Vision DSP hardware as a graphics processor, allowing a full graphical desktop environment under Fedora. Following hardware runs will include an unspecified model of Imagine GPU as well."
ImgTec are planning on writing and upstreaming open drivers for Linux and mesa for another RISC-V based board, so probably those drivers will work here too.
> BeagleV™ is the first affordable RISC-V board designed to run Linux. Based on the RISC-V architecture, BeagleV™ pushes open-source to the next level and gives developers more freedom and power to innovate and design industry-leading solutions with an affordable introductory price at $149.
That's not the point. RISC-V machines are not yet price-competitive, but presumably will be at some point. This is for people who are interested enough in RISC-V to spend some time and ~$100 on it, but not thousands of dollars. And the main reason many people are interested in RISC-V over Arm is that it's open and license-free.
Raspberry Pi 4 with 8 GB RAM is $75, and that's not counting a power supply, SD card, or HDMI cable. Adding those at pishop.us (16 GB card) takes the price to $95.85.
We don't know whether the BeagleV price includes those necessary items or not but either way coming within a factor of 2 of Raspberry Pi price is pretty impressive at this stage. Up until now you've paid $3000 for a 1.5 GHz RISC-V setup with HDMI and USB etc, if you've got one actually in your hands (HiFive Unleashed plus MicroSemi HiFive Expansion board), or $665 for one that will start delivery in March (HiFive Unmatched). There is also the $499 Icicle which is quad core but only single-issue (like the HiFive Unleashed) and only 600 MHz.
From the Guidelines: Please don't complain about website formatting, back-button breakage, and similar annoyances. They're too common to be interesting. Exception: when the author is present. Then friendly feedback might be helpful.
Still, the page loads in ~3secs on my 2012 i7-3770K with 100Mbps...
This looks really awesome. It's been a longstanding desire of mine to build a custom laptop with one of these powerful small SoCs (originally I thought of something like the LattePanda, which is similar to an Intel MacBook Air, but on a slightly-bigger-than-Pi board).
If the form factor were reduced to something like an NSLU2, you could attach a large spinning disk and have a desktop server/NAS device - i.e., an unlocked WD My Cloud with open-source community Android and iPhone apps for the device.
Why are the Linux boards always designed to be cheap? What about those who have a little more dough and are willing to put up some cash to get extra? Just create some more expensive premium models, not only cheap stuff, damn it.
Because, obviously, everyone wants to be the RISC raspberry. Becoming the expensive-alternative-to-raspberry-nobody-knows-about is not so attractive.
Moreover, it looks to me like these cheap boards are not lacking features. What would you want in a premium version that is not present in the current models?
At the end of the day most of these are just toys and it's hard to justify spending $850 on a device that is less capable than a $500 mini form factor OEM box if you're doing something serious.
The type of questions in the application questionnaire ("Click to Apply") is encouraging -- it makes me think they anticipate some real professional interest in this.
I think everyone is well aware of that, but unless you're making and able to sell millions of boards and have hundreds of millions in cash flow, there's no real path to using 10nm or smaller nodes. FWIW AllWinner are currently manufacturing 5 million RISC-V chips (with TSMC IIRC). I don't know what node they are using.
https://developers.google.com/speed/pagespeed/insights/?url=... score still isn't great, but at least the images aren't the bottleneck anymore. The site is in the process of a grand rebuild that should get rid of the recurring jquery loads, but that's a few months away.
Likely that it supports the Supervisor execution environment from the riscv spec [1]. This means it can run the typical ring 0 and ring 3 for kernelspace and userspace respectively, and importantly the board supports virtual memory.
Most risc-v cores to date have been more on the micro-controller side of things (ie without an MMU). The cpu on this board has one so it can run Linux.
I don't think it's open source. RISC-V is an open ISA, but implementations are not required to be open source (although there are many implementations which are).
I like to think of it as an open standard, like where anyone can download the TCK, versus something where you need to pay for the specs from ISO (for example, Prolog is https://www.iso.org/standard/21413.html, which is 185€/$250 USD).
How is RISC-V better than x64? Aren't both of them just ISAs? The only pro I can find is that RISC-V is cleaner (less legacy). But no one programs in assembly these days, so why is this an issue?
It's more about the licensing of the ISA. Try creating your own compatible x86 processors, and find out how long it takes before Intel's attack lawyers come down on you. ARM is a bit better but you still have to pay licensing fees. For people who want to git clone a design and manufacture it without involving lawyers or licensing fees, RISC-V is likely the best choice.
Think of it as MP3 vs Ogg Vorbis. It's equivalent, slightly newer and a bit nicer, but really the benefit is that it's patent-unencumbered and you are free to tinker with the ISA and build your own versions at will.
Not many people program in assembly, but teams working on compilers like gcc and LLVM very much do care about the ISA.
Apple has been able to make great progress moving from Intel x86 to their own custom ARM M1 processor. The way I understand it, RISC-V may allow similar advances.