The RaspberryPi is also "disk-less", which to me is one of the major limitations.
It's a super interesting little board, and I love that it's a RISC-V, that could really help getting the CPU in the hands of people. I just don't know enough about these things to understand why there are no storage connectors (other than an SD card slot).
This BeagleV looks like it would work well for me as it has Ethernet and USB ports.
1. With NetBSD, I can run out of memory with no major problems. With Linux, this has not been the case. I have to be more careful.
Could you elaborate a bit more on that, please?
It's been a while since I last touched NetBSD, but I don't remember anything special regarding that aspect.
Then again, I've seen BSDs handle it even worse.
("tuning of" or "turning off")
Do you mean something like
Seems too late to edit now.
Note that one drawback with "disk-less" via read-only SD card or USB stick is how to have a good source of entropy at boot time.
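One workaround is to carve out a tiny writable partition just for a seed file. A rough sketch of the idea (NetBSD's rndctl(8) and the rc.d random-seed machinery do this properly; the path and sizes here are made up):

```shell
# Persist an entropy seed across reboots of an otherwise read-only system.
SEED_DIR=./seed-partition          # hypothetical small writable mount point
mkdir -p "$SEED_DIR"

# At shutdown (or periodically): stash 512 bytes of kernel randomness.
dd if=/dev/urandom of="$SEED_DIR/entropy-seed" bs=512 count=1 2>/dev/null
chmod 600 "$SEED_DIR/entropy-seed"

# At boot: mix the saved seed back in before anything needs randomness.
cat "$SEED_DIR/entropy-seed" > /dev/urandom 2>/dev/null || true
```

The key point is that the seed must live on writable media and be rewritten each boot so it is never reused.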
Actually, I saw some discussion about exactly that recently. I think many people would be glad to do it if they knew how. It is not easy when you are relatively new to the area. It is not too complicated, but there is a lot of information, and it is not obvious what is important and what is not, so you read it all and get overwhelmed easily.
>I do not run X11 anymore. I stay in VGA textmode. Doubt anyone would want to do exactly what I do.
I do exactly what you do, simply because X11 is not working on the Pi with the TV I have, and I have nothing else since my MBPro simply died. (If you wish details on how, see here: https://news.ycombinator.com/item?id=25778247 )
>If you study how NetBSD's install media are created that will teach you almost everything you need to know. Happy to walk you through it though if you want to try NetBSD.
Thank you very much. This advice is already precious, because it helps to orient yourself in tons of info, and now I know where to start digging. I would love to try once I have the equipment and time. I really wish to encourage you, though, to write down this process; I am sure there are people who are desperately looking for this setup and how to achieve it properly, bothered by many questions. This guide could also be a good practical introduction to NetBSD, by the way.
>Chromebooks are claimed to be 100% safe from certain types of attacks. The developers came up with this silly "whitewash" gimmick. Their motivation for a disk-less-like system is to force users to store personal data in the cloud. Ugh.
It’s a shame really, and done for the wrong purpose. I never considered Chromebooks seriously for that reason.
What I really like about it is the ability of fast recovery, which gives more freedom for experiments. I love it.
Not only does this type of setup give freedom for experiments (which NetBSD really does in general) but it makes experiments easier. You can put multiple different kernels on the USB stick and reboot into each of them to test pros and cons of different release versions and/or configurations. Or you might boot one computer with kernel A, pull out the stick, insert into another computer and boot kernel B and run them simultaneously. No HDDs needed.
I might do a writeup; some users have done writeups on "disk-less" in the distant past. I have some simplification tricks no one has ever written about. However NetBSD has such great documentation relative to almost every other project and the developers tend to be "quietly competent" in the best way. It was never a culture of "HOW-TO's" as you will find in Linux. Studying the source and following source-changes and other mailing lists is better than any writeup, IMO.
It is like the old saw about teaching a man to fish. It is worth learning. Things do not change in NetBSD so fast or dramatically that what you have learnt will be later considered "obsolete".
PCIe, 16 GB DDR4, two M.2 (one for storage, one WIFI), quad core 1.5 GHz (same cores as this board, but twice as many)
I'm sure people would start comparing it pricewise to an i7 or something made at significant economies of scale. I think that is an unfair comparison due to the exotic CPU. Exotic in the sense that these aren't commodity CPUs.
I note that it has a PCIe x8 slot, but they don't have drivers for a video card (or any card) yet so it's kinda useless.
SiFive demonstrates the HiFive Unmatched using a $300 150W Radeon RX580. https://www.youtube.com/watch?v=HVsnnYuvDXI
You can complain that you don't want to spend that much on a graphics card  but you can't complain that it's not supported.
Cheaper ones will work too.
You can sometimes hang a USB Ethernet dongle off them, but performance on those tends to be somewhat limited.
I used to wonder why embedded boards didn’t have multiple Ethernet ports and why it’s not common to use Ethernet to connect to peripherals instead of “old interfaces”. Until I tried it for a project. It turns out Ethernet uses roughly 0.5 to 2W per port depending on speed (that’s 1-4W per connection).
The actual routing is done as a router-on-a-stick. It’s not perfect but it’s simple, scalable, and reliable.
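For anyone unfamiliar, router-on-a-stick is just VLAN subinterfaces on one physical port. A sketch in Debian ifupdown style (VLAN IDs and addresses are made-up examples; needs the vlan package):

```
# /etc/network/interfaces fragment: one trunk port, one subinterface per VLAN
auto eth0.10
iface eth0.10 inet static
    address 192.168.10.1/24
    vlan-raw-device eth0

auto eth0.20
iface eth0.20 inet static
    address 192.168.20.1/24
    vlan-raw-device eth0
```

The switch end carries both VLANs tagged on the single cable and the board routes between them: one NIC, many networks.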
Some of them have PCIE which you could connect a network card to. That seems like a more practical way to allow flexibility than having a lot of special purpose boards.
On lower end hardware you'd probably get higher throughput over the single link than trying to add a USB LAN adapter to it.
Could you say more about the use case you have in mind?
HardKernel/ODROID has things like the ODROID-HC4.
Also, USB3-to-SATA isn't completely crazy to do.
Lack of sata/m.2 port is very frustrating to me.
The Turing Pi project looks to do that with their next version: https://turingpi.com/v2/
Should be much more stable than a USB3 connection.
Sure, it would be great to have connectors for everything and support for every cool standard but these guys have to give up all what is non-essential to keep it small, cheap and possible to engineer by a small team.
There's no reason you couldn't kick SD transfer speeds up towards SSD speeds, other than that the protocol doesn't allow for it.
SD is fantastically simple, so the boot ROM can get at a bootloader without much effort (generally the boot ROM just looks at a fixed offset on the MMC device). Once you start speaking newer, faster protocols, this simplicity is lost. You're not likely to find a boot ROM that implements all the bits needed to get u-boot from a SATA device.
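To make the fixed-offset point concrete, here's roughly what flashing u-boot onto an SD card looks like (the sector offset is SoC-specific and the one below is hypothetical; check your SoC's boot ROM documentation):

```shell
# Simulate an SD card with a plain file; on real hardware this would be
# /dev/sdX or /dev/mmcblkN and would need root.
dd if=/dev/zero of=sd.img bs=512 count=2048 2>/dev/null

# Stand-in for the real u-boot binary.
printf 'UBOOT' > u-boot.bin

# Many boot ROMs fetch the loader from a fixed raw sector -- no partition
# table or filesystem parsing involved. Sector 16 here is just an example.
dd if=u-boot.bin of=sd.img bs=512 seek=16 conv=notrunc 2>/dev/null
```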
In a perfect world, once these are no longer developer devices, the mmc would be replaced with some spi flash (or even an emmc) with just u-boot.
On 90% of embedded dev systems, you're better off thinking of the SD card as a "BIOS chip" than a hard drive. The fact that you can also use it as a block device to store a rw filesystem is almost incidental, and should probably be avoided.
Oh, that's a nice idea! It would be nice if you linked to a guide or wiki page so that more people can replicate your setup.
Also, CFast aren’t as ubiquitous as microSD. I can go to Target or Walmart and get a wide “selection” of microSD, but I’d be hard pressed to find a CFast card. So the chances of an SBC using CFast over eMMC or microSD is low.
There is one. It's called UFS 3.0. The Sony Xperia 1 II has it. They're the same size as microSD, but look slightly different.
I guess too little, too late. None of the 4 dead Samsung cards I could find have the V mark (too old for that), which is required for validation, and they probably only started adding support for regular EVO cards a year or so ago.
Bought a cheap NUC with a disk and a BIOS and it is working like a champ - boots every time. I probably paid a little more but it's enough for my needs - a simple login and (occasional) DLNA server.
Also, when comparing costs you should take into account things like enclosure, power supply, SD card(s), etc.
I'd say I probably paid around $250 or so for the NUC and a SODIMM and I had a SATA boot disk already. I had previously owned ODROID, libre computer (le potato), and Pi(s). These were probably in the ballpark of $80-$120 each, when including all the necessary equipment.
I'd far rather see a 2x PCIe port that could be used for whatever you wanted.
Raspberry Pis are very popular as audio streamers, media centers that stream off the network, home automation, etc.
But all of them greatly suffer from the fact that the card can randomly die on you. Which is quite frustrating.
The only sensible solution (imho) to this currently is netbooting, but it can be a hassle to set up in some cases.
An integrated 16GB emmc would do wonders for everything mentioned above.
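For those who do go the netboot route, most of the hassle is on the server side and boils down to a small dnsmasq config. A fragment along these lines (addresses and paths are examples for a typical LAN):

```
# /etc/dnsmasq.conf: proxy-DHCP plus TFTP, leaving the main DHCP server alone
dhcp-range=192.168.1.0,proxy
enable-tftp
tftp-root=/srv/tftpboot
# The magic string Pi boot ROMs look for:
pxe-service=0,"Raspberry Pi Boot"
```

The proxy-DHCP mode means your router keeps handing out addresses as normal; dnsmasq only supplies the boot information.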
And everybody bitches that the board is too expensive.
I don't want to lug around anything, I want it to sit behind my monitor. Also an M.2 drive isn't that big, it could fit in many existing cases.
You want "industrial" cards from a vendor tailored to the space. I wouldn't trust even those new industrial sandisk cards.
1. most SD card devices are battery powered
2. SD card failure is acceptable because they are replaceable
3. SD cards are not used as main storage for a computer
That leads to low quality SD cards.
I was also trying to run it as a NAS, so it had a decent amount of IO.
Maybe it could have been fixed with a better power supply, but switching over to the powered hub was easier to figure out.
Faster-spinning platters would draw more power, I guess.
I would love a RISC-V board to play with that is a bit more stable and about the size of a Raspberry Pi within a reasonable price range. The SiFive development boards are pretty pricey, definitely showcase hardware (look at all these peripherals or this is basically a desktop computer!).
I'm hoping with the explicit call out of open hardware and open software that this board won't have the same issues as the Beaglebone Black...
The price wasn't so bad if you're in for the feature set (mainly the IO features). Using the embedded PRU controllers can replace the arduinos ones typically connects to their RPi.
However, BBB is very old now. A single core cortex-A8 is abysmally slow.
Getting the GPU working was always a pain. But I always used them headless so it wasn't an issue for me.
It's pretty easy to use to build an image for Debian Buster/Stretch or Ubuntu Bionic Beaver, he has various configurations that cover IoT, console only/headless, GUI and a few other combos. It's pretty easy to create your own config with the Kernel and packages that you want.
The images can be used for flashing to the eMMC via an SD card (or via USB).
I've found images built this way to be very up to date and absolutely rock solid thanks to Robert's curation.
And TI are quite decent at contributing to the upstream kernel and u-boot trees. Generally only a few months after a new TI SoC is announced there's enough support in the upstream u-boot and kernel to boot the board and do some useful things. Generally by the time silicon is buyable by mere mortals mainline is in pretty decent shape.
We had to work with the Balena.io (formerly resin.io) team to get full hardware support on the BeagleBone Green Wireless (wifi drivers were the biggest hangup, iirc) a few years ago, but they were incredibly responsive and have done a fantastic job maintaining a stable distro for these boards.
If you want a straightforward out-of-the-box experience, I highly recommend Balena.io.
There was a way to tweak it so you didn't have to hold the button down, but it was kinda involved IIRC so I never got around to it.
> The initial pilot run of BeagleV will use the Vision DSP hardware as a graphics processor, allowing a full graphical desktop environment under Fedora. Following hardware runs will include an unspecified model of Imagine GPU as well.
Sounds like a direct competitor to the Raspberry Pi. I don't know if the Imagine GPU planned for the next iteration is playing catch-up or leapfrog. The Ars Technica article links to "SiFive creates global network of RISC-V startups", which I think demonstrates that SiFive is strategically leveraging or responding to the geopolitics surrounding Chinese technology.
That likely means those devices are going to be stuck on an outdated kernel, unless Imagination steps in and provides ongoing binary support for newer kernels for their GPUs like x86 GPU manufacturers do. However, this being RISC-V with 2 existing devices total, I don't count on it.
So close, yet so far.
Probably those work here too.
I would consider the Black and Green to be competitors too.
Would be really interested to get a good analysis of the Raspberry Pi, BB Black, and this new board.
Having said that, the BBBs are a great device! They're rock solid and have far better I/O options than the RPi: 4 UARTS, multiple I2C, SPI & CAN buses, EHRPWM, a ton of GPIO, 2x PRU processors, LCD driver, both USB and USB Gadget, oh and of course, the onboard eMMC is great compared to booting from an SD.
So I'm psyched about the Beagle-V.
We should all be debating HW with respect to the fact that you can't name a single device (aside from weapons) that contains no Chinese-manufactured components...
Every phone or machine is almost 100% Chinese-built.
"designed by apple in cupertino california" (but made with slave labor from congo, china and other countries)
And we already know about all the backdoors both China and the US do...
FFS, we have known about Echelon since the 70s: Carnivore, Room 641A, etc., etc.
I think you are conflating assembly and manufacturing. TSMC is Taiwanese and Samsung is South Korean. Personally I'd prefer that all nation-states and their security organizations followed the Golden Rule and promoted free trade rather than protectionism.
> made with slave labor from congo, china and other countries
I don't equate low-wage manufacturing/assembly with exploitation and certainly not slavery but I understand that this is a common metaphor. Contemporary slavery  is a real thing and, until I see contrary evidence, I'm assuming it makes zero contribution to high-tech assembly or manufacturing.
We have a minimum wage in the US, and even that is not equal across all states. We do not have UBI or universal health care; we have shitty industries, such as insurance (forced hedge funds); and in general we have brainwashed people into accepting it as normal.
YC even wants you to think that all VC is altruistic. NOPE.
Their page mentions these specs:
• SiFive U74 RISC-V Dual core with 2MB L2 cache @ 1.5GHz
• Vision DSP Tensilica-VP6 for computer vision
• NVDLA Engine (configuration 2048 MACs@800MHz )
• Neural Network Engine (1024MACs@500MHz)
I have just noticed the recent trend of neural hardware support on mainstream chips: Apple's M1, this chip.
Wondering what the implications are for software and user-space. What kind of devices/apps will this enable?
One area I'm interested in is turning lidar scanner point data into 3D geometry on the device, thus solving a big data-management issue.
Do you expect a lot of things to break on RISCV?
Same for Debian, where the percentage of packages that build on RISC-V is second only to ppc64 out of the "minor" ISAs. https://buildd.debian.org/stats/graph-ports-week.png
I'm genuinely curious why that's desirable. Maybe I just misunderstand your comment and you want the RISC-V boards for validation i.e. make sure you can self-host the distro even if in general releases are cross-compiled on your (I assume AMD64) build fleet.
Found that link from this SiFive page:
This board has a Dual-core U74 (RV64GC) 64-bit SoC @ 1.5 GHz with 2MB L2 cache. The U74 claims 2.5 DMIPS/MHz.
So, roughly 1/4 as fast as an RPi 4B, given the roughly half DMIPS/MHz and half the cores?
That, of course, is a really rough guess, ignoring lots of potential variables.
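For what it's worth, the arithmetic behind that guess, with the caveat that the Cortex-A72 DMIPS/MHz figure (~5) is a commonly cited estimate rather than an official number:

```python
# Back-of-envelope throughput comparison using the figures quoted above.
def dmips(cores, mhz, dmips_per_mhz):
    return cores * mhz * dmips_per_mhz

beaglev = dmips(cores=2, mhz=1500, dmips_per_mhz=2.5)   # U74 claimed figure
pi4b    = dmips(cores=4, mhz=1500, dmips_per_mhz=5.0)   # assumed A72 figure

print(beaglev, pi4b, beaglev / pi4b)  # 7500.0 30000.0 0.25
```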
It's more MHz and a better uarch than the cores in a Pi 3, so should outperform even the newer Pi 3+ on tasks that don't use NEON and don't use more than 2 cores.
The hardware might be there to do some good audio processing, but how it integrates with the OS is something I'm not experienced with.
Does supporting audio mean that these GPIOs can be used as analog-to-digital converters? There are home automation applications where reading a voltage in an ethernet connected device is a good fit but from what I've seen in a raspberry that requires extra hardware connected to the IO ports.
I just play with electronics for side projects but stick to digital for everything, so I'm wondering what I'm missing out on.
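From what I've seen, the usual answer is an external SPI ADC such as the MCP3008 (the chip choice and reference voltage here are just examples). The SPI transfer itself needs real hardware; the conversion step afterwards is simple:

```python
# The Pi's GPIOs are digital-only, so analog reads go through an external ADC.
def mcp3008_to_volts(raw, vref=3.3):
    """Convert a 10-bit MCP3008 reading (0-1023) to volts."""
    if not 0 <= raw <= 1023:
        raise ValueError("MCP3008 returns 10-bit values")
    return raw * vref / 1023

print(mcp3008_to_volts(512))  # mid-scale reading, ~1.65 V
```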
But the work I needed to put into that to get a motor controller working was way more than for a RPi4b, since libraries already exist for the Pi versus needing to re-write them for the Odroid board. It would have been cool to try it with an open-hardware board as well. But it's not just quadcopters: a lot of other projects like rovers or position control with multiple steppers on different axes benefit from extra PWM, and doing it in software can lead to too much jitter, so the hardware timers are necessary.
If you don't care about it being decent, you can just use the PWM channels like the RPi does.
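The jitter point is easy to demonstrate without any hardware: a sleep-based software PWM loop is at the mercy of the OS scheduler. A sketch that measures how far the achieved period drifts from a nominal 1 kHz tick (results vary wildly with load and OS):

```python
import time

def measure_period_jitter(period_s=0.001, cycles=200):
    """Sleep for `period_s` repeatedly and report the worst deviation."""
    periods = []
    t = time.perf_counter()
    for _ in range(cycles):
        time.sleep(period_s)
        now = time.perf_counter()
        periods.append(now - t)
        t = now
    return max(abs(p - period_s) for p in periods)

print(f"worst-case deviation: {measure_period_jitter() * 1e6:.0f} us")
```

On a loaded desktop this routinely reaches hundreds of microseconds, which is why motor control wants hardware timers.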
I know the RISC-V ISA is, but I thought the SiFive designs were proprietary.
As I said elsewhere, SiFive CPUs are just as closed as Arm ones, you just pay a royalty to SiFive instead of Arm.
They have source code files for their 'Freedom' core designs up on GitHub under an Apache 2.0 license. These can be directly used to evaluate their designs as FPGA soft-cores.
Anything that uses the ARM ISA on the other hand requires a (costly) licence from ARM.
SiFive has its own proprietary *implementation*, but that's not required, and there are many free open-source implementations.
There's a _huge_ gap between academic implementations and ones that are actually worth shipping. I can see RISC-V winning quite handily for microcontrollers. But on anything bigger than that...
The main difference is the WD cores are 32 bit, no MMU, probably DTIM rather than cache (not sure).
The guts of the dual-issue decode, register file, and pipeline are there, which is the hard part. The other bits could be relatively easily added by the open source community, probably cribbing components from RocketChip.
You need to give it a few years before you write it off. It's significant that most CS/EE students are now learning the RISC-V architecture.
This video from SiFive clearly explains the RISC-V ecosystem and how SiFive is involved.
The marketing page for the BeagleV specifically says "Open Hardware Design", but I agree that they probably didn't mean Open Hardware beyond just the PCB layout or something similar.
It would be very surprising if they released the detailed schematics for the SiFive cores. Until there's some kind of common micro-fab standard where every major university can have a legit semiconductor fab for small scale operations, giving people the design doesn't really do much.
I really wish that someone would work on the problem of making affordable, small-scale semiconductor fabrication possible on a reasonably modern node (<= 32nm). It's a hard problem... but everyone in the world being dependent on a few large fabs is also a hard problem.
Still, it is a step forward.
For something more open at the RTL level, I'd look at Precursor (https://www.crowdsupply.com/sutajio-kosagi/precursor) right now.
Speaking as someone at Beagle, we see this board as an important step to more openness in the ecosystem, especially helping software developers improve the state of open source for RISC-V. It is also just a really cool board. Beagle will do more to try to get more openness at the RTL-level moving forward, perhaps even with FPGA boards at an interim step. The shuttle services are starting to make releasing a new chip design in reasonably modern nodes more possible.
So this board has an open source CPU: https://www.sifive.com/boards/hifive1-rev-b
This does not: https://www.sifive.com/boards/hifive-unmatched
Although, I get the feeling they will open source design on a rolling basis. Pure speculation.
Can they? Even with an IOMMU?
EDIT: I don't think so.
Or proprietary designs that achieve similar functionality such as Hex Five and WorldGuard.
Am I right?
Random googling suggests that SiFive have partnered with PowerVR for the GPU, which might even enable Vulkan support, but I suppose this SoC is not one of those?
I hope they will, but I'll believe it when I see it. They've been extremely allergic to open source in the past.
Imagination is also creating a new open-source GPU driver to provide a complete, up-streamed open-source kernel and user-mode driver stack to support Vulkan® and OpenGL® ES within the Mesa framework. It will be openly developed with intermediate milestones visible to the open-source community and a complete open-source Linux driver will be delivered by Q2 2022. Imagination will work with RIOS to run the open-source GPU driver on the PicoRio open-source platform.
I agree it seems like quite a change of heart and I definitely won't be holding my breath.
Having reverse engineered a bit of the drivers, I think it's because they culturally think that all of their value add is in the software. Patents have expired on the TBDR fixed function hardware blocks. The rest is just a combo of a little RISC core that does job dispatch (Programmable Data Sequencer in their parlance), and a cluster of SMT barrel scheduled cores (used to be called USSE in the SGX days, not sure now) that do the heavy lifting wrt shaders that don't really have any secret sauce AFAICT.
The value add is all in the software stack where they run a full little ukernel on the main GPU cores, and optimizing the shit out of the software that runs on those cores from their pretty clever compiler.
I bet they think that if they open source the drivers, that's giving away the one thing that makes PowerVR GPUs special in the first place.
If an IMG person reads this: y'all are wrong about that last piece. Your company is dying without opening the drivers, and you'll be able to control the hardware/software co-design in a way that nobody else can even if you give away the software. You'll have to keep doing work to have new hardware available and stay ahead of the curve, but that's true anyway and is the sign of a healthy business. Sure beats withering away as your patents expire.
Is there anything about RISC-V that is "better" simply because it is a later design than others? Is it likely to evolve faster because it is open or more modern?
A lot of it is because it is newer, and the designers have learned from previous architectures. It is a relatively clean and straightforward instruction set, designed to be easily and efficiently implemented.
There's not anything that is super crazy revolutionary, in contrast to the (still vaporware) Mill CPU architecture.
> Is it likely to evolve faster because it is open or more modern?
They have a good extension mechanism that allows relatively clean additions to the instruction set. Some of the recent ones like the vector extension aren't finalized yet. Anyone can propose their own extension. Historically, ARM might work with their most important customers to implement an extension, but good luck getting their attention if you're not already paying them millions per year.
They have at least secured a decent patent portfolio, particularly on the belt.
It would also be super-nice if they released at least a Copper or Tin core (low-end) that can be synthesized to an FPGA for people to try out.
So, use cases like Western Digital, where they can quit paying ARM a percentage of every hard drive they sell, for example.
As for technical advantages, each RISCV vendor has their own choice of how to implement, so it's hard to say anything broad that applies to all RISCV implementations. The Berkeley BOOM project is hitting really good DMIPS/MHz numbers. LowRISC has some interesting memory tagging and "minion core" ideas, etc.
Edit: I left out perhaps the most important reason RISCV has a lot of hype. They've been successful getting first class support from the Linux kernel maintainers.
The percentage is very small, though. So this argument only works for very high volume use cases, which is why the RISC-V eval boards are currently far more expensive than comparable ARMs.
Do WD do their own silicon yet, or do they just buy the parts?
WD said they were finishing taping out their first production design about a year and a half ago, so I assume they have started shipping them at this point, but it's hard to find info.
Also, having things open means the supply chain can be more stable, with less chance of a single glitch in the system halting deliveries. This is driving a lot of interest in RISC-V right now.
Here's a RISC-V quick reference: https://www.cl.cam.ac.uk/teaching/1617/ECAD+Arch/files/docs/...
And ARM: http://users.ece.utexas.edu/~valvano/Volume1/QuickReferenceC...
The main difference is that RISC-V is a lot more modular, so it's going to be difficult to distribute binaries for, but more flexible if you're doing something completely vertical. Also, a lot of the modules bundle relatively common/easy instructions with niche/difficult ones. E.g. multiply with divide.
I don't think it'll be worse than ARM and it's decidedly better than x86.
There are SEVEN major revisions of ARMv8. Then there's v8-R, v8-M, and additional 32-bit variants of each instruction set in addition to both ARMv7 and ARMv6 which also still ship billions of chips per year. Oh, and under pressure from companies, ARM also allows custom instructions now. Those aren't just theoretical either -- Apple at least added a ton of custom matrix instructions to the M1.
For x86, supporting only semi-recent processors (2006 Core or greater) leaves you still checking for support for: SSE3, SSE4, SSE4.1, SSE4a, SSE4.2, SSE5, AVX, AVX2, AVX512, XOP, AES, SHA, TBM, ABM, BMI1, BMI2, F16C, ADX, CLMUL, FMA3, FMA4, LWP, SMX, TSX, RdRand, MPX, SGX, SME, and TME. That's 29 instruction sets and not all of them have use on both Intel and AMD chips.
RISC-V seems at least that cohesive. If you're shipping a general purpose CPU, you'll always have mul/div, compression, fusion (not actually instructions), privilege, single precision, double precision, bit manipulation, and probably a few others.
Where you'll run into mul/div missing or no floats are microcontrollers or "Larrabee"-style GPU cores. In all of those cases, you'll be coding to a very specific core, so that won't really matter.
Thankfully, we've had ways to specify and/or check these kinds of things for decades.
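For example, the usual runtime approach is to read the CPU's advertised feature flags and gate code paths on a required subset. A sketch using a made-up flags line in the style of /proc/cpuinfo:

```python
def has_features(flags_line, required):
    """Return True if every required feature appears in the flags string."""
    flags = set(flags_line.split())
    return required <= flags

# Example flags in the style of a Core-era x86 part (illustrative, not real).
example_flags = "fpu sse sse2 ssse3 sse4_1 sse4_2 avx"

print(has_features(example_flags, {"sse4_2", "avx"}))   # True
print(has_features(example_flags, {"avx2"}))            # False
```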
Find me a processor that supports SSE4 but not SSE3. That's the problem. With x86 you pretty much can say "we're targeting processors made after 2010" or whatever and that's that. You make one binary and it works.
RISC-V allows a combinatorial explosion of possible CPUs. You can have a CPU that supports extension X and not Y, but another one that supports Y and not X.
If you're on a general purpose PC/smartphone with packaged software then the OS vendor specifies a base set of extensions that everything must implement -- for Linux at the moment that is RV64IMAFDC aka RV64GC.
All of those extensions (except maybe A) are very generally useful and pervasive in code.
Some other extensions, such as the Vector extension, will provide significant benefits to applications that don't even know whether the system they are running on has them -- you'll just get dynamically linked to a library version that uses V or doesn't, as appropriate.
To take a very trivial example, on a system with V, every application will automatically use highly efficient (and also very short) V versions of memcpy, memcmp, memset, bzero, strlen, strcpy, strcmp and similar.
The same will apply to libraries for bignums, BLAS, jpeg and other media types, and many others.
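As an aside, those ISA strings decode mechanically: a base ISA plus single-letter extension flags, with "g" abbreviating imafd (plus Zicsr/Zifencei in newer spec revisions, which this toy parser ignores). A minimal sketch:

```python
def parse_isa(isa):
    """Split a simple RISC-V ISA string into base and extension letters."""
    isa = isa.lower()
    if not isa.startswith(("rv32", "rv64", "rv128")):
        raise ValueError("expected rv32/rv64/rv128 prefix")
    base = isa[:5] if isa.startswith("rv128") else isa[:4]
    exts = []
    for ch in isa[len(base):]:
        exts.append("imafd" if ch == "g" else ch)  # expand the G shorthand
    return base, "".join(exts)

print(parse_isa("rv64imafdc"))  # ('rv64', 'imafdc')
print(parse_isa("rv64gc"))      # ('rv64', 'imafdc')
```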
In fact, even if you claim to implement the M extension (both multiply and divide), all that is necessary is that programs using those opcodes work -- but that can be via trap and emulate. If your overall system can run binaries with multiply and divide instructions in them then you can claim the M extension. Whether the performance is adequate is between you and your customers. Note that there are also vast differences in performance between different hardware implementations of multiply and divide, with 32-64 cycle latencies not unheard of.
The same applies for implementing a subset of other extensions in hardware. You can implement the uncommon ones in the trap handler if that will meet your customer's performance needs.
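What trap-and-emulate for M amounts to in practice: the illegal-instruction handler decodes the mul/div and runs a software routine, something like this shift-and-add multiply. Correct, but many instructions per operation, which is exactly why conformance says nothing about performance:

```python
def soft_mul(a, b, xlen=64):
    """Shift-and-add multiply, truncated to xlen bits like RISC-V MUL."""
    mask = (1 << xlen) - 1
    a &= mask
    b &= mask
    result = 0
    while b:
        if b & 1:
            result = (result + a) & mask
        a = (a << 1) & mask  # shift multiplicand, discarding overflow bits
        b >>= 1
    return result

# Matches the hardware result: the low 64 bits of the full product.
print(soft_mul(123456789, 987654321) == (123456789 * 987654321) % (1 << 64))
```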
> Note that there are also vast differences in performance between different hardware implementations of multiply and divide, with 32-64 cycle latencies not unheard of.
Yes that is exactly the problem.
The worst (but conforming) hardware implementations are barely better than the best software emulations -- and maybe worse if the software emulation is running on a wide OoO CPU.
But the board has an HDMI output; however, the description doesn't specify the display processor / GPU functionality, or even whether it's just a simple framebuffer. There are specifications for video processing, but I get the impression this is for camera/video input, not output.
"The initial pilot run of BeagleV will use the Vision DSP hardware as a graphics processor, allowing a full graphical desktop environment under Fedora. Following hardware runs will include an unspecified model of Imagine GPU as well."
0 - https://www.cnx-software.com/2021/01/13/beaglev-powerful-ope...
* $119 for 4GB of RAM
* $149 for 8GB of RAM
But the early version apparently is only the 8GB variant.
> BeagleV™ is the first affordable RISC-V board designed to run Linux. Based on the RISC-V architecture, BeagleV™ pushes open-source to the next level and gives developers more freedom and power to innovate and design industry-leading solutions with an affordable introductory price at $149.
edit: CNX Software has some more info  - it's $119, still a far cry from a Raspberry Pi or Orange Pi
We don't know whether the BeagleV price includes those necessary items or not but either way coming within a factor of 2 of Raspberry Pi price is pretty impressive at this stage. Up until now you've paid $3000 for a 1.5 GHz RISC-V setup with HDMI and USB etc, if you've got one actually in your hands (HiFive Unleashed plus MicroSemi HiFive Expansion board), or $665 for one that will start delivery in March (HiFive Unmatched). There is also the $499 Icicle which is quad core but only single-issue (like the HiFive Unleashed) and only 600 MHz.
Prices are plummeting.
Still, the page loads in ~3secs on my 2012 i7-3770K with 100Mbps...
Would be super awesome to do it with riscv
- hardkernel boards
- Nvidia Jetson AGX Xavier (8 core + 512 CUDA core volta GPU, PCIe 4.0)
- Honeycomb LX2 (16 core, quad SFP+ (for 10G optics), PCIe 4.0)
I purchased an AGX Xavier for doing some self driving robot projects.
Moreover, it looks to me like these cheap boards are not lacking features. What would you want in a premium version that is not present in the current models?
At the end of the day most of these are just toys and it's hard to justify spending $850 on a device that is less capable than a $500 mini form factor OEM box if you're doing something serious.
But it is a little pricey.
To be more efficient and compete with mainstream processors, a move to 10/12nm is required.
Convert everything to <300 KB progressive JPEG and use SVG for graphics.
Can someone let me know, are there other use cases for such devices besides prototyping/hobby?
Only the ISA itself is open, (fast) implementations are not. (for any commercial-grade one instead of academic projects)
Presumably, they will release schematics and such when the device enters manufacturing, if they follow through on actually being Open Hardware.
As others mention, the processor core designs probably won't be open.
Says the early boards in March won’t have a GPU but the production boards in September will have an Imagination Technologies GPU.
And apparently they will make an OpenGL driver for it...
Apple has been able to make great progress moving from Intel x86 to their own custom ARM M1 processor. The way I understand it, RISC-V may allow similar advances.