So it seems to have a bunch of scripts that import data into its database from a variety of sources (including cloud services), and it provides a search interface to navigate that historical data. It also has a lot of under-the-hood stuff about replication, and it's entirely self-hosted.
This is sad because the project looks interesting (it's not the usual quick kludge or vaporware), but the bad or nonexistent documentation is a terrible disservice to the project.
> And here it is, renamed!
> Perkeep (née Camlistore) is a set of open source formats, protocols, and software for modeling, storing, searching, sharing and synchronizing data in the post-PC era. Data may be files or objects, tweets or 5TB videos, and you can access it via a phone, browser or FUSE filesystem.
I checked archive.org and that text has been there for a couple of months at least. Looks interesting, I've been in the market for this kind of self hosted backup/replication/tagging/search thing.
Not for me. Almost all of what you copied there seems like intrinsically meaningless fluff. "a set of formats"? "a set of protocols"? "software"! Of course it's software! "for modeling"? "in the post-PC era"?
I would have preferred GP's ~"imports data/files from cloud services into a local database and provides a search interface for it". Because THAT tells me what it does, not any of the other words you posted.
Someone made a really well done cartoon mascot for it, but the copy isn't _helpful_.
It does not.
The first sentence on the site ("Perkeep [...] is a set of open source formats...") describes literally what the thing is, but not at all what it _does_.
Not to slam on these cats, because marketing copy is _hard_. For project collaborators, or open-source dorks who live in this kind of world anyway, the sentence on the homepage is probably perfectly descriptive.
But I agree with my GP post. Reading the homepage I had no idea what Perkeep actually did.
> modeling, storing, searching, sharing and synchronizing [...] files or objects, tweets or 5TB videos, and you can access it via a phone, browser or FUSE filesystem
I mean, maybe it could have been more explicit, or they could have added more detail, but having this as the first sentence is WAY better than most of the 'professional' landing pages for startups that get posted here. 'Harmonizes synergy and increases your ability to wow your target space with your aspirations' - now that's meaningless.
You: “Well I’m sorry it wasn’t clear to you but it was clear to me and better than these other things and here’s why it should have been clear to you.”
If someone tells you something is unclear to them, arguing about it doesn’t change the fact that it wasn’t clear to them.
> files or objects, tweets or 5TB videos
means rsync, curl, and Twitter API integration. It obviously does more than this, since I can throw something together that does that in a few hours. Where is the list of everything it supports?!?
That should be front and center.
"Your data should be alive in 80 years, especially if you are." To which you might add, "We're here to help you make sure that's what happens". Then follow that with the "Things Perkeep believes ..." section.
After that, the mission is clear and how it works is clear (though many people might have -no idea- what 'open source' is good for). Only then (IMO) can you get away with going all technical on them!
It really depends on the audience for the product; targeting regular users requires more thought on the UX etc., which almost invariably means more funding.
When someone says "permanently keep your stuff, for life" do they mean some sort of pay-once eternal backup, like permanent.org? Something censorship-resistant, like Freenet? Something peer-to-peer and distributed-ledger based like Filecoin? Backing up data locked up in cloud services? Converting obscure file formats into ones with more longevity? Bypassing defunct DRM? Activism against civil forfeiture?
It looks more like "wealthy Googlers helping out a friend with a short term gig" than "financing a professional product".
Perhaps you could donate your time as a UX specialist to help these folks who are more versed in backend systems and libraries.
I considered it for my use case (archiving and deduping dozens of terabytes / millions of files on several personal NAS boxes) and really wanted to use it, as I find some of its ideas pretty cool, but in the end I decided it would be simpler to just write something from scratch instead. It took less time to write it than it would have taken Perkeep to ingest my data.
"Overview: The original motivation and background for why Perkeep exists and what one might use it for."
And there I found a great description.
it’s like git but for all your stuff.
why would you want to use it? you probably wouldn’t, quite yet. but it’s an interesting attempt at doing something a little more sophisticated than a plain file system.
- The data store messes with my files (yes, there's a FUSE mount, but eh, having to adopt a special data store always makes me feel weird, since it usually comes with performance and compatibility implications; there are also many other block-chopping data stores, for example in IPFS).
- The last time I checked there was no way to delete something. This is okay for tweets, I guess, but if I commit a 3h video I later realize is just too large, or a photo I end up not really wanting around - well, oops.
I have huge respect for Brad Fitzpatrick in general, of course, and especially for creating this. Recent velocity has seemingly been relatively low, however: https://news.ycombinator.com/item?id=22161812
> We have no plans to abandon it.
There are some bits (permanodes and claims) for adding metadata to objects (filename, timestamp, geolocation and other attributes, I think even arbitrary JSON) and for authentication/sharing. There are a few really cool bits around modularity: blob servers can be composed over the network - you can transparently split your blob storage over multiple machines, databases and cloud services, set up replication, maybe encryption (unclear to me if it works or not).
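If it helps, here's roughly how permanodes and claims fit together, as Go structs. This is a sketch from memory of the schema docs, so field names may be off, and I'm glossing over the fact that real permanode/claim blobs are signed - check doc/schema/ in the repo for the authoritative format:

    package main

    import (
        "encoding/json"
        "fmt"
        "time"
    )

    // A permanode is a tiny blob whose only job is to be a stable,
    // content-addressed anchor that mutable attributes can attach to.
    type Permanode struct {
        CamliVersion int    `json:"camliVersion"`
        CamliType    string `json:"camliType"` // "permanode"
        Random       string `json:"random"`    // makes the blob (and hence its hash) unique
    }

    // A claim sets/adds/deletes one attribute on a permanode. The
    // permanode's current state is the fold of all its claims by date.
    type Claim struct {
        CamliVersion int    `json:"camliVersion"`
        CamliType    string `json:"camliType"` // "claim"
        ClaimDate    string `json:"claimDate"`
        ClaimType    string `json:"claimType"` // e.g. "set-attribute"
        Permanode    string `json:"permaNode"` // blobref of the target permanode
        Attribute    string `json:"attribute"` // e.g. "title", "tag", "camliContent"
        Value        string `json:"value"`
    }

    func main() {
        c := Claim{
            CamliVersion: 1,
            CamliType:    "claim",
            ClaimDate:    time.Now().UTC().Format(time.RFC3339),
            ClaimType:    "set-attribute",
            Permanode:    "sha224-aaaa", // placeholder blobref
            Attribute:    "title",
            Value:        "holiday photos 2019",
        }
        b, _ := json.MarshalIndent(c, "", "  ")
        fmt.Println(string(b))
    }

Since claims are themselves just blobs, metadata is append-only and replicates/synchronizes the same way file data does.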
Importing data from different services is not really its core competency, at least not yet. It can ingest anything you can put on your file system, and there are importers for a few third-party services (see https://github.com/perkeep/perkeep/tree/master/pkg/importer), but that's about it.
One thing that I'm still trying to figure out is, if you do happen to know: how does it handle data deduplication (if at all)? How about redundancy and backups? I've been glancing over the docs and I do see mention of replication to another Perkeep instance but that's not quite what I'm looking for.
Then there is also some logic to chunk large objects into small pieces or "blobs". These small chunks are what the storage layer actually works with, rather than the original unlimited-length blobs that the user uploaded. Chunking helps to space-efficiently store multiple versions of the same large file (say, a large VM image): the system only needs to store the set of unique chunks, which can be much smaller than N full but slightly-different copies of the same file. But personally I find that it degrades performance to the point of being unusable for my use case of multi-TB, multi-million-file storage of immutable media files. If chunking/snapshotting/versioning is important for your use case, I'd look more towards backup-flavored tools like restic, which share many of these storage ideas with Perkeep.
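For anyone who hasn't met content-defined chunking before, here's the general idea as a toy Go sketch. This shows the technique, not Perkeep's actual rollsum implementation; the window size and boundary mask are arbitrary choices for illustration:

    package main

    import (
        "bufio"
        "crypto/sha256"
        "fmt"
        "io"
        "math/bits"
        "os"
    )

    const (
        window = 64        // rolling-hash window, in bytes
        mask   = 1<<16 - 1 // cutpoint fires ~once per 64KB on random data
    )

    // tbl maps each byte value to a pseudo-random 32-bit hash contribution.
    var tbl [256]uint32

    func init() {
        s := uint32(2463534242)
        for i := range tbl {
            s ^= s << 13 // xorshift32 PRNG
            s ^= s >> 17
            s ^= s << 5
            tbl[i] = s
        }
    }

    // chunks splits r at content-defined boundaries: a rolling hash over
    // the last `window` bytes decides where to cut, so data is split the
    // same way regardless of where it sits in the file.
    func chunks(r io.Reader, emit func([]byte)) error {
        br := bufio.NewReader(r)
        var h uint32
        var buf []byte
        for {
            b, err := br.ReadByte()
            if err == io.EOF {
                break
            } else if err != nil {
                return err
            }
            buf = append(buf, b)
            h = bits.RotateLeft32(h, 1) ^ tbl[b] // byte enters the window
            if n := len(buf); n > window {
                // the departing byte's contribution has been rotated
                // `window` times since it entered, so undo exactly that
                h ^= bits.RotateLeft32(tbl[buf[n-1-window]], window%32)
            }
            if h&mask == mask {
                emit(buf)
                buf, h = nil, 0
            }
        }
        if len(buf) > 0 {
            emit(buf)
        }
        return nil
    }

    func main() {
        n, total := 0, 0
        err := chunks(os.Stdin, func(c []byte) {
            n++
            total += len(c)
            // Store each chunk under its hash; identical chunks across
            // files or versions are then stored only once.
            _ = sha256.Sum256(c)
        })
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Printf("%d bytes -> %d chunks\n", total, n)
    }

Because cutpoints depend only on nearby bytes, an insertion early in a file only perturbs the chunks around the edit; everything downstream still hashes to the same blobs.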
Redundancy and backup are handled by configuring the storage layer ("blobserver") to do it. Perkeep's blobservers are composable - you can have leaf servers storing your blobs, say, directly in a local filesystem directory, on a remote server over sftp, or in an S3 bucket, and you can compose them via special virtual blobserver implementations into bigger and more powerful systems. One such virtual blobserver is https://github.com/perkeep/perkeep/blob/master/pkg/blobserve... - it takes the addresses of 2+ other blobservers and replicates your reads and writes to them.
You give it the addresses of source and destination blobservers; it enumerates the blobs in both and copies the source blobs missing from the destination into the destination server.
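Content addressing makes that logic almost trivially simple. Here's the whole idea as a Go sketch - the Storage interface is made up for illustration; Perkeep's real blobserver interfaces are richer and stream their enumerations rather than returning a map:

    package blobsync

    import "io"

    type Storage interface {
        Enumerate() (map[string]bool, error) // set of blobrefs present
        Fetch(ref string) (io.ReadCloser, error)
        Receive(ref string, r io.Reader) error
    }

    // Sync copies every blob present in src but missing from dst. Blobs
    // are immutable and content-addressed, so "missing from dst" is the
    // only state that matters; there are no conflicts to resolve.
    func Sync(src, dst Storage) error {
        have, err := dst.Enumerate()
        if err != nil {
            return err
        }
        want, err := src.Enumerate()
        if err != nil {
            return err
        }
        for ref := range want {
            if have[ref] {
                continue
            }
            rc, err := src.Fetch(ref)
            if err != nil {
                return err
            }
            err = dst.Receive(ref, rc)
            rc.Close()
            if err != nil {
                return err
            }
        }
        return nil
    }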
I feel like there's some room for improvement.
You're not going to prevent silent bitrot no matter what modern technology you use, so take a proactive approach to detecting it instead, before it turns into data loss.
However, I found that I had a lot of data to back up, and it was actually cheaper and less tedious to get 4TB USB hard drives for ~£100 each and plug them into an old, otherwise-defunct EeePC 901 (with the added advantage that it has a battery, in case the power goes out).
My main PC has an SSH private key that lets it access a restricted shell on the EeePC that only allows it to give it files to store. That way, if a hacker breaks into my internet-facing machine, all they can do to my backups is fill the disc up, not delete or access anything. I have a process on the EeePC that regularly scrubs the par2 files, and the hard drives (I have two so far) are formatted with BTRFS, so given all the data is regularly read by the scrub process, that should notice any drive failures. My main PC uses ZFS, so I have safety in variety.
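Conceptually the restricted shell is just a forced command that can write new files but never touch old ones. A toy Go version of that idea (hypothetical paths, not my actual setup):

    package main

    import (
        "fmt"
        "io"
        "os"
        "time"
    )

    func main() {
        // Every upload gets a fresh timestamped name.
        name := fmt.Sprintf("/backups/incoming/%d.bin", time.Now().UnixNano())
        // O_EXCL guarantees we create a new file, never clobber an old one.
        f, err := os.OpenFile(name, os.O_WRONLY|os.O_CREATE|os.O_EXCL, 0o400)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer f.Close()
        // The client just streams the file over stdin; there is no way
        // to ask this program to read, list or delete anything.
        if _, err := io.Copy(f, os.Stdin); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }

Pin something like that with command="..." in authorized_keys and the client's key can add files but never read, overwrite or delete them.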
I also have an off-site backup stored on an encrypted USB hard drive in my locked locker at work, which isn't updated as regularly. My internet connection is slow, so I use the rsync --only-write-batch trick, and then carry the large update file to the backup on my laptop.
What could possibly go wrong?
Something that's easy to overlook with larger drives is that their rebuild times are worse.
"Shucking" drives throws the economics way off even if it means having to do some hacks and losing warranty... Usually the drives that come in enclosures are smaller.
A lot of ways to be efficient with money start by having or using a lot of it:)
If you're going for 3.5" drives, then yes I can well believe that the sweet spot is with slightly larger drives, especially if you take enclosures into account. I did the calculations for work a while back for shoving hard drives into something from https://www.45drives.com/ and it seemed that getting the largest drives possible was the best price/capacity option.
You need to keep them spinning on a regular basis, and replace them as they begin to fail.
This is usually a precursor to SMART errors in the near future, but unfortunately it can still result in corrupted replication and corrupted backups, since your backups would be backing up the rotten (corrupt) data.
I've witnessed this happen on both Seagate and WD drives, on systems with ECC memory. I can only suspect this is due to HDD manufacturers wanting to reduce their error rates and RMA rates: it may happen when the ECC bits in a sector are corrupt, making bitrot undetectable. Instead of returning an error (and being grounds for an RMA replacement), the HDD firmware may choose to return non-integrity-checked data, which would usually be correct but could also be corrupt.
It's why filesystems like ZFS and btrfs are so important.
My rough estimate, based on my own experiences and those on r/DataHoarder, is that 1 hardware sector (4KB for most drives post-2011) will silently corrupt per 10 TB-year. Such corruption can be detected by checksumming filesystems like ZFS.
Usually the whole sector is garbage, which points away from cosmic-ray bitflips (those would flip individual bits, not wipe out an entire 4KB sector).
External flash storage like USB sticks and SD cards fares far worse. In my own experience, silent corruption occurs at more like 1 file per device per 2-3 years, irrespective of capacity. I've had USB sticks and SD cards return bogus data without errors many times; I only know because I checksum everything, otherwise I would have thought the artefacts in my videos or photos came with the source.
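"Checksum everything" is cheap to do even without ZFS/btrfs. A minimal Go sketch of the kind of manifest I mean (paths and layout are made up; the two-space output format just happens to be what sha256sum -c accepts):

    package main

    import (
        "crypto/sha256"
        "fmt"
        "io"
        "io/fs"
        "os"
        "path/filepath"
    )

    func hashFile(path string) (string, error) {
        f, err := os.Open(path)
        if err != nil {
            return "", err
        }
        defer f.Close()
        h := sha256.New()
        if _, err := io.Copy(h, f); err != nil {
            return "", err
        }
        return fmt.Sprintf("%x", h.Sum(nil)), nil
    }

    func main() {
        root := "."
        if len(os.Args) > 1 {
            root = os.Args[1]
        }
        // Emit "hash  path" lines; diff two runs (or feed the output to
        // sha256sum -c later) to spot files whose bytes changed without
        // ever being touched.
        filepath.WalkDir(root, func(p string, d fs.DirEntry, err error) error {
            if err != nil || d.IsDir() {
                return err
            }
            sum, err := hashFile(p)
            if err != nil {
                return err
            }
            fmt.Printf("%s  %s\n", sum, p)
            return nil
        })
    }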
If, in 2020, you are not using ZFS or btrfs for long term archival, you are doing something wrong.
ext4, NTFS, APFS, etc may be tried and tested, but they have no checksumming, and that is a problem.
However, at work, I have backed up ~200TB of data to a large server with RAID-6 and ext4, storing the backups as large .tar files with par2 checksums and recovery data, and regularly scrubbing the par2 data. I have yet to see any corruption whatsoever. These are enterprise-grade hard drives. This is the strongest evidence I have yet seen that the enterprise-grade drives are actually better than the consumer-grade ones, rather than just being re-badged.
I should really get around to converting the main drive to btrfs, but this works well.
Much like the other commenters I'm no expert on the topic, but I think you'd have to be incredibly unlucky to have a mechanical failure on 3 drives at once from lack of use, especially if they were from different manufacturing batches.
Now get Carbonite (not affiliated, I just like the unlimited-space backup) and have it back up your key laptop folders (Docs, Images, Desktop, etc.) and your L-drive.
I don't remember how much it costs ($6-10?/mo), but I have stopped worrying since then. I get a monthly .tib file for my system and an "instant" backup for everything else. So even if my laptop is stolen, I can set up a new laptop (the .tib may be useless, but I can open it to see what software I had and take the config files/folders over to the new system).
I don't remember how much the disk was but it didn't hurt my wallet, and the ~$100 (?) per year on Carbonite (had CrashPlan) definitely doesn't hurt my wallet.
If you do all these things, I think that's about the best you can do with optical media.
There might be a better medium available nowadays but if I seriously wanted to have a piece of data fifty years from now that's where I'd start.
There are LOTS of failure cases with any cloud provider, especially one with a crazy policy of deleting data in just 45 days.
There is at least 1 reddit post a month about how someone lost data with Backblaze. Their reddit support rep is never able to do anything about it, other than "sorry, we will take on board your feedback".
For comparison, if your Google Drive subscription lapses, Google stops you from uploading but will not delete your data.
A good lesson was learned but it hurt. The upload took weeks to complete.
Not much to show off for 7 years of development, so I'm pretty skeptical of its future. But some of the ideas are pretty cool, like composable blob servers.
I added a single 2.7GB Ubuntu ISO - it took 5 minutes to ingest (on a tmpfs!) and turned it into 45k(!) little chunks, which works out to roughly 60KB per chunk - wtf is up with that? At this rate, indexing my multiple terabytes of data is going to take days, and I don't even want to think how much seek time it's going to need with its repo on a spinning HDD.
Ingest time scales linearly with file size because it needs to compute the blobref (which is a configurable hash) for every blob (chunk, as you call it). Splitting into blobs/chunks is necessary because a stated goal of the project is to have snapshots by default when modifications are made. Doing snapshots/versioning without chunking would be very inefficient.
But Perkeep's focus, as I understand it, is more on managing an unstructured collection of immutable things (e.g. a photo archive) than on being a tool to back up your mutable filesystem. So I'm not sure chunking the sh*t out of my files was a good design decision; it really kills performance on large files, especially on spinning disks.
It seems like with the recent wave of news about social media migrations (reddit, facebook, twitter, twitch, tiktok), people are hopefully starting to get more and more warmed up to the idea of protocolization of their social data.
But most of the projects doing it are still just too immature. Solid, Perkeep, Blockstack, etc. just seem like vaporware.
Seems like the only serious projects in use are Matrix, Urbit, and ActivityPub/Mastodon. But I haven't checked in with the decentralization scene in a while.
To add to your list, there is also Secure Scuttlebutt, which has had a decent userbase over the past few years, and Planetary, which is a funded iOS client for it.
I think in general they all suffer from the chicken-and-egg problem and will need some reason for enough people to switch to be able to build a userbase. There isn't really any "novel hook" like TikTok, Twitter, WhatsApp, Instagram, Snapchat, etc. have had in the past.
So I'll write my app outside of SSB, hopefully in a way that's mostly compatible, and possibly with future integration.
I may also toy with an SSB-like protocol myself, as the fundamentals of SSB are a work of art imo. I really enjoy what gossip brings to the table, and how SSB builds P2P on human-to-human relationships.
The same thing happened with remoteStorage. There's initially a flurry of proof-of-concept apps, but no commercial quality killer apps to attract daily users.
AFAIK the only cloud storage protocol really used for app development is Google Drive. GDrive got successful by making a great cloud storage solution first; then, once everyone had one, app developers started making apps for it.
It probably needs someone new to adopt it.
For popular content, you'll see it has many seeds, instead of likes