

To preface: though I've read a fair amount about Mercurial, I can count on my fingers the number of times I've actually used a Mercurial repo, and I've only ever used largefiles as a toy, so I am very much a Mercurial newbie. There is a chance I may get something wrong here.

The way the author talks about Mercurial as not having this problem makes me think they're talking about something related but subtly different. In particular, AFAICT, Mercurial requires the exact same thing as what you're pointing out: if you want to completely disable use of largefiles, you still have to run `hg lfconvert` at some point. However, my impression is that largefiles is basically the only game in town, and Mercurial LFS, if anything, is meant to be even more like Git LFS, to the point of being compatible with it.

The "one-way door," as I understand the article to be describing it, is the additional layer of centralization that Git LFS brings. In particular, it's pretty annoying to have to always spin up a full HTTPS server just to have access to your files. There is now always a single source of truth that is inconvenient to work around, even when you might still have the files lying around on a bunch of different hard drives or USB drives.

Whereas with git-annex, it is true that without rewriting history, even if you disable git-annex moving forward, you'll still have symlinks in your git history. However, as long as you still have your exact binary files sitting around somewhere, you can always import them back on the fly. So, e.g., to move away from git-annex you can just commit the binary files directly to your git repository going forward, and then copy them back in from a separate folder whenever you check out an old commit.

But perhaps I'm interpreting the author incorrectly, in which case it's hard for me to see how any solution for large files in git would let you move back, without rewriting history, to an ordinary git repository with no large file support.
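The git-annex escape hatch described above can be sketched in a self-contained way. All file and tag names here are made up for illustration; a real annexed file is a symlink into `.git/annex/objects/...`, which is mimicked below with a dangling symlink in a throwaway repo:

```shell
set -e
workdir=$(mktemp -d)
cd "$workdir"
git init -q repo && cd repo
git config user.email test@example.com && git config user.name test

# Old history: a dangling symlink standing in for the annexed binary.
ln -s .git/annex/objects/xyz big.bin
git add big.bin && git commit -qm "annexed placeholder"
git tag old

# Keep a plain copy of the real content outside the repo.
printf 'real binary content' > "$workdir/backup-big.bin"

# Going forward: replace the symlink with the real file and commit it.
rm big.bin && cp "$workdir/backup-big.bin" big.bin
git add big.bin && git commit -qm "store binary directly"

# Revisit the old commit: the symlink is back, so re-import the copy.
git checkout -q old
rm big.bin && cp "$workdir/backup-big.bin" big.bin
cat big.bin   # prints: real binary content
```

The point is that no history rewriting happens anywhere: old commits keep their symlinks, and the plain copies make them usable again whenever needed.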
