I think this is possibly related to the other bug I posted, about how the CLI parameters are evaluated and used to create the mounts. I now think this is just a string/path parsing issue, combined with how that path is passed to the stat/statfs call used to query the remaining disk space.
Today I tried to make a backup copy of a file that I had just moved from one place to another (which always works fine), but this time, after copying it, I needed to alter it, so I did a cut-and-paste copy onto the same drive. Normally that would say the file exists and ask to rename it, but instead it failed with not enough disk space. Yet literally a few seconds earlier I had pasted the same file from another path into the same location. I then tried the same with a file less than 16MB, and that was OK.
It is reporting the free disk space as 16MB, when at last look it should be more like 1.7TB. The file I had created was only about 250MB, so that should have worked hundreds of times over.
At first I thought something had changed on the remote, perhaps due to an update.
As sshfs is essentially just scp/sftp (system dependent), and these days scp is usually sftp under the hood with the scp protocol being deprecated, it should behave the same when connected to the same path on the same server.
So after what I am about to describe happened, I tried the same thing over plain sftp, expecting it to fail in the same way, which would have confirmed what I had believed to date: that this was the ACLs, or XDEV, or one of the other security mechanisms like AppArmor etc.
However it did not. And I think that just gave me a clue that it is related to the other issue I raised.
So I've been fighting for weeks with what I thought were ACL issues. I could not see a consistent pattern and kept fiddling with the ACLs in various attempts to fix it.
BTW, I am also aware of the limitations when copying across remote devices, but here they are all the same device.
But it seemed that whatever I did, certain paths were not writeable over sshfs on one mount yet were writeable on another mount, even though both really have access to the same folder and the same tree.
Then I realised the difference here is the initial mount location (mount locations don't seem to chroot - is that even a thing with sshfs?), so in the desktop file manager you can just navigate down towards the drive root from either mount and then back up one branch or the other.
One mount has a terminating slash for the reasons in #312, and the other must not have a terminating slash because it is a user's home: the actual path resolves based on the user name, which obviously changes. If you try to mount user@host:/ you mount the absolute root, not the user's home.
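For reference, the two mount forms I'm comparing look roughly like this (the host name and local mount points are placeholders):

```
# Mount A with a trailing slash on the remote path (see #312):
sshfs user@nas:/vol/A/ /mnt/vol/A

# Mount the user's home by giving no remote path at all, so the server
# resolves the per-user home directory:
sshfs user@nas: /mnt/home

# A bare "/" would mount the absolute root, not the user's home:
sshfs user@nas:/ /mnt/root
```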
All the mounts are in reality rooted on a RAM disk, so any attempt to write below them would end up writing files to the RAM disk. Obviously the RAM disk is small, as it holds a few symlinks and nothing else. Herein lies the issue.
If some query resolves to the root and asks for the disk stats, you get the RAM disk rather than the volume mounted on it.
And I tested that using `stat -f` on each of /vol/A, /vol/home and /vol, each with and without the trailing slash, which showed just 4096 blocks (16MB) for both forms of the latter.
I put a [gist of the script here](https://gist.github.com/the-moog/06fe4f5bde0c182ac9be31a5e267267d) (really one I just wrote, not the one I actually used, but it shows what happens).
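In essence the script just loops over the paths and prints what `stat -f` reports for each form; a rough sketch of the same idea (not the exact script):

```
#!/bin/sh
# Report filesystem stats for each path, with and without a trailing slash.
for p in /vol/A /vol/home /vol; do
    for s in "" "/"; do
        echo "== stat -f ${p}${s} =="
        stat -f "${p}${s}"
    done
done
```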
I have probably got the references muddled in my head as I write this from memory, but that does not really matter, as the test case is easy to replicate, as presented above.
If I get time at the weekend I will try to build an sshfs dev environment and simply add that test case.
So which path is actually A, B or C does not matter; I think it goes like this:
We have three paths in play.
Two are being worked with on one mount, /mnt/vol/A/, e.g. /mnt/vol/A/deep/B.
And one is on the other mount, /vol/homes/user/C (no slash).
/mnt/vol is actually a RAM disk, and /A and /home are symlinks to trees elsewhere on the real disk.
(BTW, I did not invent this layout, blame QNAP.)
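If anyone wants to replicate the layout without a QNAP, something along these lines should behave the same (all paths here are made up):

```
# A tiny tmpfs stands in for the QNAP RAM disk, with symlinks
# pointing into the real data volume.
sudo mkdir -p /srv/fake_root
sudo mount -t tmpfs -o size=16m tmpfs /srv/fake_root
sudo ln -s /data/A    /srv/fake_root/A
sudo ln -s /data/home /srv/fake_root/home

# Exporting /srv/fake_root over sshfs should then show the same symptom:
# statfs on the mount root reports the 16MB tmpfs, not the large volume
# the symlinks point into.
```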
I could always copy between one pair, but never between one of those and the remaining path over sshfs. SMB is fine.
Now I find I can't even copy between the remaining path and itself on one mount, yet I can through another.
Which eliminates all possible ACL issues, as the two mount commands differ by only a few characters:
```
cp A C      # fail
cp A B      # ok
cp B C      # ok
cp C C.bak  # fail
```
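The failures line up with whichever mount ends up reporting the RAM disk's free space. A quick way to see what each mount believes, from the client side (mount points are placeholders, matching the example mounts above):

```
# Compare what each sshfs mount reports; the one whose remote path
# resolves to the RAM disk root shows only ~16MB free.
df -h /mnt/vol/A /mnt/home
```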
A, B and C are all paths on the same physical disk, and all commands are run as the same user at both ends. The only difference is the absolute path of the remote mount and whether it has a trailing slash.
As I said, I will try and find time to check this properly with a repeatable test, as I may have them swapped in my head, but that does not really matter.
Could you try doing this with rclone instead (https://github.com/rclone/rclone) - rclone mount also uses FUSE under the hood - and see if the results are similar / different / what you expect?
It will help us figure out whether the problem lies in sshfs, libfuse, sftp or something else.
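A minimal comparison might look like this (assuming an sftp remote named `nas` has already been set up with rclone config; the remote name and mount points are placeholders):

```
# Mount the same two remote paths via rclone instead of sshfs.
rclone mount nas:/vol/A/ /mnt/rclone_A --daemon
rclone mount nas:        /mnt/rclone_home --daemon

# Then compare the reported free space with the sshfs mounts:
df -h /mnt/rclone_A /mnt/rclone_home
```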