Ubuntu thinks btrfs disk is full but it's not

Since websites outside China are often unreachable, the content is copied here directly. Original link

Btrfs is different from traditional filesystems. It is not just a layer that translates filenames into offsets on a block device; it is more a layer that combines a traditional filesystem with LVM and RAID. And like LVM, it has the concept of allocating space on the underlying device without actually using it for files.

A traditional filesystem is divided into files and free space. It is easy to calculate how much space is used or free:

Btrfs combines LVM, RAID and a filesystem. The drive is divided into subvolumes, each dynamically sized and replicated:

The diagram shows the partition being divided into two subvolumes and metadata. One of the subvolumes is duplicated (RAID1), so there are two copies of every file on the device. Now we not only have the concept of how much space is free at the filesystem layer, but also how much space is free at the block layer (drive partition) below it. Space is also taken up by metadata.

When considering free space in Btrfs, we have to clarify which free space we are talking about - the block layer, or the file layer? At the block layer, data is allocated in 1GB chunks, so the values are quite coarse, and might not bear any relation to the amount of space that the user can actually use. At the file layer, it is impossible to report the amount of free space because the amount of space depends on how it is used. In the above example, a file stored on the replicated subvolume @raid1 will take up twice as much space as the same file stored on the @home subvolume. Snapshots only store copies of files that have been subsequently modified. There is no longer a 1-1 mapping between a file as the user sees it, and a file as stored on the drive.

You can check the free space at the block layer with btrfs filesystem show / and the free space at the subvolume layer with btrfs filesystem df /
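For example (the root mount point / is taken from the text above; elevated privileges may be needed):

    # free space at the block/allocation layer
    sudo btrfs filesystem show /
    # allocated vs. used space per type (Data, System, Metadata)
    sudo btrfs filesystem df /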


For this mounted subvolume, df reports a drive of total size 38G, with 12G used, and 13M free. 100% of the available space has been used. Remember that the total size 38G is divided between different subvolumes and metadata - it is not exclusive to this subvolume.

Each line shows the total space and the used space for a different data type and replication type. The values shown are data stored rather than raw bytes on the drive, so if you're using RAID-1 or RAID-10 subvolumes, the amount of raw storage used is double the values you can see here.

The first column shows the type of item being stored (Data, System, Metadata). The second column shows whether a single copy of each item is stored (single), or whether two copies of each item are stored (DUP). Two copies are used for sensitive data, so there is a backup if one copy is corrupted. For DUP lines, the used value has to be doubled to get the amount of space used on the actual drive (because btrfs fs df reports data stored, not drive space used). The third and fourth columns show the total and used space. There is no free column, since the amount of "free space" is dependent on how it is used.

The thing that stands out about this drive is that you have 9.47GiB of space allocated for ordinary files of which you have used 9.46GiB - this is why you are getting No space left on device errors. You have 13.88GiB of space allocated for duplicated metadata, of which you have used 1.13GiB. Since this metadata is DUP duplicated, it means that 27.76GiB of space has been allocated on the actual drive, of which you have used 2.26GiB. Hence 25.5GiB of the drive is not being used, but at the same time is not available for files to be stored in. This is the "Btrfs huge metadata allocated" problem. To try and correct this, run btrfs balance start -m /. The -m parameter tells btrfs to only re-balance metadata.
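As a sketch of that command (the root mount point is assumed from the surrounding text; run with root privileges):

    # rewrite/compact only the metadata chunks so unused allocation is returned to the pool
    sudo btrfs balance start -m /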

A similar problem is running out of metadata space. If the output had shown that the metadata were actually full (used value close to total), then the solution would be to try and free up almost empty (<5% used) data blocks using the command btrfs balance start -dusage=5 /. These free blocks could then be reused to store metadata.
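A minimal sketch of that command, again assuming the root mount point:

    # rewrite data chunks that are less than 5% used, freeing them for reuse (e.g. by metadata)
    sudo btrfs balance start -dusage=5 /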

For more details see the Btrfs FAQs:

Fixing Btrfs Filesystem Full Problems

Since the original article's address can no longer be opened, the content of Google's cached snapshot is pasted here directly. Original link

Clear space now

If you have historical snapshots, the quickest way to get space back so that you can look at the filesystem and apply better fixes and cleanups is to drop the oldest historical snapshots.

Two things to note:

  • If you have historical snapshots as described here, delete the oldest ones first, and wait (see below). However, if you just deleted 100GB and replaced it with another 100GB which failed to fully write, leaving you out of space, all your snapshots will have to be deleted to clear the blocks of the old file you just removed to make space for the new one. (Actually, if you know exactly which file it is, you can go into all your snapshots and delete it manually, but in the common case it will be multiple files and you won't know which ones, so you'll have to drop all your snapshots before you get the space back.)
  • After deleting snapshots, it can take a minute or more for btrfs fi show to show the space freed. Do not be too impatient: run btrfs fi show in a loop and see if the number changes every minute. If it does not, carry on and delete other snapshots or look at rebalancing.

Note that even in the cases described below, you may have to clear one snapshot or more to make space before btrfs balance can run. As a corollary, btrfs can get into states where it is hard to get out of the 'no space' state it's in. As a result, even if you don't need snapshots, keeping at least one around to free up space, should you hit that mis-feature/bug, can be handy.

Is your filesystem really full? Mis-balanced data chunks

Look at filesystem show output:

Only about 50% of the space is used (441 out of 865GB), but the device is 88% full (751 out of 865GB). Unfortunately it's not uncommon for a btrfs device to fill up this way, due to the fact that it does not rebalance chunks (3.18+ has started freeing empty chunks, which is a step in the right direction).

In the case above, because the filesystem is only 55% full, I can ask balance to rewrite all chunks that have less than 55% space used. Rebalancing those blocks actually means taking the data in those blocks, and putting it in fuller blocks so that you end up being able to free the less used blocks.
This means the bigger the -dusage value, the more work balance will have to do (ie taking fuller and fuller blocks and trying to free them up by putting their data elsewhere). Also, if your FS is 55% full, using -dusage=55 is ok, but there isn't a 1 to 1 correlation and you'll likely be ok with a smaller dusage number, so start small and ramp up as needed.
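A sketch of the balance command this paragraph describes; the mount point /mnt/btrfs_pool1 is taken from the progress output below:

    # rewrite all data chunks that are less than 55% used; start lower and ramp up if needed
    btrfs balance start -dusage=55 /mnt/btrfs_pool1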

    # Follow the progress along with:
    legolas:~# while :; do btrfs balance status -v /mnt/btrfs_pool1; sleep 60; done
    Balance on '/mnt/btrfs_pool1' is running
    10 out of about 315 chunks balanced (22 considered), 97% left
    Dumping filters: flags 0x1, state 0x1, force is off
      DATA (flags 0x2): balancing, usage=55
    Balance on '/mnt/btrfs_pool1' is running
    16 out of about 315 chunks balanced (28 considered), 95% left
    Dumping filters: flags 0x1, state 0x1, force is off
      DATA (flags 0x2): balancing, usage=55
    (...)

When it's over, the filesystem now looks like this (note devid used is now 513GB instead of 751GB):

Before you ask, yes, btrfs should do this for you on its own, but currently doesn't as of 3.14.

Is your filesystem really full? Misbalanced metadata

Unfortunately btrfs has another failure case where the metadata space can fill up. When this happens, even though you have data space left, no new files will be writeable.

In the example below, you can see Metadata DUP 9.5GB out of 10GB. Btrfs keeps 0.5GB for itself, so in this case the metadata is effectively full and prevents new writes.

One suggested way is to force a full rebalance, and in the example below you can see metadata go back down to 7.39GB after it's done. Yes, there again, it would be nice if btrfs did this on its own. It will one day (some of it is now in 3.18).

Sometimes, just using -dusage=0 is enough to rebalance metadata (this is now done automatically in 3.18 and above), but if it's not enough, you'll have to increase the number.
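As a sketch (the same mount point as in the earlier example is assumed):

    # free completely empty data chunks; increase the usage value if this is not enough
    btrfs balance start -dusage=0 /mnt/btrfs_pool1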

Balance cannot run because the filesystem is full

One trick to get around this is to add a device (even a USB key will do) to your btrfs filesystem. This should allow balance to start, and then you can remove the device with btrfs device delete when the balance is finished.
It's also been said on the list that kernel 3.14 can fix some balancing issues that older kernels can't, so give that a shot if your kernel is old.
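A hedged sketch of that trick; /dev/sdX and the mount point are placeholders:

    # temporarily add a spare device (even a USB key) so balance has room to work
    btrfs device add /dev/sdX /mnt/btrfs_pool1
    # run the balance now that unallocated space is available
    btrfs balance start -dusage=5 /mnt/btrfs_pool1
    # remove the temporary device once the balance has finished
    btrfs device delete /dev/sdX /mnt/btrfs_pool1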

Note, it's even possible for a filesystem to be full in a way that you cannot even delete snapshots to free space. This shows how you would work around it:


Misc Balance Resources

For more info, please read:

Shutter, a screenshot editing tool for Ubuntu

Having grown used to QQ's screenshot tool on Windows and Skitch on the Mac, which make it very convenient to capture a window and then edit and annotate the image with arrows, rectangles and other tools, I searched for quite a while on Ubuntu before finding Shutter, an image annotation tool with roughly the same features.

Installation command
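Presumably something along these lines (assuming Shutter is in the standard Ubuntu archives):

    sudo apt-get install shutter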

You can also install it by searching for Shutter in the Ubuntu Software Center.

For basic usage, refer to the screenshots below:
(screenshot: Shutter_Main)
(screenshot: Shutter_Edit)

ERROR: Non-debuggable application installed on the target device. Please re-install the debuggable version!

Note: what is described in this article requires that all the conditions below are fully met, that NDK_DEBUG=1 was already used when building with the NDK, and that gdbserver and gdb.setup were already included when the APK was packaged.

Recently, while debugging with the NDK, I ran into a rather tricky problem: the error "ERROR: Non-debuggable application installed on the target device. Please re-install the debuggable version!" kept being reported, as shown below:

Asking ndk-gdb to print its detailed execution trace gives the following:
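(A verbose trace like this can be requested with ndk-gdb's --verbose flag, for example:)

    ndk-gdb --verbose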

Several odd things can be observed. For example, the failure is reported as "ERROR: Could not find gdbserver binary under ./libs/", whereas normally the path should be something like "./libs/armeabi-v7a" or "./libs/armeabi"; and the "Compatible device ABI:" part should never be empty. This shows that the CPU/ABI-related data in the APK was not read correctly. Also note this line: "WARNING: APP_PLATFORM android-19 is larger than android:minSdkVersion 14 in ./AndroidManifest.xml".

Fortunately, Google provides a Python version of ndk-gdb, so we can use Spyder on Ubuntu 15.04 to step through it and see where exactly things go wrong.
The configuration is as follows:
(screenshot: Configure_Spyder)
Set the directory to be debugged to the directory in the project where ndk-gdb is normally run:
(screenshot: Configure_Path)
Stepping through it revealed the problem shown below:
(screenshot: ndk_gdb_py_bug)

In other words, when the version set in AndroidManifest.xml ("<uses-sdk android:minSdkVersion="14" />") and the version set in Application.mk ("APP_PLATFORM := android-19") do not match, the APP_ABIS value that is returned is not the normal form such as "[armeabi, armeabi-v7a]"; instead, the full text of the warning message is returned, even though it is only a warning. In other words, this is a bug in Google's script.

Once the cause is understood, the fix is straightforward: just change the two settings so that they are consistent.
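A sketch of what "consistent" means here; the API level 19 below is only an example, the point is that the two values match:

    # Application.mk
    APP_PLATFORM := android-19

    <!-- AndroidManifest.xml -->
    <uses-sdk android:minSdkVersion="19" />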

Resetting the username and password in the TortoiseSVN client

The first time you use TortoiseSVN to check out from a server, you are asked for a username and password, and below the input boxes there is an option to save the authentication data. If you check it, you will not have to type the username and password again every time.
However, if the username or password is later changed on the server, the next checkout fails, and the client is not smart about it: after the error it does not pop up the username/password dialog again so you can update them, and I searched for quite a while without finding where to change them.
In the end, I found two solutions:
Solution 1: in the TortoiseSVN settings dialog, select "Saved Data" and click the "Clear" button on the "Authentication data" row to clear the saved authentication data; on the next checkout the username/password dialog will pop up again.
If Solution 1 does not work, you can use Solution 2:
TortoiseSVN caches authentication data such as the username and password in the following directory on the client filesystem:
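On Windows this is normally (assuming a default TortoiseSVN/Subversion setup):

    %APPDATA%\Subversion\auth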

Delete all the folders under auth, connect to the remote server again and do a checkout, and the dialog will appear!

Reference link: Resetting the username and password in the TortoiseSVN client

Viewing the exported function list of a .so on Linux

1. View only the exported functions
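A sketch of the usual commands (libxxx.so is a placeholder file name):

    # dump only the dynamic symbol table (the exported symbols)
    objdump -T libxxx.so
    # or, equivalently
    nm -D libxxx.so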

2. View more detailed binary information
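For example, with readelf (same placeholder file name):

    # all ELF information: headers, sections, dynamic section, symbols
    readelf -a libxxx.so
    # only the symbol tables
    readelf -s libxxx.so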

Note that when using readelf to read the function list, long function names may be truncated in the output; if you only need the exported functions, the objdump command is recommended.

3. View the linked libraries
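A sketch, again with a placeholder file name; note that ldd only works for libraries built for the host, so the NEEDED entries are shown as an alternative:

    # libraries the shared object is linked against
    ldd libxxx.so
    # alternative that also works for cross-compiled .so files
    readelf -d libxxx.so | grep NEEDED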

The objdump, readelf, nm and ldd commands cannot be used in Cygwin

Recently, while writing NDK applications on Windows, I found that commands such as objdump, readelf, nm and ldd were not available in a default Cygwin installation when I tried to inspect the compiled .so files. The solution is to download the Cygwin installer, run it, and add the binutils package in the package selection, as shown below (note: selecting just the Devel version is enough; normally we do not need the variant that carries debug information).
(screenshot: cygwin_binutils)

Using Git from the Windows command line when the server requires SSH key login

When using Git on Windows, most people are used to working with TortoiseGit. But in some cases you have to run commands in Git Bash, which is more troublesome, especially when the server requires SSH key login. By default, TortoiseGit automatically loads the .PPK file generated by puttygen to handle authentication, but in Git Bash you can only authenticate using the Linux-style configuration.

1. Type "puttygen" in Git Bash, as shown below:
(screenshot: puttygen)
The following graphical interface pops up:
(screenshot: putty_key_generator)
If you already have a PPK file, click "Load" to load it; if you do not, click "Generate" to create one. Once it is loaded, click the "Conversions" menu to export the key, as shown below:
(screenshot: export_openssh_key)

The exported file has no extension; name it "id_rsa". Note that it must be exactly this name, with no extension.

2. Copy the file "id_rsa" into the ".ssh" directory under the current user's home directory. To see where the home directory is, type "echo ~" in Git Bash. The final result looks like this:
(screenshot: id_rsa)
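In Git Bash this amounts to something like the following (the source path is a placeholder):

    # copy the exported key into the current user's .ssh directory
    cp /path/to/exported/id_rsa ~/.ssh/id_rsa
    # OpenSSH refuses private keys that other users can read
    chmod 600 ~/.ssh/id_rsa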

3. Continue running your commands in Git Bash, and it will just work!

Splitting a directory of an existing Git project into a standalone submodule

After a project has been under development for some time, you may want to split one of its directories out into a separate project; the subtree feature can be used to do this separation.

The concrete steps are as follows (this can only be done on the Git Bash command line):
1. First, cd into the root directory of the project to be processed and split the folder out into its own branch, as sketched below.
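A sketch of the split command this step refers to; the placeholders are explained in the note below:

    # collect the history of the folder into a new branch
    git subtree split -P <name-of-folder> -b <name-of-new-branch>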

Note
<name-of-folder> must be given relative to the root directory of the project, in a format like "project/hello/world".
Keep the name of <name-of-new-branch> as simple as possible.

2. Create a new repo (project) (leave the original project's folder and create the new project somewhere else), as sketched below.
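Roughly, assuming the placeholders described in the note below ("new-project" is just an example name):

    mkdir new-project && cd new-project
    git init
    # pull the branch that was just split out of the original repository
    git pull </path/to/big-repo> <name-of-new-branch>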

Note
</path/to/big-repo> is the absolute path of the root directory we need to pull data from, in a format like "D:\Source\Project".
<name-of-new-branch> is the name of the branch we just split out.

3. Add the submodule (under the root directory of the original project), as sketched below.
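A sketch of this step; <url-of-new-repo> is a placeholder for wherever the new repository lives:

    # back in the root of the original project: drop the old folder first
    git rm -r <name-of-folder>
    git commit -m "Replace <name-of-folder> with a submodule"
    # then reference the new repository as a submodule at the same path
    git submodule add <url-of-new-repo> <name-of-folder>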

You can see that a file named ".gitmodules" has been added under the root directory of the original Git project, with contents like the following:
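For illustration, with the same placeholders as above it would look roughly like this:

    [submodule "<name-of-folder>"]
        path = <name-of-folder>
        url = <url-of-new-repo>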

fatal error C1189: #error : The C++ Standard Library forbids macroizing keywords. Enable warning C4005 to find the forbidden macro.

Recently, while building an old VS2008 project with VS2015, the build failed with: fatal error C1189: #error: The C++ Standard Library forbids macroizing keywords. Enable warning C4005 to find the forbidden macro.

Solution: add "_XKEYCHECK_H" to the project's "Preprocessor Definitions".
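Equivalently (an assumption, not from the original post), the same macro can be passed straight to the compiler on the command line; the source file name here is a placeholder:

    REM same effect as the project-level "Preprocessor Definitions" entry
    cl /D_XKEYCHECK_H /EHsc your_source.cpp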