Preface:
On ordinary servers, the disks are still mostly mechanical hard drives, and mechanical disks frequently develop bad sectors. Bad sectors degrade overall read/write performance and, in severe cases, lead to data loss, at which point the faulty disk has to be replaced. The following walkthrough simulates the entire replacement process. The environment is as follows:
System: Red Hat Enterprise Linux Server release 7.9 (Maipo).
Disks: three 5 GB test disks.
The system's disk partition layout is as follows:
[root@node1 ~]# lsblk
NAME                MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                   8:0    0  100G  0 disk
├─sda1                8:1    0    1G  0 part /boot
└─sda2                8:2    0   99G  0 part
  ├─rhel_node1-root 253:0    0 91.1G  0 lvm  /
  └─rhel_node1-swap 253:1    0  7.9G  0 lvm  [SWAP]
sdb                   8:16   0    5G  0 disk
sdc                   8:32   0    5G  0 disk
sdd                   8:48   0    5G  0 disk
sr0                  11:0    1 1024M  0 rom
[root@node1 data]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.9 (Maipo)
[root@node1 data]#
sdb, sdc, and sdd are the three 5 GB disks used for the simulation; sda is the system disk.
1. Create the PV and VG
[root@node1 ~]# pvcreate /dev/sd{b..c}
  Physical volume "/dev/sdb" successfully created.
  Physical volume "/dev/sdc" successfully created.
[root@node1 ~]# vgcreate vg01 /dev/sdb /dev/sdc
  Volume group "vg01" successfully created
[root@node1 ~]#
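At this point the new PV and VG can also be sanity-checked with the standard pvs and vgs summaries (a minimal optional check, not part of the original transcript; output omitted):

# Confirm both disks joined vg01 as physical volumes
pvs
# Confirm the volume group's total and free size
vgs vg01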
2. Create the LV
[root@node1 ~]# lvcreate -n lv01 -L 6G vg01
  Logical volume "lv01" created.
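Since each PV is only 5 GiB, a 6 GiB LV cannot fit on a single PV and necessarily spans both sdb and sdc. The devices column of lvs shows which PVs back each segment (a quick optional check; output omitted):

# Show the physical volumes backing lv01's segments
lvs -o +devices vg01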
3. Format and mount the LV
[root@node1 ~]# mkfs.ext4 /dev/mapper/vg01-lv01
mke2fs 1.42.9 (28-Dec-2013)
Discarding device blocks: done
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
393216 inodes, 1572864 blocks
78643 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1610612736
48 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
	32768, 98304, 163840, 229376, 294912, 819200, 884736

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

[root@node1 ~]# mkdir /data
mkdir: cannot create directory '/data': File exists
[root@node1 ~]# mount /dev/mapper/vg01-lv01 /data
[  343.999156] EXT4-fs (dm-2): mounted filesystem with ordered data mode. Opts: (null)
[root@node1 ~]# df -h
Filesystem                   Size  Used Avail Use% Mounted on
devtmpfs                     3.9G     0  3.9G   0% /dev
tmpfs                        3.9G     0  3.9G   0% /dev/shm
tmpfs                        3.9G  8.5M  3.9G   1% /run
tmpfs                        3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/mapper/rhel_node1-root   92G   26G   66G  28% /
/dev/sda1                   1014M  137M  878M  14% /boot
tmpfs                        799M     0  799M   0% /run/user/0
/dev/mapper/vg01-lv01        5.8G   24M  5.5G   1% /data
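The mount above does not survive a reboot. To make it persistent, an /etc/fstab entry along the following lines would typically be added (a sketch; adjust the mount options to your needs):

# Example /etc/fstab line for the new logical volume
/dev/mapper/vg01-lv01  /data  ext4  defaults  0 0

# Then verify the entry mounts cleanly without rebooting
mount -a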
4. Check PV usage
[root@node1 ~]# pvdisplay /dev/sdb
  --- Physical volume ---
  PV Name               /dev/sdb
  VG Name               vg01
  PV Size               5.00 GiB / not usable 0
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              1280
  Free PE               0
  Allocated PE          1280
  PV UUID               U97Zlh-om7D-zCcD-o94u-h1qA-DFqe-3fXglk

[root@node1 ~]# pvdisplay /dev/sdc
  --- Physical volume ---
  PV Name               /dev/sdc
  VG Name               vg01
  PV Size               5.00 GiB / not usable 0
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              1280
  Free PE               1024
  Allocated PE          256
  PV UUID               ujyD0m-6nmn-tPd6-lYH0-tuTD-jXir-MiNoTS
5. Upload a test file (simulated here with dd)
[root@node1 ~]# dd if=/dev/zero of=/data/disk01.img count=0 bs=1 seek=5G
0+0 records in
0+0 records out
0 bytes (0 B) copied, 0.00214895 s, 0.0 kB/s
[root@node1 ~]# cd /data
[root@node1 data]# ls -ltrh
total 16K
drwx------ 2 root root  16K Apr 30 13:23 lost+found
-rw-r--r-- 1 root root 5.0G Apr 30 13:27 disk01.img
[root@node1 data]# pvdisplay /dev/sdb
  --- Physical volume ---
  PV Name               /dev/sdb
  VG Name               vg01
  PV Size               5.00 GiB / not usable 0
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              1280
  Free PE               0
  Allocated PE          1280
  PV UUID               U97Zlh-om7D-zCcD-o94u-h1qA-DFqe-3fXglk

[root@node1 data]# pvdisplay /dev/sdc
  --- Physical volume ---
  PV Name               /dev/sdc
  VG Name               vg01
  PV Size               5.00 GiB / not usable 0
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              1280
  Free PE               1024
  Allocated PE          256
  PV UUID               ujyD0m-6nmn-tPd6-lYH0-tuTD-jXir-MiNoTS
As the output above shows, the space on /dev/sdb has been fully allocated (0 free PEs).
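Note that dd with count=0 and seek=5G writes no data at all; it only sets the file length, so disk01.img is a sparse file. Its 5.0G apparent size is therefore much larger than the blocks it actually consumes, which can be confirmed like this:

# Apparent size (5.0G) vs. actual block usage (near zero for a sparse file)
ls -lh /data/disk01.img
du -h /data/disk01.img

The PE allocation reported by pvdisplay comes from the lvcreate in step 2, not from the file itself.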
6. Verify data integrity with md5sum
Note: if the storage holds important data, record md5sum checksums for verification. Since this test uses a file generated by dd, this step is optional.
[root@node1 data]# md5sum disk01.img
ec4bcc8776ea04479b786e063a9ace45  disk01.img
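For real data with many files, a common pattern (a sketch, not part of the original test) is to record checksums for the whole tree before the migration and verify them all afterwards:

# Record a checksum for every file under /data before the migration
find /data -type f -exec md5sum {} + > /root/data.md5
# After the migration, verify every file against the recorded list
md5sum -c /root/data.md5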
7. Disk failure and adding the new disk
Assume that the sdb disk is now showing problems and needs to be replaced with sdd:
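In practice, you would first confirm the disk is actually failing, for example from its SMART data; assuming the smartmontools package is installed, the checks look like this:

# Overall SMART health verdict for the suspect disk
smartctl -H /dev/sdb
# Full attribute and error-log listing (look for reallocated/pending sectors)
smartctl -a /dev/sdb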
7.1 Add the sdd disk
[root@node1 data]# pvcreate /dev/sdd
  Physical volume "/dev/sdd" successfully created.
[root@node1 data]# vgextend vg01 /dev/sdd
  Volume group "vg01" successfully extended
[root@node1 data]#
7.2 Verify all PVs in the VG
[root@node1 data]# vgdisplay -v vg01
  --- Volume group ---
  VG Name               vg01
  System ID
  Format                lvm2
  Metadata Areas        3
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                3
  Act PV                3
  VG Size               15.00 GiB
  PE Size               4.00 MiB
  Total PE              3840
  Alloc PE / Size       1536 / 6.00 GiB
  Free  PE / Size       2304 / 9.00 GiB
  VG UUID               50q2Cv-WLcJ-iaT1-5sCf-YZ2n-IttG-81w0lt

  --- Logical volume ---
  LV Path                /dev/vg01/lv01
  LV Name                lv01
  VG Name                vg01
  LV UUID                lxFdDV-CIpa-di8u-Clhb-92dl-EXzG-C1ZGh2
  LV Write Access        read/write
  LV Creation host, time node1, 2024-04-30 13:23:24 +0800
  LV Status              available
  # open                 1
  LV Size                6.00 GiB
  Current LE             1536
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:2

  --- Physical volumes ---
  PV Name               /dev/sdb
  PV UUID               U97Zlh-om7D-zCcD-o94u-h1qA-DFqe-3fXglk
  PV Status             allocatable
  Total PE / Free PE    1280 / 0

  PV Name               /dev/sdc
  PV UUID               ujyD0m-6nmn-tPd6-lYH0-tuTD-jXir-MiNoTS
  PV Status             allocatable
  Total PE / Free PE    1280 / 1024

  PV Name               /dev/sdd
  PV UUID               HY2XfL-1ui3-4BSs-HtD3-PCYL-7Z01-eJsWpx
  PV Status             allocatable
  Total PE / Free PE    1280 / 1280
As shown above, the VG (vg01) now contains three disks: sdb, sdc, and sdd. Disk sdd is not yet in use (all 1280 PEs free).
7.3 Migrate the data
[root@node1 data]# pvmove /dev/sdb /dev/sdd
  /dev/sdb: Moved: 0.23%
  /dev/sdb: Moved: 50.39%
  /dev/sdb: Moved: 73.75%
  /dev/sdb: Moved: 100.00%
[root@node1 data]# pvdisplay /dev/sdb
After the migration above, sdb no longer holds any data; all extents have been moved to sdd, whose space is now fully allocated.
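On production-sized disks pvmove can take hours. It can be run in the background and polled, and an interrupted move can be resumed (a sketch using standard pvmove options):

# Run the move in the background, reporting progress every 10 seconds
pvmove -b -i 10 /dev/sdb /dev/sdd
# Poll segment locations at any time
lvs -a -o +devices vg01
# If the move was interrupted, running pvmove with no arguments resumes it
pvmove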
7.4 Remove the failed disk
[root@node1 data]# vgreduce vg01 /dev/sdb
  Removed "/dev/sdb" from volume group "vg01"
[root@node1 data]#
[root@node1 data]# pvremove /dev/sdb
  Labels on physical volume "/dev/sdb" successfully wiped.
[root@node1 data]#
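With LVM no longer referencing sdb, the drive can be physically pulled. On a hot-swap backplane it is common to first detach the device from the kernel via its sysfs delete node (double-check the device name before running this):

# Tell the kernel to remove the sdb device before unplugging it
echo 1 > /sys/block/sdb/device/delete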
7.5 Verify data integrity
[root@node1 data]# md5sum disk01.img
ec4bcc8776ea04479b786e063a9ace45  disk01.img
The checksum computed after the replacement is identical to the one computed before, which confirms the data is intact and the test succeeded.