龙空技术网

Restoring a Virtual Machine / Cloud Host Image File to a Bare-Metal Server's Physical Disk



Preface

This article uses a qcow2 virtual machine / cloud host image as the example; the method is similar for other image formats.

I. Preparing the qcow2 Image

1. Enter the RomOS

Here, "RomOS" refers to an in-memory operating system similar to a LiveCD or WinPE.

Shut down the virtual machine or physical machine whose disk is to be exported as a qcow2 image, reboot it via PXE, CD-ROM, or USB, and boot from a LiveCD image.

2. Install supporting packages in the RomOS

Install the supporting tool packages:

yum install -y libguestfs-tools libguestfs-devel lvm2 cloud-utils nfs-utils
3. Export the disk data to a qcow2 image file
qemu-img convert -p -c -O qcow2 /dev/sda /images/image.qcow2 

Note: the directory holding the image should be mounted from remote storage such as NFS or Samba, rather than the local disk being exported.
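A minimal sketch of the export workflow above, assuming a hypothetical NFS export at `192.168.8.100:/export/images` (server address and path are illustrative; adjust them to your environment):

```shell
# Hypothetical NFS server and export path -- replace with your own.
mkdir -p /images
mount -t nfs 192.168.8.100:/export/images /images

# Export the physical disk to a compressed (-c) qcow2 image, showing progress (-p)
qemu-img convert -p -c -O qcow2 /dev/sda /images/image.qcow2

# Sanity-check the result before leaving the RomOS
qemu-img info /images/image.qcow2
```

These commands require a live NFS server and a disk to export, so they are shown as a procedure fragment rather than a runnable script.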

II. Restoring an LVM-Partitioned Image

If the system inside the qcow2 image was partitioned with LVM, the adjustments below must follow LVM's layering (partition → PV → VG → LV → filesystem).

1. Restore the qcow2 image to the disk

# Wipe the existing partition information
dd if=/dev/zero of=/dev/sda bs=1k count=512
# Write the image back to the raw disk
qemu-img convert -f qcow2 -O raw images.qcow2 /dev/sda

Note: the directory holding the image should be mounted from remote storage such as NFS or Samba.

2. Mount the LVM partitions

# Activate the VG
PXE root@localhost:~ # vgscan
  Reading volume groups from cache.
  Found volume group "centos" using metadata type lvm2
PXE root@localhost:~ # vgchange -a y centos
  2 logical volume(s) in volume group "centos" now active
# Mount the LVM / partition at /mnt
PXE root@localhost:~ # mount /dev/centos/root /mnt
3. Adjust fstab

vim /mnt/etc/fstab

#
# /etc/fstab
# Created by anaconda on Fri Aug 12 18:23:10 2022
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root /                       xfs     defaults        0 0
# Comment out the /boot mount; keep only / and swap
#UUID=8abae70f-0a66-4ef1-aba3-32ba743b68d7 /boot                   xfs     defaults        0 0
/dev/mapper/centos-swap swap                    swap    defaults        0 0
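The /boot entry can also be commented out non-interactively, which helps when scripting the restore. A sketch run against a throwaway copy of the fstab shown above (the UUID is the one from the transcript and is only illustrative):

```shell
# Work on a temporary copy so the sketch is safe to run anywhere.
fstab=$(mktemp)
cat > "$fstab" <<'EOF'
/dev/mapper/centos-root /                       xfs     defaults        0 0
UUID=8abae70f-0a66-4ef1-aba3-32ba743b68d7 /boot                   xfs     defaults        0 0
/dev/mapper/centos-swap swap                    swap    defaults        0 0
EOF

# Prefix the /boot entry (and only that entry) with '#'
sed -i 's|^[^#].*[[:space:]]/boot[[:space:]].*|#&|' "$fstab"

grep '/boot' "$fstab"
```

On the real system, point the same `sed` at /mnt/etc/fstab instead of the temporary copy.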
4. Adjust the partition table

# Inspect the disk layout
PXE root@192.168.8.136:/data # lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda           8:0    0   100G  0 disk 
├─sda1        8:1    0     1G  0 part 
└─sda2        8:2    0    19G  0 part 
sr0          11:0    1   681M  0 rom  /run/initramfs/live
loop0         7:0    0 637.8M  1 loop 
loop1         7:1    0     5G  1 loop 
├─live-rw   253:0    0     5G  0 dm   /
└─live-base 253:1    0     5G  1 dm   
loop2         7:2    0   512M  0 loop 
└─live-rw   253:0    0     5G  0 dm   /
# Inspect the PVs
PXE root@192.168.8.136:/data # pvs
  PV         VG     Fmt  Attr PSize   PFree
  /dev/sda2  centos lvm2 a--  <19.00g    0 
# Activate the VG
PXE root@192.168.8.136:/data # vgchange -a y centos
  2 logical volume(s) in volume group "centos" now active
# Grow the partition to cover the new disk space
PXE root@localhost:~ # growpart /dev/sda 2
CHANGED: partition=2 start=2099200 old: size=39843840 end=41943040 new: size=207615967 end=209715167
PXE root@localhost:~ # lsblk
NAME            MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda               8:0    0   100G  0 disk 
├─sda1            8:1    0     1G  0 part 
└─sda2            8:2    0    99G  0 part 
  ├─centos-swap 253:2    0     2G  0 lvm  
  └─centos-root 253:3    0    17G  0 lvm  /mnt
sr0              11:0    1   681M  0 rom  /run/initramfs/live
loop0             7:0    0 637.8M  1 loop 
loop1             7:1    0     5G  1 loop 
├─live-rw       253:0    0     5G  0 dm   /
└─live-base     253:1    0     5G  1 dm   
loop2             7:2    0   512M  0 loop 
└─live-rw       253:0    0     5G  0 dm   /
# Resize the PV to use the grown partition
PXE root@localhost:~ # pvresize /dev/sda2
  Physical volume "/dev/sda2" changed
  1 physical volume(s) resized or updated / 0 physical volume(s) not resized
# Extend the LV, giving the remaining space to the / volume
PXE root@localhost:~ # lvextend -l 100%FREE /dev/centos/root
  Size of logical volume centos/root changed from <17.00 GiB (4351 extents) to 80.00 GiB (20480 extents).
  Logical volume centos/root successfully resized.

# Verify
PXE root@localhost:~ # lsblk
NAME            MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda               8:0    0   100G  0 disk 
├─sda1            8:1    0     1G  0 part 
└─sda2            8:2    0    99G  0 part 
  ├─centos-swap 253:2    0     2G  0 lvm  
  └─centos-root 253:3    0    80G  0 lvm  /mnt
sr0              11:0    1   681M  0 rom  /run/initramfs/live
loop0             7:0    0 637.8M  1 loop 
loop1             7:1    0     5G  1 loop 
├─live-rw       253:0    0     5G  0 dm   /
└─live-base     253:1    0     5G  1 dm   
loop2             7:2    0   512M  0 loop 
└─live-rw       253:0    0     5G  0 dm   /
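The sizes in growpart's `CHANGED:` line are counted in 512-byte sectors, which makes them hard to read at a glance. A small sketch converting the reported new size to GiB, using the exact line from the transcript as sample input:

```shell
# The CHANGED line printed by growpart, sizes in 512-byte sectors.
line='CHANGED: partition=2 start=2099200 old: size=39843840 end=41943040 new: size=207615967 end=209715167'

# Pull out the new size and convert sectors -> bytes -> GiB (integer floor).
new_sectors=$(echo "$line" | sed 's/.*new: size=\([0-9]*\) .*/\1/')
new_gib=$(( new_sectors * 512 / 1024 / 1024 / 1024 ))
echo "${new_gib} GiB"   # prints: 98 GiB (lsblk rounds this up to 99G)
```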
5. Reboot and verify

III. Restoring a Traditionally Partitioned Image

1. Restore the qcow2 image to the disk
# Wipe the existing partition information
dd if=/dev/zero of=/dev/sda bs=1k count=512
# Write the image back to the raw disk
qemu-img convert -f qcow2 -O raw images.qcow2 /dev/sda

Note: the directory holding the image should be mounted from remote storage such as NFS or Samba.

2. Adjust fstab

# Mount the restored / partition at /mnt
PXE root@localhost:/mnt # mount /dev/sda3 /mnt
PXE root@localhost:~ # df -h
Filesystem           Size  Used Avail Use% Mounted on
/dev/mapper/live-rw  4.9G  1.8G  3.2G  36% /
devtmpfs             2.0G     0  2.0G   0% /dev
tmpfs                2.0G     0  2.0G   0% /dev/shm
tmpfs                3.1G   20M  3.1G   1% /run
tmpfs                2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/sr0             681M  681M     0 100% /run/initramfs/live
tmpfs                394M     0  394M   0% /run/user/0
/dev/sda3             14G  1.1G   13G   8% /mnt
# Get the UUIDs of the restored partitions
PXE root@localhost:/mnt # blkid
/dev/sr0: UUID="2022-04-25-20-46-29-00" LABEL="Red Hat Enterprise Linux 7 x86_6" TYPE="iso9660" PTTYPE="dos" 
/dev/sda1: UUID="80adc934-a93b-44cb-99b3-51c20e0fa5e5" TYPE="xfs" 
/dev/sda2: UUID="d1a86e4e-58e6-48d2-9690-d5784481c7c4" TYPE="swap" 
/dev/sda3: UUID="627e7f32-a0c2-4e85-afaa-391abbf377b6" TYPE="xfs" 
/dev/loop0: TYPE="squashfs" 
/dev/loop1: LABEL="Anaconda" UUID="888d143e-7ca4-4fbe-8836-169588e60643" TYPE="ext4" 
/dev/loop2: TYPE="DM_snapshot_cow" 
/dev/mapper/live-rw: LABEL="Anaconda" UUID="888d143e-7ca4-4fbe-8836-169588e60643" TYPE="ext4" 
/dev/mapper/live-base: LABEL="Anaconda" UUID="888d143e-7ca4-4fbe-8836-169588e60643" TYPE="ext4" 
# Replace the partition UUIDs in /etc/fstab with the ones reported by blkid
vim /mnt/etc/fstab

#
# /etc/fstab
# Created by anaconda on Mon Aug 15 09:47:24 2022
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
#UUID=627e7f32-a0c2-4e85-afaa-391abbf377b6 /                       xfs     defaults        0 0
#UUID=80adc934-a93b-44cb-99b3-51c20e0fa5e5 /boot                   xfs     defaults        0 0
#UUID=d1a86e4e-58e6-48d2-9690-d5784481c7c4 swap                    swap    defaults        0 0
UUID=627e7f32-a0c2-4e85-afaa-391abbf377b6 /                       xfs     defaults        0 0
UUID=80adc934-a93b-44cb-99b3-51c20e0fa5e5 /boot                   xfs     defaults        0 0
UUID=d1a86e4e-58e6-48d2-9690-d5784481c7c4 swap                    swap    defaults        0 0
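When scripting this step, the UUID can be extracted from blkid's output instead of copied by hand. A sketch using the /dev/sda3 line from the transcript above as sample input:

```shell
# Sample line as printed by blkid in the transcript above.
blkid_line='/dev/sda3: UUID="627e7f32-a0c2-4e85-afaa-391abbf377b6" TYPE="xfs"'

# Extract the value between the quotes after UUID=
uuid=$(echo "$blkid_line" | sed 's/.*UUID="\([^"]*\)".*/\1/')
echo "$uuid"
```

On a live system, `blkid -s UUID -o value /dev/sda3` prints the same value directly.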
3. Adjust the partition table
# Check the current partition sizes
PXE root@localhost:/mnt # lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda           8:0    0   100G  0 disk 
├─sda1        8:1    0     2G  0 part 
├─sda2        8:2    0     4G  0 part 
└─sda3        8:3    0    14G  0 part /mnt
sr0          11:0    1   681M  0 rom  /run/initramfs/live
loop0         7:0    0 637.8M  1 loop 
loop1         7:1    0     5G  1 loop 
├─live-rw   253:0    0     5G  0 dm   /
└─live-base 253:1    0     5G  1 dm   
loop2         7:2    0   512M  0 loop 
└─live-rw   253:0    0     5G  0 dm   /
# Give the remaining space to the / partition
PXE root@localhost:/mnt # growpart /dev/sda 3
CHANGED: partition=3 start=12584960 old: size=29358080 end=41943040 new: size=197130207 end=209715167
PXE root@localhost:/mnt # lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda           8:0    0   100G  0 disk 
├─sda1        8:1    0     2G  0 part 
├─sda2        8:2    0     4G  0 part 
└─sda3        8:3    0    94G  0 part /mnt
sr0          11:0    1   681M  0 rom  /run/initramfs/live
loop0         7:0    0 637.8M  1 loop 
loop1         7:1    0     5G  1 loop 
├─live-rw   253:0    0     5G  0 dm   /
└─live-base 253:1    0     5G  1 dm   
loop2         7:2    0   512M  0 loop 
└─live-rw   253:0    0     5G  0 dm   /
# Grow the / filesystem
PXE root@localhost:~ # mount /dev/sda3 /mnt
PXE root@localhost:~ # xfs_growfs /mnt
meta-data=/dev/sda3              isize=512    agcount=4, agsize=917440 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0 spinodes=0
data     =                       bsize=4096   blocks=3669760, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 3669760 to 24641275
PXE root@localhost:~ # df -h
Filesystem           Size  Used Avail Use% Mounted on
/dev/mapper/live-rw  4.9G  1.7G  3.3G  34% /
devtmpfs             2.0G     0  2.0G   0% /dev
tmpfs                2.0G     0  2.0G   0% /dev/shm
tmpfs                3.1G   20M  3.1G   1% /run
tmpfs                2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/sr0             681M  681M     0 100% /run/initramfs/live
tmpfs                394M     0  394M   0% /run/user/0
/dev/sda3             94G  1.1G   93G   2% /mnt
PXE root@localhost:~ # umount /mnt
# Update the partition table
PXE root@localhost:~ # partprobe /dev/sda
# Check the health of the grown filesystem
PXE root@localhost:~ # xfs_repair -n /dev/sda3
Phase 1 - find and verify superblock...
        - reporting progress in intervals of 15 minutes
Phase 2 - using internal log
        - zero log...
        - scan filesystem freespace and inode maps...
        - 11:00:56: scanning filesystem freespace - 27 of 27 allocation groups done
        - found root inode chunk
Phase 3 - for each AG...
        - scan (but don't clear) agi unlinked lists...
        - 11:00:56: scanning agi unlinked lists - 27 of 27 allocation groups done
        - process known inodes and perform inode discovery...
        - agno = 0
        (per-AG progress lines for agno 1-26 trimmed)
        - 11:00:56: process known inodes and inode discovery - 26816 of 26816 inodes done
        - process newly discovered inodes...
        - 11:00:56: process newly discovered inodes - 27 of 27 allocation groups done
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - 11:00:56: setting up duplicate extent list - 27 of 27 allocation groups done
        - check for inodes claiming duplicate blocks...
        - agno = 0
        (per-AG progress lines for agno 1-26 trimmed)
        - 11:00:56: check for inodes claiming duplicate blocks - 26816 of 26816 inodes done
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
        - traversing filesystem ...
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify link counts...
        - 11:00:56: verify and correct link counts - 27 of 27 allocation groups done
No modify flag set, skipping filesystem flush and exiting.
4. Reboot and verify

IV. Tool Reference

1. Grow a partition

growpart /dev/sda 3

2. Grow a filesystem

XFS: xfs_growfs /dev/vdc1
EXT: resize2fs /dev/vdc1

3. Update the partition table

partprobe /dev/sda

4. Check a partition's filesystem

EXT: e2fsck -f /dev/vdb
XFS: xfs_repair -n /dev/sda3

5. Execution order

Grow the partition -> grow the filesystem -> update the partition table -> check the filesystem
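The tool table above can be condensed into a small helper that picks the grow tool from the filesystem type; a sketch (the function name is ours, not from the original):

```shell
# Map a filesystem type to its grow tool, per the tool list above.
grow_tool() {
  case "$1" in
    xfs)            echo "xfs_growfs" ;;
    ext2|ext3|ext4) echo "resize2fs" ;;
    *)              echo "unsupported: $1" >&2; return 1 ;;
  esac
}

grow_tool xfs    # prints: xfs_growfs
grow_tool ext4   # prints: resize2fs
```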
