Copying a live file system can be risky, especially if there are open files. You should also be careful to ensure that you don’t accidentally overwrite a partition, or existing files, with the files you are trying to copy. With some careful thought, you can effectively migrate files reliably to take advantage of more space, even on a live system.
You can do the same tasks more comfortably with a GUI.
Use Konqueror or Nautilus: choose to show hidden files and folders, press Ctrl+A, then Ctrl+C, go to the new location, and press Ctrl+V. Then edit the fstab file in /etc (for example with the Konsole command “vi /etc/fstab”) and update the entry for “/usr”, assuming it was there (i.e. you partitioned your system to include it).
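For reference, the “/usr” entry in /etc/fstab is a single line, and migrating means pointing it at the new partition. The device names and filesystem type below are illustrative, not from the article:

```
# before the move: /usr on the old, smaller partition
/dev/sda5   /usr   ext3   defaults   1 2
# after the move: the same line, pointed at the new partition
/dev/sdb1   /usr   ext3   defaults   1 2
```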
Concerning tar, I don’t prefer it, but you can use the GUI tool “file-roller” to do the same job; just don’t forget to enable the “all folders and subfolders” option.
Happy Computing
Yes, you do need to edit the fstab, which is just one of the reasons this is a disappointing article. He mentions editing the fstab at the very end, but he doesn’t explain it at all. Everything else in the article was explained the way you just explained the fstab edit (in too much detail for even an inexperienced UNIX sysadmin).
More importantly, he makes a really big deal of why migrating live filesystems is critical and how you need to be really clever to keep the files available throughout the process… and then he surprises me by suggesting a really simple process (the naive approach, in fact) that doesn’t maintain filesystem availability at all. Whether you’re migrating directories or whole filesystems with this process, the data is unavailable between the moment the source directory is renamed and the moment the new filesystem is mounted, or between the unmount of the source filesystem and the mount of the new one. There’s no simple way to make either transition atomic.
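The window of unavailability is easy to see even with ordinary directories standing in for the real mount points. In this sketch the paths are illustrative (a scratch `usr` directory stands in for the real /usr):

```shell
cd "$(mktemp -d)"      # scratch area; usr below stands in for the real /usr
mkdir -p usr && echo data > usr/file
mv usr usr.old         # from this instant, usr does not exist...
mkdir usr              # ...then an empty usr exists: still no data...
cp -a usr.old/. usr/   # ...and only once the copy completes is the data back
cat usr/file           # prints: data
```

With real filesystems the same gap exists between umount of the old volume and mount of the new one.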
Another sore point is that he makes no mention of logical volume managers, which have dominated the UNIX space for many years. These systems have features for dynamically resizing volumes, and some (including Sun’s ZFS and IBM’s JFS2) support _real_ live volume migration across storage devices.
If I wanted to learn how to use cp, tar, and my shell, I would read their man pages; that’s all this article is. If you haven’t read it yet, don’t bother wasting your time… This guy just isn’t very knowledgeable in this subject.
I hoped to read something about the dangers of moving, the moments of downtime, snapshots (if Linux has them), or the possibilities of live moving (say, a directory under a working database). Anything besides cp/tar.
I wonder how IBM allowed it into their library, then.
Proof:
“can be risky, especially if there are open files … with some careful thought, you can effectively migrate files reliably”.
The last word: reliably!!! Where was the separate shell to test for file writes (it should be possible, somehow), or even a find for files with modification dates later than when cp started?
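The kind of find check being asked for here can be sketched in a few lines. The paths are illustrative, and it assumes find’s standard -newer test (a timestamp file touched just before the copy acts as the reference point):

```shell
cd "$(mktemp -d)"              # scratch stand-ins for source and destination
mkdir -p src dst && echo v1 > src/a
touch stamp                    # marker taken just before the copy starts
cp -a src/. dst/               # the copy
sleep 1                        # a write sneaks in while/after the copy runs...
echo v2 > src/a
find src -newer stamp -type f  # ...and this reports it: src/a
```

Any file the find prints was modified after the copy began and needs to be re-copied (or the whole run repeated).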
More in-depth information would have been nice.
I think it’s really unsafe to do it that way. The title of the article is misleading (“Live” Unix System). If it’s really “live”, you need to go through some more steps before copying files like this.
First, you need to make sure that no process has open handles on these files, and that while you’re copying file Z, file A isn’t being updated anymore.
Second, when you’re unmounting/remounting, the same rule applies.
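Without assuming lsof or fuser is installed, one crude way to check for open handles is to walk /proc. This is a Linux-specific sketch; for brevity it scans only the current shell ($$), whereas /proc/[0-9]*/fd/* would cover every process:

```shell
DIR=$(mktemp -d)            # stand-in for the tree about to be copied
exec 3> "$DIR/held.txt"     # hold a file open so the scan has something to find
hits=""
for fd in /proc/$$/fd/*; do # /proc/[0-9]*/fd/* would scan all processes
    target=$(readlink "$fd" 2>/dev/null) || continue
    case "$target" in
        "$DIR"/*) hits="$hits$target " ;;
    esac
done
exec 3>&-                   # release the handle
echo "still open under $DIR: $hits"
```

A copy started while that list is non-empty can capture files mid-write.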
cpio. Last I looked, tar would not preserve hard links, would not handle special files, returned successful exit codes even after errors, and lost the entire archive on a single error (unless, of course, someone has fixed it in the 15 or more years since).
Another alternative is rsync. It handles hard links and special files, allows you to do incremental backups so you don’t have to re-copy files you’ve already copied (saving a great deal of time and CPU cycles), and provides an error log of all files it couldn’t copy properly because they were being written to at the time of the copy. I use it on my own system to maintain a mirror I can boot from if my main system ever goes south (it’s rare, but it has happened in the last 5 years). It works without a hitch.
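The mirror described above can be sketched with rsync’s standard flags (-a preserves permissions, times, and special files; -H preserves hard links; --delete keeps the mirror exact). The src/mirror paths are illustrative:

```shell
cd "$(mktemp -d)"
mkdir -p src
echo data > src/a
ln src/a src/b                   # hard link, preserved by -H
rsync -aH --delete src/ mirror/  # first run: full copy
echo more >> src/a
rsync -aH --delete src/ mirror/  # second run: transfers only what changed
```

Repeated runs are cheap, which is what makes it practical to keep a bootable mirror current.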
See:
http://www.mikerubel.org/computers/rsync_snapshots/
There’s the hard-links issue that was mentioned above. And the extended ACLs that may be attached to the files need to be taken into consideration; you have to verify that they were copied correctly.
Also, counting the files and the total size is only half the job; he really should have checked md5sums for all the files, or for the total transfer. Yes, it’s paranoid, but it’s the right thing to do.
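A per-file checksum verification of the kind suggested here can be sketched like this (the src/dst paths are illustrative; it assumes the standard md5sum and xargs utilities):

```shell
cd "$(mktemp -d)"
mkdir -p src dst
printf 'alpha' > src/a
printf 'beta'  > src/b
# build a manifest of checksums from the source tree
(cd src && find . -type f -print0 | xargs -0 md5sum > ../manifest.md5)
cp -a src/. dst/                       # the copy being verified
(cd dst && md5sum -c ../manifest.md5)  # each line should report OK
```

md5sum -c exits non-zero if any file fails to match, so the whole verification can gate the cutover in a script.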
Over on LXer.com, a member recommended the Enterprise Volume Management System as the safest, most flexible way to do this.
http://evms.sourceforge.net/
It looks interesting and is supposed to be pretty easy to install.
I had to do this a few months ago when setting up my laptop. I didn’t notice that the last partition I created had for some reason been designated sda3 (it was a 10 GB partition set aside for a Win2k dual boot). I ended up using dd if=/dev/sda3 of=/dev/sda4 (sda4 was where my install was originally supposed to end up: an 82 GB partition for Gentoo). I then used the resize_reiserfs utility from the reiserfsprogs package to expand the copied filesystem to the limit of the partition, and changed fstab to match.
Please note that I did this by booting from the install CD so nothing was being accessed on the drive by the booted OS.
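The same dd pattern can be tried safely on ordinary files standing in for the partitions; the real command targets /dev/sda3 and /dev/sda4 as described above, and resize_reiserfs only applies to an actual ReiserFS volume:

```shell
cd "$(mktemp -d)"
# fake "partition": 64 KB of random data stands in for /dev/sda3
dd if=/dev/urandom of=old.img bs=1k count=64 2>/dev/null
# the block-level copy itself (conv=noerror keeps going past read errors)
dd if=old.img of=new.img bs=4k conv=noerror 2>/dev/null
cmp old.img new.img && echo "images match"   # byte-for-byte identical
```

Doing this from a boot CD, as described, is what makes it safe: nothing can write to either device mid-copy.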
You can do this far more easily and safely by using LVM: resize partitions, add new disks to your volume group, and take a safe snapshot of the data.