At work, we’re moving to a new building and getting new computers – which is kind of a big deal for me because my new computer’s going to be a lot faster and generally cooler in every regard than my current work computer. Since I’ve got a lot of software compiled in my home directory and both systems have the same architecture, I’ve been thinking about ways of moving all of my files to the new computer so that I can keep not only documents, but also any software I’ve compiled, KDE settings and the like. Here’s what I’ve come up with, ordered by complexity and increasing nerdiness – keeping in mind that other users will be asking me how to do this too, and they might not enjoy working with bits and bytes as much as I do.
- Copy everything with drag and drop onto an external drive
- This has the advantage of being simple, but copying everything back is a pain. If you want to do things graphically, you’ll probably already be logged in – the people I work with use KDE, and I’m not sure how KDE likes having all of its session files overwritten while the user is still logged on.
- Use rsync with SSH (or another file transfer protocol)
- This is really easy, efficient and gives you a fine degree of control over what happens. It’s the way I’ve decided to go. rsync lets you preserve file ownership and modification times. It also lets you choose what happens if there are file conflicts. Do you want to delete anything in the target folder that isn’t in the source folder? Or leave it there as long as it’s not overwritten by something in the source folder? Or maybe compare conflicting files and keep the newer one, regardless of its source? All of that is no problem. rsync also saves you a lot of bandwidth by only copying files that need to be copied, leaving the other ones alone.
- Pipe files to the target computer with tar and ssh, then unpack them with cat
- This is one of those simple things that shows the powerful magic of bash – take a bunch of files, concatenate them, compress them, send them over the network to a target folder, decompress them and unpack them into the same structure they were all in before. You could do this in a bunch of steps, saving files all over the place, or you can do it all in one bash command using pipes so that you only touch the hard drive to read once from the source folder and write once to the target folder. That saves you a lot of time, since hard drive I/O is about the slowest thing you can do. I’ve got examples of both down below. Totally impractical in the face of tools like rsync, but very, very cool for those who love geeky stuff like I do.
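The different conflict policies rsync offers each map onto a flag. Here’s a small local sketch you can try without a second machine – the directory names `src/` and `dst/` are made up for the demo; with a remote target you’d just put `hostname:` in front of the destination path:

```shell
# Hypothetical source and destination directories for the demo
mkdir -p src dst
echo "newer content" > src/shared.txt
echo "old" > dst/shared.txt
echo "extra" > dst/only_on_target.txt

# Default: copy changed files from source, but leave extra files on the target alone
rsync -r src/ dst/

# --delete: also remove anything on the target that isn't in the source
rsync -r --delete src/ dst/

# --update: on conflicts, keep whichever file is newer, regardless of side
rsync -r --update src/ dst/
```

After the `--delete` run, `dst/only_on_target.txt` is gone and `dst/shared.txt` matches the source – exactly the behaviors described above, selected per flag.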
Here’s the code:
# With rsync:
#
#     rsync -azR --progress --delete . $TARGET_COMPUTER:
#
# -a: archive mode – recursive, and preserves ownership, permissions and timestamps
# -z: compress file data during the transfer
# -R: use relative path names
# --progress: show progress during transfer
# --delete: delete extraneous files from destination dirs
# .: source directory, in this case my home directory
# $TARGET_COMPUTER:: ~/ on $TARGET_COMPUTER, which can be a host name of any sort
#
# With tar, ssh and cat (step by step, then all in one line):

# Archive everything in the current directory to a single compressed file, archive.tar.gz
tar -cvzf archive.tar.gz .

# Send the archive to the target by ssh
cat archive.tar.gz | ssh target_computer 'cat > archive.tar.gz'

# Log onto the target
ssh target_computer

# Unpack the archive and delete it when done
tar -xvzf archive.tar.gz && rm archive.tar.gz

# Now all in one line, reading and writing just once!
# Archive and compress everything to stdout, pipe that to ssh, then unpack from stdin
tar -cvzf - . | ssh target_computer tar -xvzf -
Doing the tar, ssh and cat combination all in one line with pipes is a lot faster than writing an archive to your hard drive, reading it back in and writing it out again on the other end. But as cool as it is, the rsync command does all of that for you without making you worry about the details.
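You can also watch the pipe trick work without a second computer: just drop the ssh hop and pipe one tar straight into another between two local directories. The names `pipe-src/` and `pipe-dst/` are invented for the demo; on a real network, you’d wrap the right-hand tar in `ssh target_computer '…'` and the bytes would cross the wire instead of a local pipe:

```shell
# Hypothetical demo directories
mkdir -p pipe-src/sub pipe-dst
echo "hello" > pipe-src/sub/file.txt

# Left tar compresses the tree to stdout; right tar unpacks from stdin.
# -C switches into the given directory before archiving/extracting.
tar -czf - -C pipe-src . | tar -xzf - -C pipe-dst
```

Afterwards `pipe-dst/` contains the same `sub/file.txt` structure as `pipe-src/` – nothing ever touched the disk as an intermediate archive.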