Compiz brings to life a variety of visual effects that make the Linux desktop easier to use, more powerful and intuitive, and more accessible for users with special needs. It is an OpenGL-based compositing window manager. Compiz is the original compositing window manager from Novell's Xgl project, developed by David Reveman and the community.
Compiz is one of the first compositing window managers for the X Window System that uses 3D graphics hardware to create fast compositing desktop effects for window management.
By default, the CompizConfig Settings Manager is not installed under Ubuntu Linux 7.10, so install it first: $ sudo apt-get install compizconfig-settings-manager
Please note that Compiz only works with 3D hardware supported by Xgl. Most NVIDIA and ATI graphics cards are known to work with Compiz on Xgl. Onboard Intel graphics (Intel GMA) are also reported to work with Compiz.
Turn on 3D Compiz Effects
Right-click on the Desktop > Change Desktop Background > select the Visual Effects tab > select Extra or Custom
Save the changes.
How do I use or see the 3D effects?
Now everything is turned on, but how do you use it? Just hit the following key combinations to see effects:
ALT + TAB: Switch windows
Windows key + Tab: Switch windows
Ctrl + Alt + Left/Right Arrow: Switch desktops on cube
Ctrl + Alt + Left-click on the wallpaper and drag: Rotate the cube with the mouse
Try minimizing and maximizing windows
Try dragging windows
Double-click the titlebar
Windows key + right-click: Zoom in once
Windows key + mouse wheel up: Zoom in manually
Most of the time you have limited space on the remote SFTP / SSH backup server. Here is a script that periodically cleans up old backup files from the server, i.e. it removes old backup directories.
Requirements
The script automatically calculates dates from today's date. By default it keeps only the last 7 days of backups on the server; you can easily increase or decrease this limit. In order to run the script you must meet the following criteria:
Remote SSH server with rm command execution permission
SSH keys for password-less login (see how to set up RSA / DSA keys for password-less login; a minimal key-setup sketch follows this list)
The remote backup directory must be named in dd-mm-yyyy or mm-dd-yyyy format. For example, daily MySQL backups should be stored as /mysql/mm-dd-yyyy.
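A minimal sketch of the key setup, assuming the SSH user and server used in the script below (vivek@nas.nixcraft.in):
## generate an RSA key pair on the backup client (accept the default path, empty passphrase)
ssh-keygen -t rsa
## copy the public key to the remote backup server
ssh-copy-id vivek@nas.nixcraft.in
## verify that password-less login works
ssh vivek@nas.nixcraft.in "echo ok"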
Sample Script Usage
Run the script as follows: ./rot.backup.sh 7 /mysql "rm -rf"
Where,
7 : Number of old daily directories to remove. The script always keeps the most recent 7 days (START=7 in the script) and removes the 7 days before that.
/mysql : Base directory to clean up. If today's date is 9/Oct/2007, it will remove the old directories /mysql/02-10-2007, /mysql/01-10-2007, … /mysql/26-09-2007, /mysql/25-09-2007. In other words, the script keeps only the last 7 days of backups on the remote SFTP / SSH server (see the date example after this list).
rm -rf : Command to run on each matching remote directory
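To see how the directory names are computed, run GNU date by hand exactly as the script does. With today's date of 09-10-2007 (the example above):
$ date --date="7 days ago" +"%d-%m-%Y"
02-10-2007
$ date --date="14 days ago" +"%d-%m-%Y"
25-09-2007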
Sample Shell Script
Install the following script:
#!/bin/bash
if [ "$#" == "0" ]; then
  echo "$0 upper-limit path {command}"
  exit 1
fi

### SSH Server setup ###
SSH_USER="vivek"
SSH_SERVER="nas.nixcraft.in"
START=7
DIR_FORMAT="%d-%m-%Y"  # DD-MM-YYYY format
#DIR_FORMAT="%m-%d-%Y" # MM-DD-YYYY format

## do not edit below ##
LIMIT=$( expr $START + $1 )

## default CMD ##
CMD="ls"
SSH_PATH="."
[ "$3" != "" ] && CMD="$3" || :
[ "$2" != "" ] && SSH_PATH="$2" || :

DAYS=$(for d in $(seq $START $LIMIT); do date --date="$d days ago" +"${DIR_FORMAT}"; done)
for d in $DAYS
do
  ssh ${SSH_USER}@${SSH_SERVER} ${CMD} ${SSH_PATH}/$d
done
Run the above script via crontab (a cronjob entry): @daily /path/to/rot.ssh.script 7 "/html" "rm -rf"
@daily /path/to/rot.ssh.script 7 "/mysql" "rm -rf"
The netcat utility (nc command) is considered a TCP/IP Swiss Army knife. It reads and writes data across network connections using the TCP or UDP protocol. It is designed to be a reliable “back-end” tool that can be used directly or easily driven by other programs and scripts. At the same time, it is a feature-rich network debugging and exploration tool, since it can create almost any kind of connection you would need and has several interesting built-in capabilities.
Install the netcat package if you administer a network and would like to use its debugging and network exploration capabilities.
One of my favorite uses is migrating data between two servers' hard drives using netcat over a network. It is very easy to copy a complete drive image from one server to another.
You can also use ssh for the same purpose, but encryption adds its own overhead. This is a tried and trusted method (hat tip to karl).
Your task is to copy hostA's /dev/sda to hostB's /dev/sdb using the netcat command. First, log in as the root user on both machines.
Command to type on hostB (receiving end ~ write image mode)
You need to open a port on hostB using netcat. Enter: # netcat -p 2222 -l | bzip2 -d | dd of=/dev/sdb
Where,
-p 2222 : Specifies the source port nc should use, subject to privilege restrictions and availability. Make sure port 2222 is not used by another process.
-l : Used to specify that nc should listen for an incoming connection rather than initiate a connection to a remote host.
bzip2 -d : Decompresses the incoming stream (-d forces decompression mode). The sending side compresses the image with bzip2, which uses the Burrows-Wheeler block sorting compression algorithm and Huffman coding; compressing the data speeds up the network transfer.
dd of=/dev/sdb : /dev/sdb is the destination hard disk on hostB. You can also specify a partition such as /dev/sdb1
Command to type on hostA (send data over a network ~ read image mode)
Now all you have to do is start copying the image. Again, log in as root on hostA and enter: # bzip2 -c /dev/sda | netcat hostB 2222
OR use hostB's IP address: # bzip2 -c /dev/sda | netcat 192.168.1.1 2222
This process takes its own time.
A note about latest netcat version 1.84-10 and above
If you are using the latest nc / netcat version, the above syntax will generate an error: it is an error to use the -l option in conjunction with the -p, -s, or -z options. Additionally, any timeouts specified with the -w option are ignored. So use the nc command as follows.
On hostB (receiving end), enter: # nc -l 2222 > /dev/sdb
On hostA (sending end), enter: # nc hostB 2222 < /dev/sda
OR use hostB's IP address: # nc 192.168.1.1 2222 < /dev/sda
In other words, from the second machine (hostA) you connect to the listening nc process on hostB at port 2222, feeding it the disk (/dev/sda) which is to be transferred. You can add bzip2 compression as follows.
On hostB, enter: # nc -l 2222 | bzip2 -d > /dev/sdb
On hostA, enter: # bzip2 -c /dev/sda | nc 192.168.1.1 2222
You should definitely use dd's bs=16M option or something like that; otherwise the copy will take forever. Copying a 300 GB hard drive over a 1 Gbps crossover cable took about 1 1/2 hours using bs=16M. Without this option, the same copy would have taken about 7 hours.
In short, use the command as follows: # netcat -p 2222 -l | bzip2 -d | dd of=/dev/sdb bs=16M
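For newer nc versions, here is a minimal sketch that puts both ends together with compression and a large dd block size (assuming hostB's IP is 192.168.1.1, as in the examples above; using dd on the sending side as well is a small variation on the commands shown):
## on hostB (destination disk /dev/sdb): listen, decompress, and write with a 16M block size
nc -l 2222 | bzip2 -d | dd of=/dev/sdb bs=16M
## on hostA (source disk /dev/sda): read with a 16M block size, compress, and send to hostB
dd if=/dev/sda bs=16M | bzip2 -c | nc 192.168.1.1 2222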
AcetoneISO is a disk image emulator that mounts images of DVD and CD media. Both Mac OS X and Linux (and other UNIX-like operating systems) can mount and use ISO images via a loopback device (see the sketch after the usage list below). AcetoneISO is a clone of DAEMON Tools (the Microsoft Windows disk image emulator) with many more features.
Using this cool open source software means a user does not have to swap discs to run different programs on a local or network computer. You can access software distributed (over the Internet) as a disk image in ISO, DAA, BIN, and many other formats, with no need to burn a CD/DVD to use the image. Other uses:
Prevents scratching, which can cause permanent damage to a disc
Speeds up access times as hard drives are faster than optical drives
Provides a backup copy of a disc, in case the original becomes damaged, lost, or stolen
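For reference, here is a minimal sketch of plain loopback mounting without AcetoneISO (the image name image.iso and mount point /mnt/iso are just examples; run as root):
## create a mount point and attach the ISO image via the loop device
mkdir -p /mnt/iso
mount -o loop -t iso9660 image.iso /mnt/iso
## when finished
umount /mnt/iso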
Features:
Mount and Unmount ISO, MDF, NRG (if iso-9660 standard)
Convert BIN/CUE, MDF, NRG, CCD/IMG, CDI, XBOX, B5I/BWI, PDI, DAA to ISO
Burn Your ISO, CUE, TOC images directly in K3b
Blank Your CD/DVD ReWritable
Verify md5sum of image files and Generate a Md5sum file from ISO
Ability to create ISO from Folder and CD/DVD
Service-Menu support
Play a DVD-Movie ISO with Kaffeine, Mplayer, VLC, Kmplayer
Split ISOs in smaller files and Merge them
Quick Turbo Mount an ISO file from your Desktop
Compress ISO with p7zip and extract
Encrypt and Decrypt an ISO
Generate a CUE file from an IMG/BIN image
Rip a PSX CD to a BIN/TOC image
AcetoneISO has only one dependency problem – Kommander. Make sure you have Kommander installed.
Step # 1: Install kommander
Use the apt-get command to install kommander (it consists of an editor and a program executor that produce dialogs you can execute), which is required by AcetoneISO. You also need p7zip (a file archiver with a high compression ratio) to compress and extract ISO images. Under Debian or Ubuntu Linux, type: # apt-get install kommander p7zip
OR $ sudo apt-get install kommander p7zip
Step # 2: Install AcetoneISO
Download the source code, the Debian .deb package, or the Suse/Red Hat RPM file from the official website, then install it with dpkg or rpm. Use the dpkg command to install the .deb file: # dpkg -i AcetoneISO-6.7.deb
OR use the rpm package for an RPM-based distro: # rpm -ivh AcetoneISO-6.7.noarch.rpm
Step # 3: Start the AcetoneISO program
Simply type the following command, or click Applications > Accessories > AcetoneISO: $ acetoneiso &
How do you install and use rsync to synchronize files and directories from one location (or one server) to another? – A common question asked by new sys admins. [continue reading…]
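As a quick taste before the full article, here is a minimal sketch (the directory and server names are just placeholders):
## install rsync under Debian / Ubuntu Linux
sudo apt-get install rsync
## mirror a local directory to a remote server over ssh
## -a archive mode, -v verbose, -z compress data during transfer
rsync -avz /home/vivek/data/ vivek@backup.example.com:/backup/data/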
Michael Stutz shows us how to recover deleted files using the lsof command.
From the article:
There you are, happily playing around with an audio file you’ve spent all afternoon tweaking, and you’re thinking, “Wow, doesn’t it sound great? Lemme just move it over here.” At that point your subconscious chimes in, “Um, you meant mv, not rm, right?” Oops. I feel your pain — this happens to everyone. But there’s a straightforward method to recover your lost file, and since it works on every standard Linux system, everyone ought to know how to do it.
Briefly, a file as it appears somewhere on a Linux filesystem is actually just a link to an inode, which contains all of the file’s properties, such as permissions and ownership, as well as the addresses of the data blocks where the file’s content is stored on disk. When you rm a file, you’re removing the link that points to its inode, but not the inode itself; other processes (such as your audio player) might still have it open. It’s only after they’re through and all links are removed that an inode and the data blocks it pointed to are made available for writing.
This delay is your key to a quick and happy recovery: if a process still has the file open, the data’s there somewhere, even though according to the directory listing the file already appears to be gone.
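In practice, the recovery described above boils down to something like this (a rough sketch; the PID 12345, file descriptor 4, and file names are purely illustrative):
## find deleted-but-still-open files; note the PID and FD columns
## (an FD entry such as "4r" means descriptor 4, open for reading)
lsof | grep deleted
## copy the data back out of /proc while the process still holds the file open
cp /proc/12345/fd/4 /home/vivek/recovered.wav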
BBC News has published a list of the top 10 data disasters. Hard drives kept in dirty socks and the dangers of oiling your PC feature in the top 10 list.
Ontrack Data Recovery has unveiled its annual Top Ten list of remarkable data loss disasters in 2006. Taken from a global poll of Ontrack data recovery experts, this year’s list of data disasters is even more incredible when you consider that in every case cited, Ontrack successfully recovered the data.
It is recommended that you always follow these backup tips.
The Advanced Maryland Automatic Network Disk Archiver (AMANDA) is a backup system that allows the administrator to set up a single master backup server to back up multiple hosts over the network to tape drives/changers, disks, or optical media.
Novell Cool Solutions has published a small how-to about Amanda. From the article:
Amanda is simple to use but tricky to configure. It is worth sticking with Amanda until you get a fully working system. Don’t be put off by the lack of a slick GUI interface or the need to configure the software via a console shell, once it is configured Amanda does its job very efficiently.
The configuration files for Amanda are stored in the /etc/amanda directory. Each configuration that you create goes into a separate directory within /etc/amanda. There is a sample configuration in the “example” directory that you can use as a jumping off point for configuring Amanda
Contents
* Background
* Installing Amanda
* Configuring Amanda
* Creating file and directories
* Preparing tapes for use with Amanda
* Checking the configuration
* Performing the backup
* Checking the backup
* Automating the backup
* Conclusions
Read more, Using Amanda to Backup Your Linux Server at Novell.com…
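Once a configuration is in place, day-to-day Amanda use looks roughly like this (a sketch assuming a configuration named DailySet1 under /etc/amanda, run as the Amanda backup user):
## verify the configuration, tapes, and client connectivity
amcheck DailySet1
## run the backups defined in the configuration
amdump DailySet1
## print / mail a report of the last run
amreport DailySet1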
From the article:
Everyone’s worst nightmare; the normal comforting hum of your computer is disturbed by clicking, pranging, banging…
It happens to everyone because it’s inevitable (hard drives are mechanical, as sure as a car will break down your hard drive will fail eventually). However, no matter how often you see it you never quite get used to it happening, the heartache of all the files you lose forever because you were “just about to back it up, honestly”. This is not a matter of explaining to you how you can best avoid data loss or how to protect against your hard drive dying, this is an article outlining how Ubuntu (or any other LiveCD available distro) could save your life…
This question is asked again and again by new Linux sys admins:
How do I perform backups for my Linux operating system?
So I am putting up all the necessary information you need to know about backups. The main aim is to provide the software, links, and commands you need to get started as soon as possible.
Backup is essential
First, a backup is essential. You need a good backup strategy to:
Minimize recovery time after a disaster such as a server failure, human error (a deleted file), or acts of God
To avoid downtime
Save money and time
And ultimately to save your job 😉
A backup must provide
Restoration of single / individual files
Restoration of file systems
What to back up?
User files and dynamic data [databases] (stored in /home or specially configured partitions or /var etc).
Application software (stored in /usr)
OS files
Application configuration files (stored in /etc, /usr/local/etc or /home/user/.dotfiles)
Different types of backups
Full backups: Each file and directory is written to backup media
Incremental backups (full + incremental backups): These backups are used in conjunction with a full backup. A backup scheme is incremental if each original piece of backed-up information is stored only once, and successive backups contain only the information that changed since the previous one. It typically uses a file's modification time to determine which files need to be backed up (see the tar sketch after the restore steps below).
So when you restore an incremental backup:
First restore the last full backup
Then restore every subsequent incremental backup, in order
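A minimal sketch of this idea using GNU tar's --listed-incremental option (the paths and snapshot file name are just examples):
## level-0 (full) backup of /home; tar records file state in the snapshot file
tar --listed-incremental=/backup/home.snar -czf /backup/home-full.tar.gz /home
## next run: only files changed since the snapshot are archived
tar --listed-incremental=/backup/home.snar -czf /backup/home-inc1.tar.gz /home
## restore: extract the full backup first, then each incremental in order
tar --listed-incremental=/dev/null -xzf /backup/home-full.tar.gz -C /
tar --listed-incremental=/dev/null -xzf /backup/home-inc1.tar.gz -C /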
Preferred Backup Media
Tape (old and trusted method)
Network (ftp, nas, rsync etc)
Disk (hard disk, optical disk etc)
Test backups
Please note that whichever backup media you choose, you need to test your backup. Perform tests to make sure that data can be read from media.
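For example, a quick read-back test of a tar archive stored on disk or tape (the file name is just an example):
## read every member of the archive; a non-zero exit status means the archive or media is bad
tar -tzf /backup/home-full.tar.gz > /dev/null && echo "backup readable" || echo "backup FAILED"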
Backup Recommendation
My years of experience show that if you follow the following rules, you are most likely to get your data back in a worst-case scenario:
(a) Rotate backup media
(b) Use multiple backup media for same data such as ftp and tape
(c) Keep old copies of backups offsite
There is no golden rule or procedure, but I follow these two methods:
Method # 1: Reinstall everything, restore everything, and secure everything
Use this method (bare metal recovery) if your server has been cracked or hacked, or the hard drive is totally out of order:
Format everything
Reinstall os
Configure data partitions (if any)
Install drivers
Restore data from backup media
Configure security
Method # 2: Use of recovery CD/DVD rom
Use this method if your box has not been hacked but the system cannot boot, the MBR is damaged, files were accidentally deleted, etc.:
Boot into rescue mode.
Debug (or troubleshoot) the problem
Verify that the disk partitions are stable enough (use fsck) to hold the restored data
Install drivers
Restore data from backup media
Configure security
Linux (and other UNIX OSes) backup tools
Luckily, Linux/UNIX provides a good set of tools for backup. We have covered almost every tool mentioned below; just follow the links to get more information about each command and its usage:
Back up to tape using tar, tar over ssh, cpio, or the dump command. tar and friends are good for small backups. For large-scale backups, or backups that demand heavy CPU and I/O, use another solution (see below).
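A minimal tar-over-ssh sketch (the backup host and paths are placeholders):
## stream a compressed archive of /home straight to a remote backup server over ssh
tar czf - /home | ssh vivek@backup.example.com "cat > /backup/home-$(date +%d-%m-%Y).tar.gz"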
It is also recommended that you use RAID or LVM (see consistent backups with LVM), or a combination of both, to increase data reliability.
A note about MySQL or Oracle database backup
Backing up a database server such as MySQL or Oracle needs more planning. Generally you can apply a table write lock and use the MySQL database dump utility (mysqldump) to back up the database. You can also use an LVM snapshot of the database volume to capture the data consistently.
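A minimal mysqldump sketch (the user, password prompt, and output path are placeholders):
## dump all databases while holding a read lock for consistency, then compress the output
mysqldump -u root -p --all-databases --lock-all-tables | gzip > /backup/mysql/$(date +%d-%m-%Y).sql.gz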
A note about large scale backup
As I said earlier, tar is good if you need to back up a small amount of data that does not demand high CPU or I/O. The following tools are recommended for backups that demand a high CPU or I/O rate:
(a) amanda – AMANDA, the Advanced Maryland Automatic Network Disk Archiver, is an open source backup system that allows the administrator to set up a single master backup server to back up multiple hosts over the network to tape drives/changers, disks, or optical media.
(b) Third party commercial proprietary solutions:
Top three excellent commercial solutions:
If you are looking to protect large-scale computer systems, use the above solutions; the following two books will give you a good idea of what is involved:
I hope this small how-to provides enough information to kick-start your backup operation. If I am missing something, or if you have a better backup solution or strategy, please comment back.