Monitoring User and Application Activity with psacct
One of the big advantages of using psacct on your server is the excellent logging it provides for application and user activity. When you run scripts, it is important to know how many resources they consume and whether any resource limitations affect the application. There are also times when you run a script as a dedicated user: you create a user with specific rights, perhaps granted through visudo, to reduce the security risk of letting a user issue commands that require root privileges, as illustrated below.
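For illustration only, a minimal sudoers entry of this kind (edited with visudo) could allow a hypothetical user to restart one service as root and nothing else; the user name and command are placeholders:
# visudo
user1   ALL=(root) NOPASSWD: /sbin/service httpd restart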
Install Process Accounting
# yum install psacct
Start Process Accounting
# /etc/init.d/psacct start
Starting process accounting: [ OK ]
Connect Time
The connect time in hours is based on logins and logouts. The ac command provides a total.
# ac
total 1268.26
Accounting By Day
The system’s default login accounting file is /var/log/wtmp.
# ac -d
Oct 30 total 2.87
Oct 31 total 4.52
Nov 2 total 0.04
Nov 5 total 3.37
Nov 6 total 10.39
Nov 7 total 11.65
Nov 8 total 5.09
Nov 10 total 0.89
Nov 11 total 7.02
Nov 12 total 5.16
Nov 13 total 0.30
Nov 18 total 11.65
Nov 19 total 1.58
Nov 20 total 8.20
Nov 23 total 2.34
Nov 26 total 0.25
Nov 27 total 3.49
Dec 2 total 0.93
Today total 2.45
Time Totals for Users
# ac -p
yak 8.09
nagios 0.04
haywire 33.76
hatti 12.93
hacker 334.98
geddy 30.89
usayg 198.59
amar 0.12
langoor 13.82
aanta 18.00
nildana 105.30
batley 0.00
maka 7.94
hunter 85.02
gai 416.38
dhon 2.42
total 1268.27
Commands of Users
You can search out the commands of a user with the lastcomm command, which prints out previously executed commands. The output columns are Process, Flag, Username, Terminal, and Time.
# lastcomm dhon
ping S dhon pts/3 0.00 secs Thu Nov 30 18:09
hostname dhon pts/1 0.00 secs Mon Dec 3 18:41
bash F dhon pts/1 0.00 secs Mon Dec 3 18:41
id dhon pts/1 0.00 secs Mon Dec 3 18:41
su S dhon __ 0.02 secs Mon Dec 3 10:58
bash X dhon __ 0.04 secs Mon Dec 3 10:58
sshd SF dhon __ 0.04 secs Mon Dec 3 10:58
Search Logs for Commands
Using the lastcomm command you will be able to view each use of an individual command.
# lastcomm grep
grep aanta pts/6 0.00 secs Thu Nov 30 13:28
grep aanta pts/6 0.00 secs Thu Nov 30 13:28
grep aanta pts/5 0.00 secs Thu Nov 30 12:57
grep aanta pts/5 0.00 secs Thu Nov 30 12:57
Print Summary
The sa command will print a summary of the commands that were executed. It also condenses the information into a summary file called savacct, which contains the number of times each command was executed. The usracct file keeps a per-user summary of the commands.
Output Fields
cpu - sum of system and user time in cpu minutes
re - actual time in minutes
k - cpu-time averaged core usage, in 1k units
k*sec - cpu storage integral (kilo-core seconds)
u - user cpu time in cpu minutes
s - system time in cpu minutes
# /usr/sbin/sa
Print User Information
Use the -u option to provide information on individual users.
# /usr/sbin/sa -u
root 0.00 cpu 598k mem accton
root 0.00 cpu 1081k mem initlog
root 0.00 cpu 920k mem initlog
root 0.00 cpu 1172k mem touch
root 0.00 cpu 1402k mem psacct
bomb 0.01 cpu 7282k mem kdeinit *
bomb 0.00 cpu 6232k mem gnome-panel *
bomb 0.02 cpu 4848k mem gnome-terminal
Display Number of Processes
This prints, per user, the number of processes and the number of CPU minutes consumed. If these numbers keep increasing, it is time to look into what is happening.
# /usr/sbin/sa -m
195 220.31re 0.09cp 2220k
aanta 65 198.37re 0.08cp 2135k
root 88 21.86re 0.00cp 1084k
postgres 40 0.09re 0.00cp 4879k
smmsp 2 0.00re 0.00cp 1827k
Display All Names
This option lists every program run on your server so you can evaluate real time, CPU time, memory usage, and which programs are being executed.
# /usr/sbin/sa -a
221 83.36re 0.01cp 1414k
1 0.01re 0.00cp 1471k rpmq
7 0.33re 0.00cp 2465k sendmail*
1 40.78re 0.00cp 1844k sshd
37 0.00re 0.00cp 964k bash*
32 0.00re 0.00cp 604k tmpwatch
27 0.00re 0.00cp 4984k postmaster*
26 0.00re 0.00cp 1116k df
15 0.00re 0.00cp 959k id
11 0.00re 0.00cp 709k egrep
8 0.00re 0.00cp 636k sa
7 0.00re 0.00cp 817k grep
6 0.00re 0.00cp 562k ac
5 0.01re 0.00cp 789k awk
3 0.41re 0.00cp 1219k crond*
3 0.40re 0.00cp 674k run-parts
3 0.00re 0.00cp 774k dircolors
3 0.00re 0.00cp 673k consoletype
2 40.98re 0.00cp 1344k bash
2 0.14re 0.00cp 1628k sshd*
2 0.00re 0.00cp 914k logrotate
To view the same figures as percentages of the totals, add the -c option:
# /usr/sbin/sa -c
How To Capture Packets with TCPDUMP?
See the list of interfaces on which tcpdump can listen
# /usr/sbin/tcpdump -D
Listen on any available interface
# /usr/sbin/tcpdump -i any
Verbose Mode
# /usr/sbin/tcpdump -v
# /usr/sbin/tcpdump -vv
# /usr/sbin/tcpdump -vvv
# /usr/sbin/tcpdump -q
Limit the capture to N packets
# /usr/sbin/tcpdump -c N
Display IP addresses and port numbers when capturing packets
# /usr/sbin/tcpdump -n
Capture any packets where the destination host is 192.168.0.1, display IP addresses and port numbers
# /usr/sbin/tcpdump -n dst host 192.168.0.1
Capture any packets where the source host is 192.168.0.1, display IP addresses and port numbers
# /usr/sbin/tcpdump -n src host 192.168.0.1
Capture any packets where the source or destination host is 192.168.0.1, display IP addresses and port numbers
# /usr/sbin/tcpdump -n host 192.168.0.1
Capture any packets where the destination network is 192.168.10.0/24, display IP addresses and port numbers
# /usr/sbin/tcpdump -n dst net 192.168.10.0/24
Capture any packets where the source network is 192.168.10.0/24, display IP addresses and port numbers
# /usr/sbin/tcpdump -n src net 192.168.10.0/24
Capture any packets where the source or destination network is 192.168.10.0/24, display IP addresses and port numbers
# /usr/sbin/tcpdump -n net 192.168.10.0/24
Capture any packets where the destination port is 23, display IP addresses and port numbers
# /usr/sbin/tcpdump -n dst port 23
Capture any packets where the destination port is between 1 and 1023 inclusive, display IP addresses and port numbers
# /usr/sbin/tcpdump -n dst portrange 1-1023
Capture only TCP packets where the destination port is between 1 and 1023 inclusive, display IP addresses and port numbers
# /usr/sbin/tcpdump -n tcp dst portrange 1-1023
Capture only UDP packets where the destination port is between 1 and 1023 inclusive, display IP addresses and port numbers
# /usr/sbin/tcpdump -n udp dst portrange 1-1023
Capture any packets with destination IP 192.168.0.1 and destination port 23, display IP addresses and port numbers
# /usr/sbin/tcpdump -n "dst host 192.168.0.1 and dst port 23"
Capture any packets with destination IP 192.168.0.1 and destination port 80 or 443, display IP addresses and port numbers
# /usr/sbin/tcpdump -n "dst host 192.168.0.1 and (dst port 80 or dst port 443)"
Capture any ICMP packets
# /usr/sbin/tcpdump -v icmp
Capture any ARP packets
# /usr/sbin/tcpdump -v arp
Capture either ICMP or ARP packets
# /usr/sbin/tcpdump -v "icmp or arp"
Capture any packets that are broadcast or multicast
# /usr/sbin/tcpdump -n "broadcast or multicast"
Capture 500 bytes of data for each packet rather than the default of 68 bytes
# /usr/sbin/tcpdump -s 500
Capture all bytes of data within the packet
# /usr/sbin/tcpdump -s 0
Monitor all packets on eth1 interface
# /usr/sbin/tcpdump -i eth1
Monitor all traffic on port 80 ( HTTP )
# /usr/sbin/tcpdump -i eth0 'port 80'
Monitor all traffic on port 25 ( SMTP )
# /usr/sbin/tcpdump -vv -x -X -s 1500 -i eth0 'port 25'
Capture only N number of packets using tcpdump -c
# /usr/sbin/tcpdump -c 2 -i eth0
Display Captured Packets in ASCII using tcpdump -A
# /usr/sbin/tcpdump -A -i eth0
Display Captured Packets in HEX and ASCII using tcpdump -XX
# /usr/sbin/tcpdump -XX -i eth0
Capture the packets and write into a file using tcpdump -w
# /usr/sbin/tcpdump -w data.pcap -i eth0
The capture file is written in pcap format (.pcap is the conventional extension).
Reading the packets from a saved file using tcpdump -r
# /usr/sbin/tcpdump -tttt -r data.pcap
Capture packets with IP address using tcpdump -n
# /usr/sbin/tcpdump -n -i eth0
Capture packets with proper readable timestamp using tcpdump -tttt
# /usr/sbin/tcpdump -n -tttt -i eth0
Capture only packets larger than 1024 bytes and write them to a file
# /usr/sbin/tcpdump -w data.pcap greater 1024
Capture only packets smaller than 1024 bytes and write them to a file
# /usr/sbin/tcpdump -w data1024.pcap less 1024
Receive only the packets of a specific protocol type
# /usr/sbin/tcpdump -i eth0 arp
Capture the packets flowing on a particular port using tcpdump port
# /usr/sbin/tcpdump -i eth0 port 22
Capture packets for particular destination IP and Port
# /usr/sbin/tcpdump -w data.pcap -i eth0 dst 10.181.140.216 and port 22
Capture SSH traffic destined for a particular host and write it to a file
# /usr/sbin/tcpdump -w data.pcap -i eth0 dst 16.181.170.246 and port 22
Tcpdump filter packets: capture all packets other than ARP and RARP
# /usr/sbin/tcpdump -i eth0 not arp and not rarp
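As a practical combination of the filters above, the following hypothetical example captures full-sized packets for HTTP/HTTPS traffic to or from one host into a file and then replays the file with readable timestamps (the interface, host, and file name are placeholders):
# /usr/sbin/tcpdump -i eth0 -s 0 -w web.pcap 'host 192.168.0.1 and (port 80 or port 443)'
# /usr/sbin/tcpdump -n -tttt -r web.pcap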
How to change the Linux hostname?
# hostname
test.com
# hostname server.com
# hostname
server.com
# vi /etc/hostname
server.com
Now restart and see the changes.
How to change MySql root password?
For every database server, you should change the root or sa password from the default, unless you want to get hacked. For MySQL, the administrative user is called root, and you use the mysqladmin utility from the command line to set the new password.
Syntax:
# mysqladmin -u root password "new_password"
# mysqladmin -u root -h host_name password "new_password"
Example:
# mysqladmin -u root password Pa55w0rD
# mysqladmin -u root -h localhost password linuxgEEks
You need to restart the database server after this change
# /etc/init.d/mysql restart
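To confirm the new password works, you can log in once with it; this quick check simply asks the server for its version:
# mysql -u root -p -e "SELECT VERSION();"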
How To Backup MySQL Database to a file?
Backing up your database is a very important system administration task, and should generally be run from a cron job at scheduled intervals. We will use the mysqldump utility included with mysql to dump the contents of the database to a text file that can be easily re-imported.
Syntax:
# mysqldump -h localhost -u root -pmypassword database_name > dumpfile_name.sql
Example:
# mysqldump -h localhost -u root -pPa55w0rD database110 > backup_file.sql
This will give you a text file containing all the commands required to re-create the database.
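To restore, feed the dump back into the mysql client (the database must already exist), and to automate the nightly backup you can add a crontab entry; the 2 AM schedule and the /backup path below are just placeholders:
# mysql -u root -pPa55w0rD database110 < backup_file.sql
0 2 * * * /usr/bin/mysqldump -u root -pPa55w0rD database110 > /backup/database110.sql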
How To Set SSH Login Message?
Setting an SSH login message is very easy.
# vi /etc/motd
Write a message of your own, for example:
######### Welcome to the SSH World #########
### This is the Email Server, please exit properly ###
########################################
Save and Quit
To check, quit the SSH session and log in again; you should see:
######### Welcome to the SSH World #########
### This is the Email Server, please exit properly ###
########################################
That's all, enjoy!
nmap in detail
nmap is a tool for checking the status of ports on any machine.
Example1 : To scan a particular system for open ports
#nmap hostname
Example2 : Scanning for a single port on a machine
#nmap -p 22 hostname
-p indicates port.
Example3 : For scanning only ports
#nmap -F hostname
-F is for fast scan; it scans only the most common ports and skips extras such as operating system and uptime detection.
Example4 : Scanning only TCP ports
#nmap -sT hostname
-sT performs a TCP connect scan, i.e. scans only TCP ports.
Example5 : Scanning only UDP ports
#nmap -sU hostname
-sU scans only UDP ports.
Example6 : Scan for ports and get the version of different services running on that machine
#nmap -sV hostname
-sV detects the version of each network service running on that host.
Example7 : Check which protocols are supported by the remote machine
#nmap -sO hostname
Example8 : Scan a system for operating system and uptime details
# nmap -O hostname
-O is for operating system scan along with default port scan
Example9 : Scan a network
#nmap networkID/subnetmask
For example:
#nmap x.x.x.x/24
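The options above can be combined. For instance, a hypothetical scan of the first 1024 TCP ports with service-version and operating-system detection across a whole /24 network (run as root) would look like this:
#nmap -sS -sV -O -p 1-1024 192.168.1.0/24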
Netstat in Linux
List all ports
# netstat -a | more
List all tcp ports using netstat -at
# netstat -at
List all udp ports using netstat -au
# netstat -au
List only listening ports
# netstat -l
List only listening TCP Ports using netstat -lt
# netstat -lt
List only listening UDP Ports using netstat -lu
# netstat -lu
List only the listening UNIX Ports using netstat -lx
# netstat -lx
Show statistics for all ports
# netstat -s
Show statistics for TCP/UDP ports
# netstat -st
# netstat -su
Display PID and program names
# netstat -pt
Don’t resolve host, port and user name
# netstat -an
Print netstat information continuously
# netstat -c
Find unsupported address families on your system
# netstat --verbose
Display the kernel routing information
# netstat -r
Find out on which port a program is running
# netstat -ap | grep ssh
Find out which process is using a particular port
# netstat -anp | grep ':80'
Show the list of network interfaces
# netstat -i
Display extended information on the interfaces
# netstat -ie
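A commonly used combination of the flags above lists all listening TCP and UDP sockets numerically together with the owning PID and program name:
# netstat -tulpn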
How to install SendmailAnalyzer in Linux (CentOS)?
SendmailAnalyzer works on any platform where Sendmail and Perl can run. A modern Perl distribution (5.8.x or later) is recommended, but older versions should also work.
Download sendmailanalyzer-x.x.tar.gz and perform the following operations
# tar -zxvf sendmailanalyzer-x.x.tar.gz
# cd sendmailanalyzer-x.x/
# perl Makefile.PL
# make && make install
Start SendmailAnalyzer daemon:
# /usr/local/sendmailanalyzer/sendmailanalyzer -f
Add the httpd configuration for SendmailAnalyzer
Alias /sareport /usr/local/sendmailanalyzer/www
<Directory /usr/local/sendmailanalyzer/www>
Options ExecCGI
AddHandler cgi-script .cgi
DirectoryIndex sa_report.cgi
Order deny,allow
Deny from all
Allow from 127.0.0.1
Allow from ::1
# Allow from .example.com
</Directory>
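After adding this block (for example in /etc/httpd/conf.d/sendmailanalyzer.conf; the exact file name is up to you), reload Apache so the alias takes effect:
# /etc/init.d/httpd reload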
Test:
http://server_ip_address/sareport
Additional tasks to be added in crontab
# SendmailAnalyzer log reporting daily cache
0 1 * * * /usr/local/sendmailanalyzer/sa_cache > /dev/null 2>&1
# On huge MTA you may want to have five minutes caching
#*/5 * * * * /usr/local/sendmailanalyzer/sa_cache -a > /dev/null 2>&1
Logrotate:
Edit /etc/logrotate.d/syslog to restart SendmailAnalyzer when maillog is rotated, or create a cron job for it.
For example:
/var/log/cron /var/log/debug /var/log/maillog /var/log/messages /var/log/secure /var/log/spooler /var/log/syslog
{
sharedscripts
postrotate
/bin/kill -HUP `cat /var/run/syslogd.pid 2>/dev/null` 2>/dev/null || true
/PATH_TO/rc.sendmailanalyzer restart >/dev/null 2>&1 || true
# or /etc/rc.d/init.d/sendmailanalyzer restart >/dev/null 2>&1 || true
endscript
}
How to install darkstat in Linux (CentOS)?
Darkstat - Web Based Network Traffic & Bandwidth Monitoring Tool on Linux
# yum install darkstat
# darkstat -i eth0
Test:
http://ip-address:667
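By default the web interface listens on port 667. Assuming you want a different port or bind address, darkstat accepts the -p and -b options, for example:
# darkstat -i eth0 -p 8081 -b 0.0.0.0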
How to install Monitorix in Linux (CentOS)?
Monitorix is a lightweight system monitoring tool that tracks the services and resources of a system. It is one of the simplest ways to keep track of system activity.
It can monitor system attributes like
-->System load
-->Active processes
-->Memory allocation
-->Kernel usage
-->Context switches and forks
-->VFS usage
-->Kernel usage per processor
-->Filesystems usage
-->Disk I/O activity
-->Inode usage
-->Time spent in I/O activity
-->Network traffic and usage
-->IPv4 states
-->IPv6 states
-->Active close
-->Passive close
-->UDP statistics
-->System services demand
-->IMAP and POP3 services
-->SMTP service
-->Network port traffic (Ports: 21, 22, 25, 80, 110, 139, 3306, 53, 143)
-->Users using the system
-->Devices interrupt activity
Installation Procedure
# yum install httpd rrdtool rrdtool-perl perl-libwww-perl perl-MailTools perl-MIME-Lite perl-CGI perl-DBI
Note: On updated systems these packages may not be enough; use the following command to install the additional packages required to support the configuration.
# yum -y install rrdtool rrdtool-perl perl-libwww-perl perl-MailTools perl-MIME-Lite perl-CGI perl-DBI perl-XML-Simple perl-Config-General perl-HTTP-Server-Simple perl-IO-Socket-SSL
Download monitorix and install
# rpm -ivh http://www.monitorix.org/monitorix-n.n.n-1.noarch.rpm
After a successful installation:
# service monitorix start
# chkconfig monitorix on    (add to startup)
Log file: /var/log/monitorix
Testing:
http://ip-address:8080/monitorix/
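If the page does not load from a remote browser, the monitoring port may be filtered by the firewall; a hypothetical iptables rule to open it on CentOS would be:
# iptables -I INPUT -p tcp --dport 8080 -j ACCEPT
# service iptables save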
That's all; comments and suggestions are welcome!
Download a whole website using wget
# wget -r --level=0 --convert-links --page-requisites --no-parent www.website.com
The wget options:
-r, --recursive : perform a recursive download
-l, --level=N : use 0 for infinite depth, or a number greater than 0 to limit the depth
-k, --convert-links : modify links inside downloaded files to point to the local copies
-p, --page-requisites : get all images, CSS and JS files that make up the web page
-np, --no-parent : don't download parent directory contents
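For a gentler mirror, the same options can be combined with a limited depth and a delay between requests; the depth and wait values below are just examples:
# wget -r --level=2 --convert-links --page-requisites --no-parent --wait=1 www.website.com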
Install Cacti in Linux
Cacti is a complete frontend to RRDTool; it stores all of the information needed to create graphs and populate them with data in a MySQL database.
We need the following software to install Cacti:
1) MySQL Server : Store cacti data
2) NET-SNMP server – SNMP (Simple Network Management Protocol) is a protocol used for network management.
3) PHP with net-snmp module – Access SNMP data using PHP.
4) Apache / lighttpd / nginx webserver : Web server to display graphs created with PHP and RRDTOOL.
Install the software
# yum install mysql-server mysql php-mysql php-pear php-common php-gd php-devel php php-mbstring php-cli php-snmp php-pear-Net-SMTP php-mysql httpd
Configure MySQL server
Setting up root password:-
# mysqladmin -u root password NEWPASSWORD
Create cacti MySQL database
# mysql -u root -p -e 'create database cacti'
Create a user named cacti with a password of your choice.
Login to mysql
# mysql -u root -p
mysql> GRANT ALL ON cacti.* TO cacti@localhost IDENTIFIED BY 'your password';
mysql> FLUSH privileges;
mysql> \q
Install snmpd
Type the following command to install net-snmpd
# yum install net-snmp-utils php-snmp net-snmp-libs
To configure snmpd, open the snmpd.conf configuration file.
# vi /etc/snmp/snmpd.conf
Modify it like the following:
com2sec local localhost public
group MyRWGroup v1 local
group MyRWGroup v2c local
group MyRWGroup usm local
view all included .1 80
access MyRWGroup "" any noauth exact all all none
syslocation Unknown (edit /etc/snmp/snmpd.conf)
syscontact Root (configure /etc/snmp/snmp.local.conf)
pass .1.3.6.1.4.1.4413.4.1 /usr/bin/ucd5820stat
Save and close the configuration file, then start the snmp service:
# /etc/init.d/snmpd start
# chkconfig snmpd on
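Before moving on, you can verify that snmpd answers queries with the community string configured above (public):
# snmpwalk -v 2c -c public localhost system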
Install cacti
Update the repository:
# rpm -Uvh http://download.fedora.redhat.com/pub/epel/5/x86_64/epel-release-5-4.noarch.rpm
# yum install cacti
Install cacti tables
Type the following command to find out cacti.sql path:
# rpm -ql cacti | grep cacti.sql
Sample output:
/usr/share/doc/cacti-0.8.7d/cacti.sql
Type the following command to install the cacti tables, using the cacti user and password:
# mysql -u cacti -p cacti < /usr/share/doc/cacti-0.8.7d/cacti.sql
Configure the cacti database settings in /var/www/cacti/include/config.php:
# vi /var/www/cacti/include/config.php
Modify the values as follows:
/* make sure these values reflect your actual database/host/user/password */
$database_type = "mysql";
$database_default = "cacti";
$database_hostname = "localhost";
$database_username = "cacti";
$database_password = "your password";
$database_port = "3306";
Configure httpd for cacti. Update the 'allow from' line, setting it to your LAN subnet to allow access to cacti.
Open /etc/httpd/conf.d/cacti.conf file
# vi /etc/httpd/conf.d/cacti.conf
Alias /cacti/ /var/www/cacti/
<Directory /var/www/cacti/>
DirectoryIndex index.php
Options -Indexes
AllowOverride all
order deny,allow
allow from 172.16.0.0/16 #your network address
AddType application/x-httpd-php .php
php_flag magic_quotes_gpc on
php_flag track_vars on
</Directory>
Restart the httpd
# /etc/init.d/httpd restart
Set up the cacti cron job
Open /etc/cron.d/cacti file
# vi /etc/cron.d/cacti
Uncomment the line:
*/5 * * * * cacti /usr/bin/php /usr/share/cacti/poller.php > /dev/null 2>&1
Save and close the file.
Cacti is now ready; to run it, open one of the following URLs in a browser:
http://server-IP-address/cacti/
or http://localhost/cacti
Note: The default username and password for cacti is admin / admin.
Monitor Network Switch and Ports Using Nagios
1. Enable switch.cfg in nagios.cfg
Uncomment the switch.cfg line in /usr/local/nagios/etc/nagios.cfg as shown below.
cfg_file=/usr/local/nagios/etc/objects/switch.cfg
2. Add new hostgroup for switches in switch.cfg
Add the following switches hostgroup to the /usr/local/nagios/etc/objects/switch.cfg file.
define hostgroup
{
hostgroup_name switches
alias Network Switches
}
3. Add a new host for the switch to be monitored
In this example, I’ve defined a host to monitor the core switch in the /usr/local/nagios/etc/objects/switch.cfg file. Change the address directive to your switch ip-address accordingly.
define host
{
use generic-switch
host_name core-switch
alias Cisco Core Switch
address 192.168.1.50
hostgroups switches
}
4. Add common services for all switches
Displaying the uptime of the switch and verifying that the switch is alive are common services for all switches. So, define these services under the switches hostgroup_name as shown below.
# Service definition to ping the switch using check_ping
define service
{
use generic-service
hostgroup_name switches
service_description PING
check_command check_ping!200.0,20%!600.0,60%
normal_check_interval 5
retry_check_interval 1
}
# Service definition to monitor switch uptime using check_snmp
define service
{
use generic-service
hostgroup_name switches
service_description Uptime
check_command check_snmp!-C public -o sysUpTime.0
}
5. Add service to monitor port bandwidth usage
check_local_mrtgtraf uses the Multi Router Traffic Grapher (MRTG), so you need to install MRTG for this to work properly. The *.log file mentioned below should point to the MRTG log file on your system.
define service
{
use generic-service
host_name core-switch
service_description Port 1 Bandwidth Usage
check_command check_local_mrtgtraf!/var/lib/mrtg/192.168.1.11_1.log!AVG!1000000,2000000!5000000,5000000!10
}
6. Add service to monitor an active switch port
Use check_snmp to monitor a specific port as shown below. The following two services monitor port 1 and port 5. To add additional ports, change the value of ifOperStatus.n accordingly; n is the port number.
# Monitor status of port number 1 on the Cisco core switch
define service
{
use generic-service
host_name core-switch
service_description Port 1 Link Status
check_command check_snmp!-C public -o ifOperStatus.1 -r 1 -m RFC1213-MIB
}
# Monitor status of port number 5 on the Cisco core switch
define service
{
use generic-service
host_name core-switch
service_description Port 5 Link Status
check_command check_snmp!-C public -o ifOperStatus.5 -r 1 -m RFC1213-MIB
}
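You can test a port-status check by hand from the Nagios server before reloading; the plugin path below assumes the default Nagios plugin location:
# /usr/local/nagios/libexec/check_snmp -H 192.168.1.50 -C public -o ifOperStatus.1 -r 1 -m RFC1213-MIB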
7. Add services to monitor multiple switch ports together
Sometimes you may need to monitor the status of multiple ports combined, i.e. Nagios should send you an alert even if only one of the ports is down. In this case, define the following service to monitor multiple ports.
# Monitor ports 1 - 6 on the Cisco core switch.
define service
{
use generic-service
host_name core-switch
service_description Ports 1-6 Link Status
check_command check_snmp!-C public -o ifOperStatus.1 -r 1 -m RFC1213-MIB, -o ifOperStatus.2 -r 1 -m RFC1213-MIB, -o ifOperStatus.3 -r 1 -m RFC1213-MIB, -o ifOperStatus.4 -r 1 -m RFC1213-MIB, -o ifOperStatus.5 -r 1 -m RFC1213-MIB, -o ifOperStatus.6 -r 1 -m RFC1213-MIB
}
8. Validate configuration and restart nagios
Verify the nagios configuration to make sure there are no warnings and errors.
# /usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg
Total Warnings: 0
Total Errors: 0
Things look okay - No serious problems were detected during the pre-flight check
Restart the nagios server to start monitoring the switch.
# /etc/rc.d/init.d/nagios stop
Stopping nagios: .done.
# /etc/rc.d/init.d/nagios start
Starting nagios: done.
Check http://Your-server-ip/nagios or http://localhost/nagios in a browser.
Network Related Commands
# dhclient eth0
activate interface 'eth0' in DHCP mode
# ethtool eth0
show settings and link status of the eth0 network card
# host www.example.com
look up a hostname to resolve a name to an IP address and vice versa
# hostname
show hostname of system
# ifconfig eth0
show configuration of an ethernet network card
# ifconfig eth0 192.168.1.1 netmask 255.255.255.0
configure IP Address
# ifconfig eth0 promisc
configure 'eth0' in promiscuous mode to gather packets (sniffing)
# ifdown eth0
disable an interface 'eth0'
# ifup eth0
activate an interface 'eth0'
# ip link show
show link status of all network interfaces
# iwconfig eth1
show the configuration of wireless interface 'eth1'
# iwlist scan
wifi scanning to display the wireless connections available
# mii-tool eth0
show link status of 'eth0'
# netstat -tup
show all active network connections and their PID
# netstat -tupl
show all network services listening on the system and their PID
# netstat -rn
show the routing table (like "route -n")
# nslookup www.example.com
look up a hostname to resolve a name to an IP address and vice versa
# route -n
show routing table
# route add -net 0/0 gw IP_Gateway
configure default gateway
# route add -net 192.168.0.0 netmask 255.255.0.0 gw 192.168.1.1
configure static route to reach network '192.168.0.0/16'
# route del 0/0 gw IP_gateway
remove static route
# echo "1" > /proc/sys/net/ipv4/ip_forward
enable IP forwarding (routing)
# tcpdump tcp port 80
show all HTTP traffic
# whois www.example.com
lookup on Whois database
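On newer systems the ifconfig/route commands above are gradually being replaced by the ip utility; the equivalent queries are:
# ip addr show
show addresses of all interfaces (modern replacement for ifconfig)
# ip route show
show the routing table (modern replacement for route -n)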
User and Group Related Commands
# chage -E 2005-12-31 user1
set an expiry date for the user account
# groupadd [group-name]
create a new group
# groupdel [group-name]
delete a group
# groupmod -n moon sun
rename a group from moon to sun
# grpck
check correct syntax and file format of '/etc/group' and groups existence
# newgrp - [group-name]
log into a new group to change default group of newly created files
# passwd
change password
# passwd user1
change a user password (only by root)
# pwck
check correct syntax and file format of '/etc/passwd' and users existence
# useradd -c "User Linux" -g admin -d /home/user1 -s /bin/bash user1
create a new user "user1" belongs "admin" group
# useradd user1
create a new user
# userdel -r user1
delete a user ('-r' also removes the home directory)
# usermod -c "User FTP" -g system -d /ftp/user1 -s /bin/nologin user1
change user attributes
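To review the current password-aging settings for an account (useful after the chage command above), run:
# chage -l user1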
ls commands
List SCSI devices (or hosts) and their attributes under Linux operating systems
# lsscsi -g
Use this command to list block devices
# lsblk
To see file system type
# lsblk -f
To output info about permissions
# lsblk -m
Use this command to see Linux distribution-specific information:
# lsb_release
# lsb_release -a
Use this command to see USB buses in the Linux based system and the devices connected to them
# lsusb
The lscpu command shows CPU architecture information such as the number of CPUs, threads, and cores
# lscpu
lspci command shows information about PCI buses in the system and devices connected
# lspci
lspci can also be used to find out whether a given PCI hardware device is present
# lspci | grep VT6120
lshw command finds detailed information about the hardware configuration
# lshw / lshw-gtk
Use ls command to list directory contents
# ls
# ls -l ## long format
# ls -F ## appends a character revealing the nature of a file
# ls -a ## Show all files including hidden files
# ls -R ## recursively lists subdirectories
# ls -d ## Get info about a symbolic link or directory
# ls -t ## Sort the list of files by modification time
# ls -h ## Show sizes in human readable format
# ls -B ## In directories, ignore files that end with ‘~’ (backup files)
# ls -Z ## Display the SELinux security context
# ls --group-directories-first -l ## Show directories first (group directories). Useful on server.
# ls --color ## Colorize the # ls output
# ls --hide='*.txt' -l ## Hide or ignore files whose names ends with .txt
Use the lsof command to list open files, network ports, and active processes
#lsof | less
List all open files
#lsof -u vivek -i
See all files opened by user "vivek"
#lsof -i 4 -a -p 7007
List all open IPv4 network files in use by the process whose PID is 7007
#lsof -i TCP:80
Find process running on tcp port 80
#lsof -i 6
List only open IPv6 network files
#lsof -i 4
List only open IPv4 network files
#lsof -i TCP:1-1024
List process open in port range 1 to 1024
#lsof -i @server.host.example:1200-1205
List all files using any protocol on ports 1200 to 1205 of host server.host.example, use
#lsof /dev/sr0
List all open files on device /dev/sr0
#lsof /dev/dvd
Find out why my DVD drive does not eject?
#lsof -i -u^root
See all network files opened by users other than root
#lsof /etc/foobar
Find out who's looking at the /etc/foobar file?
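lsof can also be pointed at a whole directory tree; for example, to see every process holding files open under /var/log (a hypothetical path):
#lsof +D /var/log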
Use lsattr to list the file attributes on a second extended (ext2) file system
# lsattr /etc/passwd
Use lshal command to display items in the HAL (Hardware Abstraction Layer)
# lshal | less
Use this command to show the content of given initramfs images
# lsinitramfs /boot/initrd.img
Use this command to list all device driver loaded currently in the Linux Kernel
# lsmod
See information about the PCMCIA sockets and devices
# lspcmcia
Use this command to list all locks associated with local files on the system
# lslk
Use this command to display the number of messages in a mailbox
# lsmbox
Text Manipulating Commands in Linux
# cat example.txt | awk 'NR%2==1'
remove all even lines from example.txt
# echo a b c | awk '{print $1}'
view the first column of a line
# echo a b c | awk '{print $1,$3}'
view the first and third column of a line
# cat -n file1
number the rows of a file
# comm -1 file1 file2
compare two files, suppressing the lines unique to 'file1'
# comm -2 file1 file2
compare two files, suppressing the lines unique to 'file2'
# comm -3 file1 file2
compare two files, suppressing the lines that appear in both files
# diff file1 file2
find differences between two files
# grep Aug /var/log/messages
look up words "Aug" on file '/var/log/messages'
# grep ^Aug /var/log/messages
look up words that begin with "Aug" on file '/var/log/messages'
# grep [0-9] /var/log/messages
select from file '/var/log/messages' all lines that contain numbers
# grep Aug -R /var/log/*
search string "Aug" at directory '/var/log' and below
# paste file1 file2
merging contents of two files for columns
# paste -d '+' file1 file2
merging contents of two files for columns with '+' delimiter on the center
# sdiff file1 file2
find differences between two files and merge interactively alike "diff"
# sed 's/string1/string2/g' example.txt
replace "string1" with "string2" in example.txt
# sed '/^$/d' example.txt
remove all blank lines from example.txt
# sed '/ *#/d; /^$/d' example.txt
remove comments and blank lines from example.txt
# sed -e '1d' example.txt
delete the first line of example.txt
# sed -n '/string1/p' example.txt
view only the lines that contain the word "string1"
# sed -e 's/ *$//' example.txt
remove empty characters at the end of each row
# sed -e 's/string1//g' example.txt
remove only the word "string1" from the text and leave everything else intact
# sed -n '1,5p' example.txt
print rows 1 through 5 of example.txt
# sed -n '5p;5q' example.txt
print row number 5 of example.txt
# sed -e 's/00*/0/g' example.txt
replace runs of zeros with a single zero
# sort file1 file2
sort contents of two files
# sort file1 file2 | uniq
sort the contents of two files, omitting repeated lines
# sort file1 file2 | uniq -u
sort the contents of two files, showing only unique lines
# sort file1 file2 | uniq -d
sort the contents of two files, showing only duplicate lines
# echo 'word' | tr '[:lower:]' '[:upper:]'
convert from lower case to upper case
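These tools combine well in pipelines. Assuming a traditional syslog layout where the fifth field is the program name, the following counts the programs that logged the most messages in August:
# grep ^Aug /var/log/messages | awk '{print $5}' | sort | uniq -c | sort -rn | head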
Install proftpd in Linux
1. Download the proftpd rpm package from http://rpm.pbone.net
# wget ftp://ftp.pbone.net/mirror/centos.karan.org/el5/extras/testing/x86_64/RPMS/proftpd-1.3.1-3.el5.kb.x86_64.rpm
2. Install rpm package
# rpm -i proftpd-1.3.1-3.el5.kb.x86_64.rpm
3. Use ftpasswd to create a user and group for FTP login (see the ftpasswd manual for full details).
Add users
# mkdir /etc/proftpd
# ftpasswd --passwd --file=/etc/proftpd/passwd --name=bob --uid=1001 --home=/home/bob --shell=/bin/false
Add group
# ftpasswd --group --file=/etc/proftpd/group --name=group-name --gid=group-id --member=user-member1 --member=user-member2 ... --member=user-memberN
4. Edit /etc/proftpd.conf file
AuthUserFile /etc/proftpd/passwd
AuthGroupFile /etc/proftpd/group
#Disable PAM authentication
#AuthPAMConfig proftpd
#AuthOrder mod_auth_pam.c* mod_auth_unix.c
AuthPAM off
5. Start the proftpd service and add it to the startup list.
# /etc/init.d/proftpd start
# chkconfig proftpd on
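A quick way to confirm the server accepts logins is to connect with a command-line FTP client as the user created above (bob in this example), assuming an ftp client is installed:
# ftp localhost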
Mount Linux partition in Windows
Ext2Fsd is free software for mounting a Linux partition on a Windows system. It is easy to install and use: just install it and, with its friendly interface, you can mount the partition painlessly.
Package Auto Update Notifications
Install apticron
Type the following command at a shell prompt:
# apt-get update
# apt-get install apticron
Configure apticron to send email notifications
The default configuration file is located at /etc/apticron/apticron.conf. Open it in a text editor:
# vi /etc/apticron/apticron.conf
Set the email address to which notifications should be sent:
EMAIL="your_email@domain.com"
Sample configuration file:
# apticron.conf
#
# set EMAIL to a list of addresses which will be notified of impending updates
#
EMAIL="admin@myhost.com"
#
# Set LISTCHANGES_PROFILE if you would like apticron to invoke apt-listchanges
# with the --profile option. You should add a corresponding profile to
# /etc/apt/listchanges.conf
#
# LISTCHANGES_PROFILE="apticron"
#
# Set SYSTEM if you would like apticron to use something other than the output
# of "hostname -f" for the system name in the mails it generates
#
# SYSTEM="foobar.example.com"
#
# Set IPADDRESSNUM if you would like to configure the maximal number of IP
# addresses apticron displays. The default is to display 1 address of each
# family type (inet, inet6), if available.
#
# IPADDRESSNUM="1"
#
# Set IPADDRESSES to a whitespace seperated list of reachable addresses for
# this system. By default, apticron will try to work these out using the
# "ip" command
#
# IPADDRESSES="192.10.2.1 2001:db8:1:2:3::1"
Save and close the file. /etc/cron.daily/apticron is the cron script that runs apticron daily; it will send you a notification when updates are available.
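To verify the setup without waiting for the daily cron run, apticron can also be invoked by hand (assuming the default install path):
# /usr/sbin/apticron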
SSH Manipulations
SSH Banner Message
Login as root and edit ssh config file
# vi /etc/ssh/sshd_config
Find this variable in the config file
# Banner /some/locations/file
Uncomment it and save the file
Restart openssh server
# /etc/init.d/ssh restart
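As a worked example (the banner path and text are arbitrary choices, not defaults), you could create the banner file yourself and point the Banner directive at it before restarting sshd:
# echo "Authorized access only. All activity may be monitored." > /etc/ssh/banner
Then set in /etc/ssh/sshd_config:
Banner /etc/ssh/banner
# /etc/init.d/ssh restart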
SSH Timeout
Set an idle shell timeout of 300 seconds so inactive sessions are logged out automatically:
echo "TMOUT=300" >> /etc/bashrc
echo "readonly TMOUT" >> /etc/bashrc
echo "export TMOUT" >> /etc/bashrc
Extract a single file from single tar ball
Extracting Specific Files
Extract a file called etc/default/sysstat from the config.tar.gz tarball (list the archive first to find the exact path):
#tar -ztvf config.tar.gz
#tar -zxvf config.tar.gz etc/default/sysstat
#tar -xvf {tarball.tar} {path/to/file}
This is also valid
#tar --extract --file={tarball.tar} {file}
Extract a directory called css from cbz.tar
#tar --extract --file=cbz.tar css
Wildcard based extracting
You can also extract those files that match a specific globbing pattern (wildcards). For example, to extract from cbz.tar all files that begin with pic, no matter their directory prefix, you could type:
#tar -xf cbz.tar --wildcards --no-anchored 'pic*'
To extract all php files, enter
#tar -xf cbz.tar --wildcards --no-anchored '*.php'
Where,
-x: instructs tar to extract files.
-f: specifies filename / tarball name.
-v: Verbose (show progress while extracting files).
-j : filter archive through bzip2, use to decompress .bz2 files.
-z: filter archive through gzip, use to decompress .gz files.
--wildcards: instructs tar to treat command line arguments as globbing patterns.
--no-anchored: informs it that the patterns apply to member names after any / delimiter.
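Building on the options above, an archive member can also be extracted into a different directory with -C; the paths below simply reuse the earlier hypothetical example:
#tar -zxvf config.tar.gz -C /tmp etc/default/sysstat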
Tar listing
The tar command can list the files inside a compressed tarball. In addition, the mtools package includes a command called lz which gunzips and shows a listing of a gzip'd tar archive without extracting any files.
For example, display listing of file called backup.tar.gz type command:
#lz backup.tar.gz
As you can see, lz lists a gzip'd tar archive, that is, a tar archive compressed with the gzip command. It is not strictly necessary on Debian GNU/Linux (or other Linux/BSD/Solaris OSes), because the GNU tar(1) program provides the same capability with the command:
#tar -tzf backup.tar.gz
Locking and Unlocking User Accounts in Linux
To lock an account, use the following command
# passwd -l username
To unlock the same account
# passwd -u username
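To verify the result, passwd can report the account's password status; the status field shows a locked indicator (for example L or LK, depending on the distribution) for a locked account and P or PS for a usable password. The username below is a placeholder:
# passwd -S username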
Creating command Alias in Linux
Creating aliases is very easy. You can either enter them at the command line as you're working, or more likely, you'll put them in one of your startup files, like your .bashrc file, so they will be available every time you log in.
I created the l alias above by entering the following command into my .bashrc file:
alias l="ls -al"
As you can see, the syntax is very easy:
1. Start with the alias command
2. Then type the name of the alias you want to create
3. Then an = sign, with no spaces on either side of the =
4. Then type the command (or commands) you want your alias to execute when it is run. This can be a simple command, or can be a powerful combination of commands.
Sample aliases example
To get you going, here is a list of sample aliases I use all the time. I've pretty much just copied them here from my .bashrc file:
alias l="ls -al"
alias lm="ls -al|more"
alias html="cd /web/apache/htdocs/devdaily/html"
alias logs="cd /web/apache/htdocs/devdaily/logs"
alias qp="ps auxwww|more"
alias nu="who|wc -l"
alias aug="ls -al|grep Sep|grep -v 2010"
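A quick way to try out a new alias (the alias name and command below are just examples) is to define it at the prompt first:
alias ports="netstat -tulanp"
After adding it to your ~/.bashrc, reload the file so it takes effect in the current shell, and run alias with no arguments to list everything currently defined:
source ~/.bashrc
alias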
Ubuntu: Very useful Commands
Command Privileges
sudo command – run command as root
sudo su – open a root shell
sudo su user – open a shell as another user
sudo -k – forget sudo's cached password
gksudo command – graphical sudo dialog (GNOME)
kdesudo command – graphical sudo dialog (KDE)
sudo visudo – edit /etc/sudoers
gksudo nautilus – file manager as root (GNOME)
kdesudo konqueror – file manager as root (KDE)
passwd – change your password
Network Commands
ifconfig – show network interface information
iwconfig – show wireless interface information
sudo iwlist scan – scan for wireless networks
sudo /etc/init.d/networking restart – restart networking
(file) /etc/network/interfaces – manual network configuration
ifup interface – bring interface online
ifdown interface – take interface down
Display Commands
sudo /etc/init.d/gdm restart – restart X (GNOME)
sudo /etc/init.d/kdm restart – restart X (KDE)
(file) /etc/X11/xorg.conf – display configuration
sudo dpkg-reconfigure -phigh xserver-xorg – reset X configuration
Ctrl+Alt+Bksp – restart the X display if frozen
Ctrl+Alt+FN – switch to tty N
Ctrl+Alt+F7 – switch back to X display
System Service Commands
start service – start a service (Upstart)
stop service – stop a service (Upstart)
status service – check whether a service is running (Upstart)
/etc/init.d/service start – start a service (SysV)
/etc/init.d/service stop – stop a service (SysV)
/etc/init.d/service status – check a service (SysV)
/etc/init.d/service restart – restart a service (SysV)
runlevel – get current runlevel
Firewall Commands (a worked example appears at the end of this command reference)
ufw enable – turn on the firewall
ufw disable – turn off the firewall
ufw default allow – allow all connections by default
ufw default deny – drop all connections by default
ufw status – show current status and rules
ufw allow port – allow traffic on a port
ufw deny port – block a port
ufw deny from ip – block an IP address
System Commands
lsb_release -a – show the Ubuntu version
uname -r – show the kernel version
uname -a – show all kernel information
Package Manager Commands
apt-get update – refresh the list of available packages and updates
apt-get upgrade – upgrade all packages
apt-get dist-upgrade – distribution upgrade (handles changed dependencies)
apt-get install pkg – install pkg
apt-get remove pkg – uninstall pkg
apt-get autoremove – remove obsolete packages
apt-get -f install – try to fix broken packages
dpkg --configure -a – try to fix a broken package
dpkg -i pkg.deb – install the file pkg.deb
(file) /etc/apt/sources.list – list of APT repositories
Special Packages
ubuntu-desktop – the standard Ubuntu desktop environment
kubuntu-desktop – KDE desktop
xubuntu-desktop – XFCE desktop
ubuntu-minimal – minimal core of Ubuntu
ubuntu-standard – the Ubuntu standard system utilities
ubuntu-restricted-extras – non-free but useful packages
kubuntu-restricted-extras – the same, for KDE
xubuntu-restricted-extras – the same, for XFCE
build-essential – packages needed to compile software
linux-image-generic – latest generic kernel image
linux-headers-generic – latest generic kernel headers
Application Commands
nautilus – File Manager (GNOME)
dolphin – File Manager (KDE)
konqueror – Web browser (KDE)
kate – text editor (KDE)
gedit – text editor (GNOME)
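Drawing on the firewall commands listed above, here is a minimal deny-by-default setup; port 22 is only an example and assumes SSH should stay reachable:
sudo ufw default deny
sudo ufw allow 22
sudo ufw enable
sudo ufw status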
Installing GRUB using grub-install
In order to install GRUB under a UNIX-like OS (such as GNU/Linux), invoke the program grub-install as the superuser (root).
The usage is basically very simple. You only need to specify one argument to the program, namely, where to install the boot loader. The argument has to be a device file (like '/dev/hda'). For example, under Linux the following will install GRUB into the MBR of the first IDE disk:
# grub-install /dev/hda
Likewise, under GNU/Hurd, this has the same effect:
# grub-install /dev/hd0
But all the above examples assume that GRUB should put images under the /boot directory. If you want GRUB to put images under a directory other than /boot, you need to specify the option --boot-directory. The typical usage is that you create a GRUB boot floppy with a filesystem. Here is an example:
# mke2fs /dev/fd0
# mount -t ext2 /dev/fd0 /mnt
# mkdir /mnt/boot
# grub-install --boot-directory=/mnt/boot /dev/fd0
# umount /mnt
Some BIOSes have a bug of exposing the first partition of a USB drive as a floppy instead of exposing the USB drive as a hard disk (they call it “USB-FDD” boot). In such cases, you need to install like this:
# losetup /dev/loop0 /dev/sdb1
# mount /dev/loop0 /mnt/usb
# grub-install --boot-directory=/mnt/usb/bugbios --force --allow-floppy /dev/loop0
This install doesn't conflict with standard install as long as they are in separate directories.
Note that grub-install is actually just a shell script and the real task is done by grub-mkimage and grub-setup. Therefore, you may run those commands directly to install GRUB, without using grub-install. Don't do that, however, unless you are very familiar with the internals of GRUB. Installing a boot loader on a running OS may be extremely dangerous.
Windows: control panel shortcuts
Accessibility Options........................access.cpl
Add New Hardware ........................sysdm.cpl
Add/Remove Programs ........................appwiz.cpl
Date/Time Properties ........................timedate.cpl
Display Properties ........................desk.cpl
FindFast ........................findfast.cpl
Fonts Folder ........................fonts
Internet Properties ........................inetcpl.cpl
Joystick Properties ........................joy.cpl
Keyboard Properties ........................main.cpl keyboard
Microsoft Exchange ........................mlcfg32.cpl
Microsoft Mail Post Office...................wgpocpl.cpl
Modem Properties ........................modem.cpl
Mouse Properties ........................main.cpl
Multimedia Properties........................mmsys.cpl
Network Properties ........................netcpl.cpl
Password Properties ........................password.cpl
PC Card ........................main.cpl pc card (PCMCIA)
Power Management.............................main.cpl power
Power Management.............................powercfg.cpl
Printers Folder ........................printers
Regional Settings ........................intl.cpl
Scanners and Cameras ........................sticpl.cpl
Sound Properties ........................mmsys.cpl sounds
System Properties ........................sysdm.cpl
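Any of these .cpl applets can be launched directly from the Run dialog or a command prompt; appwiz.cpl is used below only as an illustration:
control appwiz.cpl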
Exim Mail Commands in Details
Print a count of the messages in the queue:
[root@localhost]# exim -bpc
Print a listing of the messages in the queue (time queued, size, message-id, sender, recipient):
[root@localhost]# exim -bp
Print a summary of messages in the queue (count, volume, oldest, newest, domain, and totals):
[root@localhost]# exim -bp | exiqsumm
Print what Exim is doing right now:
[root@localhost]# exiwhat
Test how exim will route a given address:
[root@localhost]# exim -bt alias@localdomain.com
user@thishost.com
<-- alias@localdomain.com
router = localuser, transport = local_delivery
[root@localhost]# exim -bt user@thishost.com
user@thishost.com
router = localuser, transport = local_delivery
[root@localhost]# exim -bt user@remotehost.com
router = lookuphost, transport = remote_smtp
host mail.remotehost.com [1.2.3.4] MX=0
Run a pretend SMTP transaction from the command line, as if it were coming from the given IP address. This will display Exim's checks, ACLs, and filters as they are applied. The message will NOT actually be delivered.
[root@localhost]# exim -bh 192.168.11.22
Display all of Exim's configuration settings:
[root@localhost]# exim -bP
Searching the queue with exiqgrep
Exim includes a utility that is quite nice for grepping through the queue, called exiqgrep. Learn it. Know it. Live it. If you're not using this, and if you're not familiar with the various flags it uses, you're probably doing things the hard way, like piping `exim -bp` into awk, grep, cut, or `wc -l`. Don't make life harder than it already is.
First, various flags that control what messages are matched. These can be combined to come up with a very particular search.
Use -f to search the queue for messages from a specific sender:
[root@localhost]# exiqgrep -f [luser]@domain
Use -r to search the queue for messages for a specific recipient/domain:
[root@localhost]# exiqgrep -r [luser]@domain
Use -o to print messages older than the specified number of seconds. For example, messages older than 1 day:
[root@localhost]# exiqgrep -o 86400 [...]
Use -y to print messages that are younger than the specified number of seconds. For example, messages less than an hour old:
[root@localhost]# exiqgrep -y 3600 [...]
Use -s to match the size of a message with a regex. For example, 700-799 bytes:
[root@localhost]# exiqgrep -s '^7..$' [...]
Use -z to match only frozen messages, or -x to match only unfrozen messages.
There are also a few flags that control the display of the output.
Use -i to print just the message-id as a result of one of the above two searches:
[root@localhost]# exiqgrep -i [ -r | -f ] ...
Use -c to print a count of messages matching one of the above searches:
[root@localhost]# exiqgrep -c ...
Print just the message-id of the entire queue:
[root@localhost]# exiqgrep -i
Managing the queue
The main exim binary (/usr/sbin/exim) is used with various flags to make things happen to messages in the queue. Most of these require one or more message-IDs to be specified in the command line, which is where `exiqgrep -i` as described above really comes in handy.
Start a queue run:
[root@localhost]# exim -q -v
Start a queue run for just local deliveries:
[root@localhost]# exim -ql -v
Remove a message from the queue:
[root@localhost]# exim -Mrm <message-id> [ <message-id> ... ]
Freeze a message:
[root@localhost]# exim -Mf <message-id> [ <message-id> ... ]
Thaw a message:
[root@localhost]# exim -Mt <message-id> [ <message-id> ... ]
Deliver a message, whether it's frozen or not, whether the retry time has been reached or not:
[root@localhost]# exim -M <message-id> [ <message-id> ... ]
Deliver a message, but only if the retry time has been reached:
[root@localhost]# exim -Mc <message-id> [ <message-id> ... ]
Force a message to fail and bounce as "cancelled by administrator":
[root@localhost]# exim -Mg <message-id> [ <message-id> ... ]
Remove all frozen messages:
[root@localhost]# exiqgrep -z -i | xargs exim -Mrm
Remove all messages older than five days (86400 * 5 = 432000 seconds):
[root@localhost]# exiqgrep -o 432000 -i | xargs exim -Mrm
Freeze all queued mail from a given sender:
[root@localhost]# exiqgrep -i -f luser@example.tld | xargs exim -Mf
View a message's headers:
[root@localhost]# exim -Mvh <message-id>
View a message's body:
[root@localhost]# exim -Mvb <message-id>
View a message's logs:
[root@localhost]# exim -Mvl <message-id>
Add a recipient to a message:
[root@localhost]# exim -Mar <message-id> <address> [ <address> ... ]
Edit the sender of a message:
[root@localhost]# exim -Mes <message-id> <address>
Searching the logs with exigrep
The exigrep utility (not to be confused with exiqgrep) is used to search an exim log for a string or pattern. It will print all log entries with the same internal message-id as those that matched the pattern, which is very handy since any message will take up at least three lines in the log. exigrep will search the entire content of a log entry, not just particular fields.
One can search for messages sent from a particular IP address:
[root@localhost]# exigrep '<= .* \[12.34.56.78\] ' /path/to/exim_log
Search for messages sent to a particular IP address:
[root@localhost]# exigrep '=> .* \[12.34.56.78\]' /path/to/exim_log
This example searches for outgoing messages, which have the "=>" symbol, sent to "user@domain.tld". The pipe to grep for the "<=" symbol will match only the lines with information on the sender - the From address, the sender's IP address, the message size, the message ID, and the subject line if you have enabled logging the subject. The purpose of doing such a search is that the desired information is not on the same log line as the string being searched for.
[root@localhost]# exigrep '=> .*user@domain.tld' /path/to/exim_log | fgrep '<='
Generate and display Exim stats from a logfile:
[root@localhost]# eximstats /path/to/exim_mainlog
Same as above, with less verbose output:
[root@localhost]# eximstats -ne -nr -nt /path/to/exim_mainlog
Same as above, for one particular day:
[root@localhost]# fgrep YYYY-MM-DD /path/to/exim_mainlog | eximstats
To delete all queued messages containing a certain string in the body:
[root@localhost]# grep -lr 'a certain string' /var/spool/exim/input/ | \
sed -e 's/^.*\/\([a-zA-Z0-9-]*\)-[DH]$/\1/g' | xargs exim -Mrm
Note that the above only delves into /var/spool/exim in order to grep for queue files with the given string, and that's just because exiqgrep doesn't have a feature to grep the actual bodies of messages. If you are deleting these files directly, YOU ARE DOING IT WRONG! Use the appropriate exim command to properly deal with the queue.
If you have to feed many, many message-ids (such as the output of an `exiqgrep -i` command that returns a lot of matches) to an exim command, you may exhaust the limit of your shell's command line arguments. In that case, pipe the listing of message-ids into xargs to run only a limited number of them at once. For example, to remove thousands of messages sent from hero@linux-geek.com:
[root@localhost]# exiqgrep -i -f '<hero@linux-geek.com>' | xargs exim -Mrm
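Combining the exiqgrep and exim flags above, one could, for example, remove every queued message addressed to a particular domain (the domain below is a placeholder):
[root@localhost]# exiqgrep -i -r '@example.tld' | xargs exim -Mrm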
Deleting mail from the mail queue in sendmail
Sendmail does not provide a command-line argument to remove messages from the mail queue. It may be necessary to manually remove messages from the mail queue rather than allowing Sendmail to attempt redelivery of messages for Timeout.queuereturn days (5, by default).
The proper way to remove messages from the mail queue is to use the qtool.pl program included in the contrib subdirectory of the Sendmail source code distribution. qtool.pl uses the same file locking mechanism as Sendmail.
Removing "double bounce" messages
The following is a Perl script that calls /usr/local/bin/qtool.pl to remove "double bounce" messages. A "double bounce" is a message that is addressed to a non-existent user and that is sent from an invalid return address. Busy mail relays often have hundreds to thousands of these messages.
The script below will delete a queued message if it is (1) "deferred" (unable to be returned to the sender), (2) being sent from our postmaster email address, and (3) the subject is unique to delivery failure notifications.
#!/usr/bin/perl
use strict;
my $qtool = "/usr/local/bin/qtool.pl";
my $mqueue_directory = "/var/spool/mqueue";
my $messages_removed = 0;
use File::Find;
# Recursively find all files and directories in $mqueue_directory
find(\&wanted, $mqueue_directory);
sub wanted {
# Is this a qf* file?
if ( /^qf(\w{14})/ ) {
my $qf_file = $_;
my $queue_id = $1;
my $deferred = 0;
my $from_postmaster = 0;
my $delivery_failure = 0;
my $double_bounce = 0;
open (QF_FILE, $_);
while(<QF_FILE>) {
$deferred = 1 if ( /^MDeferred/ );
$from_postmaster = 1 if ( /^S<>$/ );
$delivery_failure = 1 if
( /^H\?\?Subject: DELIVERY FAILURE: (User|Recipient)/ );
if ( $deferred && $from_postmaster && $delivery_failure ) {
$double_bounce = 1;
last;
}
}
close (QF_FILE);
if ($double_bounce) {
print "Removing $queue_id...\n";
system "$qtool", "-d", $qf_file;
$messages_removed++;
}
}
}
print "\n$messages_removed total \"double bounce\" message(s) removed from ";
print "mail queue.\n";
Queued mail by domain
The following Perl script will show all queued mail by domain. A message may be counted more than once if it has multiple envelope recipients from different domains.
#!/usr/bin/perl
use strict;
my $mqueue_directory = "/var/spool/mqueue";
my %occurrences;
use File::Find;
# Recursively find all files and directories in $mqueue_directory
find(\&wanted, $mqueue_directory);
sub wanted {
# Is this a qf* file?
if ( /^qf\w{14}/ ) {
open (QF_FILE, $_);
while(<QF_FILE>) {
# Lines beginning with R contain an envelope recipient
if ( /^R.*:<.*\@(.*)>$/ ) {
my $domain = lc($1);
# Add 1 to the %occurrences hash
$occurrences{$domain}++;
}
}
}
}
# Subroutine to sort hash by ascending value
sub hashValueAscendingNum {
$occurrences{$a} <=> $occurrences{$b};
}
# Print sorted results
foreach my $key (sort hashValueAscendingNum (keys(%occurrences))) {
print "$occurrences{$key} $key\n";
}
Removing mail by domain
The following Perl script will remove all mail in the mail queue addressed to domain. Messages with multiple envelope recipients to different domains will not be deleted.
#!/usr/bin/perl
use strict;
# Exit immediately if domain was not specified as command-line argument
if (!(defined($ARGV[0]))) {
(my $basename = $0) =~ s!^.*/!!;
print "Usage: $basename domain\n";
exit 1;
}
# Convert domain supplied as command-line argument to lowercase
my $domain_to_remove = lc($ARGV[0]);
my $qtool = "/usr/local/bin/qtool.pl";
my $mqueue_directory = "/var/spool/mqueue";
my $messages_removed = 0;
use File::Find;
# Recursively find all files and directories in $mqueue_directory
find(\&wanted, $mqueue_directory);
sub wanted {
# Is this a qf* file?
if ( /^qf\w{14}/ ) {
my $QF_FILE = $_;
my $envelope_recipients = 0;
my $match = 1;
open (QF_FILE, $_);
while(<QF_FILE>) {
# If any of the envelope recipients contain a domain other than
# $domain_to_remove, do not match the message
if ( /^R.*:<.*\@(.*)>$/ ) {
my $recipient_domain = lc($1);
$envelope_recipients++;
if ($recipient_domain ne $domain_to_remove) {
$match = 0;
last;
}
}
}
close (QF_FILE);
# $QF_FILE may not contain an envelope recipient at the time it is opened
# and read. Do not match $QF_FILE in that case.
if ($match == 1 && $envelope_recipients != 0) {
print "Removing $QF_FILE...\n";
system "$qtool", "-d", $QF_FILE;
$messages_removed++;
}
}
}
print "$messages_removed total message(s) removed from mail queue.\n";
Queued mail by email address
The following Perl script will show all queued mail by email address. A message may be counted more than once if it has multiple envelope recipients.
#!/usr/bin/perl
use strict;
my $mqueue_directory = "/var/spool/mqueue";
my %occurrences;
use File::Find;
# Recursively find all files and directories in $mqueue_directory
find(\&wanted, $mqueue_directory);
sub wanted {
# Is this a qf* file?
if ( /^qf\w{14}/ ) {
open (QF_FILE, $_);
while(<QF_FILE>) {
# Lines beginning with R contain an envelope recipient
if ( /^R.*:<(.*)>$/ ) {
my $domain = lc($1);
# Add 1 to the %occurrences hash
$occurrences{$domain}++;
}
}
}
}
# Subroutine to sort hash by ascending value
sub hashValueAscendingNum {
$occurrences{$a} <=> $occurrences{$b};
}
# Print sorted results
foreach my $key (sort hashValueAscendingNum (keys(%occurrences))) {
print "$occurrences{$key} $key\n";
}
Removing mail by email address
The following Perl script will remove all mail in the mail queue addressed to email_address. Messages with multiple envelope recipients will not be deleted.
#!/usr/bin/perl
use strict;
# Exit immediately if email_address was not specified as command-line argument
if (!(defined($ARGV[0]))) {
(my $basename = $0) =~ s!^.*/!!;
print "Usage: $basename email_address\n";
exit 1;
}
# Convert email address supplied as command-line argument to lowercase
my $address_to_remove = lc($ARGV[0]);
my $qtool = "/usr/local/bin/qtool.pl";
my $mqueue_directory = "/var/spool/mqueue";
my $messages_removed = 0;
use File::Find;
# Recursively find all files and directories in $mqueue_directory
find(\&wanted, $mqueue_directory);
sub wanted {
# Is this a qf* file?
if ( /^qf\w{14}/ ) {
my $QF_FILE = $_;
my $envelope_recipients = 0;
my $match = 1;
open (QF_FILE, $_);
while(<QF_FILE>) {
# If any of the envelope recipients contain an email address other than
# $address_to_remove, do not match the message
if ( /^R.*:<(.*)>$/ ) {
my $recipient_address = lc($1);
$envelope_recipients++;
if ($recipient_address ne $address_to_remove) {
$match = 0;
last;
}
}
}
close (QF_FILE);
# $QF_FILE may not contain an envelope recipient at the time it is opened
# and read. Do not match $QF_FILE in that case.
if ($match == 1 && $envelope_recipients != 0) {
print "Removing $QF_FILE...\n";
system "$qtool", "-d", $QF_FILE;
$messages_removed++;
}
}
}
print "$messages_removed total message(s) removed from mail queue.\n";
Older notes
Note: the preferred method of queue removal is to use qtool.pl as illustrated above.
In order to remove mail from the queue, you have to delete the df* and qf* files from your mail queue directory, generally /var/spool/mqueue. The qf* file is the header of the message and the control file, and the df* file is the body of the message.
The following shell script moves undeliverable email in the /var/spool/mqueue mail queue to an alternate /tmp/mqueue directory.
#!/bin/sh
if [ -z "$1" ] ; then
echo "Usage: $0 email_address"
exit 1
fi
for i in `(cd /var/spool/mqueue; grep -l "To:.*$1" qf* | cut -c3-)`
do
mv /var/spool/mqueue/*$i /tmp/mqueue
done
If you have multiple mail queues, such as q1, q2, q3, q4, and q5, you can use the following script:
#!/bin/sh
if [ -z "$1" ] ; then
echo "Usage: $0 email_address"
exit 1
fi
for i in q1 q2 q3 q4 q5
do
for j in `(cd /var/spool/mqueue/$i; grep -l "To:.*$1" qf* | cut -c3-)`
do
mv /var/spool/mqueue/$i/*$j /tmp/mqueue
done
done
For example, running the script while passing the command-line argument badsender@baddomain.com will look for each qf* file in the mail queue containing To:.*badsender@baddomain.com. The regular
expression .* will match zero or more occurrences of any characters, numbers, or whitespace. For example, it would match:
To: badsender@baddomain.com
To: Bad Sender <badsender@baddomain.com>
The script then moves any other files (i.e. the body of the message) in the mail queue with the same Sendmail message ID to the alternate directory. It does this with the cut -c3- command, as the Sendmail message ID is the 3rd through the last character.
The mail is moved to /tmp/mqueue. If you are confident that you do not want the messages, you can delete them from this directory, or you could change the script to remove the files.
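Whichever method you use, it is worth checking the queue before and after a cleanup; sendmail's own queue listing is enough for that (the two commands below are equivalent):
# mailq
# sendmail -bp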
MRTG: Install and Configure in centOS
The Multi Router Traffic Grapher MRTG is a tool to monitor the traffic load on network-links.
MRTG generates HTML pages containing PNG images which provide a LIVE visual representation of this traffic. You need the following packages:
Requirements:
mrtg : Multi Router Traffic Grapher
net-snmp and net-snmp-utils : SNMP (Simple Network Management Protocol) is a protocol used for network management. The NET-SNMP project includes various SNMP tools. The net-snmp package contains the snmpd and snmptrapd daemons, documentation, etc. The net-snmp-utils package contains command-line SNMP client utilities such as snmpget and snmpwalk.
1: Install MRTG
Type the following command to install packages using yum command under CentOS / Fedora Linux:
# yum install mrtg net-snmp net-snmp-utils
2: Configure snmpd
If you need to monitor localhost, including its interfaces and other resources such as CPU and memory, configure snmpd. Open /etc/snmp/snmpd.conf:
# vi /etc/snmp/snmpd.conf
Update it as follows to only allow access from localhost:
com2sec local localhost public
group MyRWGroup v1 local
group MyRWGroup v2c local
group MyRWGroup usm local
view all included .1 80
access MyRWGroup "" any noauth exact all all none
syslocation Your_Location
syscontact Root <your@emailaddress.com>
Save and close the file.
# chkconfig snmpd on
# service snmpd restart
Make sure you see interface IP, by running the following command:
# snmpwalk -v 1 -c public localhost IP-MIB::ipAdEntIfIndex
Sample Outputs:
IP-MIB::ipAdEntIfIndex.123.xx.yy.zzz = INTEGER: 2
IP-MIB::ipAdEntIfIndex.127.0.0.1 = INTEGER: 1
3: Configure MRTG
Use cfgmaker command to creates /etc/mrtg/mrtg.cfg file.
# cfgmaker --global 'WorkDir: /var/www/mrtg' --output /etc/mrtg/mrtg.cfg public@localhost
--global 'WorkDir: /var/www/mrtg' : add global config entries, i.e. set the working directory where MRTG graphs are stored.
--output /etc/mrtg/mrtg.cfg : configure the output filename.
public@localhost : public is the default SNMP community name; using the wrong community name will get no response from the device. localhost is the DNS name or IP address of the SNMP-manageable device.
Finally, run indexmaker to create an index page that links to the MRTG interface status pages:
# indexmaker --output=/var/www/mrtg/index.html /etc/mrtg/mrtg.cfg
4: Verify Cron Job
The cron job /etc/cron.d/mrtg runs the mrtg command every five minutes to monitor the traffic load on network links:
# cat /etc/cron.d/mrtg
Sample Output
*/5 * * * * root LANG=C LC_ALL=C /usr/bin/mrtg /etc/mrtg/mrtg.cfg --lock-file /var/lock/mrtg/mrtg_l --confcache-file /var/lib/mrtg/mrtg.ok
# chkconfig --list crond
If it is off in runlevel 3, run the following to turn on the crond service:
# chkconfig crond on
# service crond start
View mrtg graphs:
You need the Apache web server to view the graphs:
# yum install httpd
# chkconfig httpd on
# service httpd start
Go to a web browser and type
http://your-ip.add.ress/mrtg/
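Before relying on the cron job, mrtg can be run by hand a couple of times; the first two or three runs typically print warnings about missing log files, which is normal and stops once the logs exist. The invocation below simply mirrors the cron entry shown above:
# env LANG=C LC_ALL=C /usr/bin/mrtg /etc/mrtg/mrtg.cfg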
Mount partitions with ntfs file system with read/write access
For CentOS 5, install the packages from the RPMforge repository (enable it on the command line if it is disabled by default):
# yum --enablerepo=rpmforge install fuse fuse-ntfs-3g
For CentOS 6,
# yum install ntfs-3g
if you prefer to leave EPEL disabled by default
# yum --enablerepo epel install ntfs-3g
For Additional Functionality
# yum install ntfsprogs ntfsprogs-gnomevfs
Mounting NTFS Drives
# mkdir /mnt/drv1
# mkdir /mnt/drv2
# mkdir /mnt/drv3
Mounting with Read Only Access, add the line in /etc/fstab
/dev/sda1 /mnt/drv1 ntfs-3g ro,umask=0222,defaults 0 0
Mounting with Read Write Access, add the lines in /etc/fstab
/dev/sda1 /mnt/drv1 ntfs-3g rw,umask=0000,defaults 0 0
/dev/sda2 /mnt/drv2 ntfs-3g rw,umask=0000,defaults 0 0
/dev/sda3 /mnt/drv3 ntfs-3g rw,umask=0000,defaults 0 0
# mount /mnt/drv1
# mount /mnt/drv2
# mount /mnt/drv3
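For a one-off mount without editing /etc/fstab (the device and mount point reuse the hypothetical names above), ntfs-3g can also be called through mount directly:
# mount -t ntfs-3g /dev/sda1 /mnt/drv1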
That's all, enjoy Linux.
System Information Related Commands
Show architecture of machine
# arch
Show the timetable of 2007
# cal 2007
Show CPU information
# cat /proc/cpuinfo
Show interrupts
# cat /proc/interrupts
Verify memory use
# cat /proc/meminfo
Show swap file(s)
# cat /proc/swaps
Show version of the kernel
# cat /proc/version
Show network adapters and statistics
# cat /proc/net/dev
Show mounted file system(s)
# cat /proc/mounts
Save date changes on BIOS
# clock -w
Show system date
# date
Set date and time (format: MMDDhhmmCCYY.ss)
# date 041217002007.00
Show hardware system components - (SMBIOS / DMI)
# dmidecode -q
Displays the characteristics of a hard-disk
# hdparm -i /dev/hda
Perform test reading on a hard-disk
# hdparm -tT /dev/sda
Display PCI devices
# lspci -tv
Show USB devices
# lsusb -tv
Show architecture of machine
# uname -m
Show used kernel version
# uname -r
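The commands above can be combined into a small script that prints a quick system summary; a minimal sketch:
#!/bin/sh
# Quick system summary assembled from the commands listed above
echo "Architecture : $(uname -m)"
echo "Kernel       : $(uname -r)"
grep "model name" /proc/cpuinfo | head -1
grep MemTotal /proc/meminfo
grep SwapTotal /proc/meminfo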
Archiving and Backup related commands
Decompress a file called 'file1.bz2'
# bunzip2 file1.bz2
Compress a file called 'file1'
# bzip2 file1
Decompress a file called 'file1.gz'
# gunzip file1.gz
Compress a file called 'file1'
# gzip file1
Compress with maximum compression
# gzip -9 file1
Create a rar archive called 'file1.rar'
# rar a file1.rar test_file
Compress 'file1', 'file2' and 'dir1' simultaneously
# rar a file1.rar file1 file2 dir1
Decompress rar archive
# rar x file1.rar
Create an uncompressed tarball
# tar -cvf archive.tar file1
Create an archive containing 'file1', 'file2' and 'dir1'
# tar -cvf archive.tar file1 file2 dir1
Show contents of an archive
# tar -tf archive.tar
Extract a tarball
# tar -xvf archive.tar
Extract a tarball into /tmp
# tar -xvf archive.tar -C /tmp
Create a bzip2-compressed tarball
# tar -cvjf archive.tar.bz2 dir1
Extract a bzip2-compressed tarball
# tar -xvjf archive.tar.bz2
Create a gzip-compressed tarball
# tar -cvzf archive.tar.gz dir1
Extract a gzip-compressed tarball
# tar -xvzf archive.tar.gz
Decompress rar archive
# unrar x file1.rar
Decompress a zip archive
# unzip file1.zip
Create an archive compressed in zip
# zip file1.zip file1
Compress in zip several files and directories simultaneously
# zip -r file1.zip file1 file2 dir1
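Putting a few of the commands above together, a typical create-list-extract cycle looks like this (the archive name and paths are only examples):
# tar -cvzf etc-backup.tar.gz /etc
# tar -tzf etc-backup.tar.gz | head
# tar -xvzf etc-backup.tar.gz -C /tmp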
Hard Disk related commands in Linux
Checking Disk capacity, Partition tables, etc.
[root@server ~]# fdisk -l
Get Detailed/current information directly from hard drive
[root@server ~]# hdparm -I /dev/sda
Check available/used/free spaces in each partitions
[root@server ~]# df -h
Check Hard drive speeds
[root@server ~]# hdparm -Tt /dev/sda
To list the partition tables for the specified devices
#fdisk -l
Pass print option to displays the partition table
#parted /dev/sda print
To display all disks and storage controllers in the system
#lshw -class disk -class storage
Find Out Disks Name Only
#lshw -short -C disk
The smartctl command act as a control and monitor Utility for SMART disks under Linux and Unix like operating systems
#smartctl -d ata -a -i /dev/sda
Partition the new disk using fdisk command
#fdisk -l | grep '^Disk'
Format the new disk using mkfs.ext3 command
#mkfs.ext3 /dev/sdb1
Mount the new disk using mount command
#mkdir /disk1
#mount /dev/sdb1 /disk1
#df -H
Label the partition
#e2label /dev/sdb1 /backup
Checking the Hard Disk for errors
# fsck.<filesystem_type>, e.g. # fsck.ext3
Show list of mounted partitions
# df -h
Show the space used by installed deb packages, sorted by size
#dpkg-query -W -f='${Installed-Size;10}\t${Package}\n' | sort -k1,1n
Estimate space used by directory 'dir1'
#du -sh dir1
Show the size of files and directories, sorted by size
#du -sk * | sort -rn
List files sorted by size
#ls -lSr | more
Show space used by installed rpm packages, sorted by size
# rpm -q -a --qf '%10{SIZE}\t%{NAME}\n' | sort -k1,1n
Format a floppy disk
# fdformat -n /dev/fd0
Create a filesystem type linux ext2 on hda1 partition
# mke2fs /dev/hda1
Create a filesystem type linux ext3 on hda1 partition
# mke2fs -j /dev/hda1
Create a filesystem type linux on hda1 partition
# mkfs /dev/hda1
Create a FAT32 filesystem
# mkfs -t vfat -F 32 /dev/hda1
Create a swap filesystem
# mkswap /dev/hda3
Force umount when the device is busy
# fuser -km /mnt/hda2
Mount disk called hda2 - verify existence of the directory '/mnt/hda2'
# mount /dev/hda2 /mnt/hda2
Mount a floppy disk
# mount /dev/fd0 /mnt/floppy
Mount a cdrom / dvdrom
# mount /dev/cdrom /mnt/cdrom
Mount a cdrw / dvdrom
# mount /dev/hdc /mnt/cdrecorder
Mount a cdrw / dvdrom
# mount /dev/hdb /mnt/cdrecorder
Mount a file or iso image
# mount -o loop file.iso /mnt/cdrom
Mount a Windows FAT32 file system
# mount -t vfat /dev/hda5 /mnt/hda5
Mount a usb pen-drive or flash-drive
# mount /dev/sda1 /mnt/usbdisk
Mount a windows network share
# mount -t smbfs -o username=user,password=pass //WinClient/share /mnt/share
Unmount disk called hda2 - exit from mount point '/mnt/hda2' first
# umount /dev/hda2
Run umount without writing the file /etc/mtab - useful when the file is read-only or the hard disk is full
# umount -n /mnt/hda2
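As an end-to-end example combining the commands above (it assumes, as in the earlier steps, that the new disk appears as /dev/sdb and will be mounted at /disk1; the sdb1 partition is created interactively in fdisk first):
# fdisk /dev/sdb
# mkfs.ext3 /dev/sdb1
# mkdir /disk1
# mount /dev/sdb1 /disk1
# echo "/dev/sdb1 /disk1 ext3 defaults 0 2" >> /etc/fstab
# df -H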