Thursday 28 April 2011

HA in KVM


Following is an article I found at http://lwn.net/Articles/392449/:
We are trying to improve the integration of KVM with the most common
HA stacks, but we would like to share with the community what we are
trying to achieve and how before we take a wrong turn.

This is a pretty long write-up, but please bear with me.
---


  Virtualization has boosted flexibility in the data center, allowing
  for efficient usage of computer resources, increased server
  consolidation, load balancing on a per-virtual machine basis -- you
  name it. However, we feel there is an aspect of virtualization that
  has not been fully exploited so far: high availability (HA).

  Traditional HA solutions can be classified in two groups: fault
  tolerant servers, and software clustering.

  Broadly speaking, fault tolerant servers protect us against hardware
  failures and, generally, rely on redundant hardware (often
  proprietary), and hardware failure detection to trigger fail-over.

  On the other hand, software clustering, as its name indicates, takes
  care of software failures and usually requires a standby server
  whose software configuration for the part we are trying to make
  fault tolerant must be identical to that of the active server.

  Existing open source HA stacks such as pacemaker/corosync and Red
  Hat Cluster Suite rely on software clustering techniques to detect
  both hardware failures and software failures, and employ fencing to
  avoid split-brain situations which, in turn, makes it possible to
  perform failover safely. However, when applied to virtualization
  environments these solutions show some limitations:

    - Hardware detection relies on polling mechanisms (for example
      pinging a network interface to check for network connectivity),
      imposing a trade-off between failover time and the cost of
      polling. The alternative is having the failing system send an
      alarm to the HA software to trigger failover. The latter
      approach is preferable but it is not always applicable when
      dealing with bare-metal; depending on the failure type the
      hardware may not be able to get a message out to notify the HA
      software. However, when it comes to virtualization environments
      we can certainly do better. If a hardware failure, be it real
      hardware or virtual hardware, is fully contained within a
      virtual machine the host or hypervisor can detect that and
      notify the HA software safely using clean resources.

    - In most cases, when a hardware failure is detected the state of
      the failing node is not known which means that some kind of
      fencing is needed to lock resources away from that
      node. Depending on the hardware and the cluster configuration
      fencing can be a pretty expensive operation that contributes to
      system downtime. Virtualization can help here. Upon failure
      detection the host or hypervisor could put the virtual machine
      in a quiesced state and release its hardware resources before
      notifying the HA software, so that it can start failover
      immediately without having to meddle with the failing virtual
      machine (we now know that it is in a known quiesced state). Of
      course this only makes sense in the event-driven failover case
      described above.

    - Fencing operations commonly involve killing the virtual machine,
      thus depriving us of potentially critical debugging information:
      a dump of the virtual machine itself. This issue could be solved
      by providing a virtual machine control that puts the virtual
      machine in a known quiesced state, releases its hardware
      resources, but keeps the guest and device model in memory so
      that forensics can be conducted offline after failover. Polling
      HA resource agents should use this new command if postmortem
      analysis is important.

  We are pursuing a scenario where current polling-based HA resource
  agents are complemented with an event-driven failure notification
  mechanism that allows for faster failover times by eliminating the
  delay introduced by polling and by doing without fencing. This would
  benefit traditional software clustering stacks and bring a feature
  that is essential for fault tolerance solutions such as Kemari.

  Additionally, for those who want or need to stick with a polling
  model we would like to provide a virtual machine control that
  freezes a virtual machine into a failover-safe state without killing
  it, so that postmortem analysis is still possible.

  In the following sections we discuss the RAS-HA integration
  challenges and the changes that need to be made to each component of
  the qemu-KVM stack to realize this vision. While at it we will also
  delve into some of the limitations of the current hardware error
  subsystems of the Linux kernel.


HARDWARE ERRORS AND HIGH AVAILABILITY

  The major open source software stacks for Linux rely on polling
  mechanisms to detect both software errors and hardware failures. For
  example, ping or an equivalent is widely used to check for network
  connectivity interruptions. This is enough to get the job done in
  most cases but one is forced to make a trade-off between service
  disruption time and the burden imposed by the polling resource
  agent.

  On the hardware side of things, the situation can be improved if we
  take advantage of CPU and chipset RAS capabilities to trigger
  failover in the event of a non-recoverable error or, even better, do
  it preventively when hardware informs us things might go awry. The
  premise is that RAS features such as hardware failure notification
  can be leveraged to minimize or even eliminate service
  down-times.

  Generally speaking, hardware errors reported to the operating system
  can be classified into two broad categories: corrected errors and
  uncorrected errors. The latter are not necessarily critical errors
  that require a system restart; depending on the hardware and the
  software running on the affected system resource, such errors may be
  recoverable. The picture looks like this (definitions taken from
  "Advanced Configuration and Power Interface Specification, Revision
  4.0a" and slightly modified to get rid of ACPI jargon):

    - Corrected error: Hardware error condition that has been
      corrected by the hardware or by the firmware by the time the
      kernel is notified about the existence of an error condition.

    - Uncorrected error: Hardware error condition that cannot be
      corrected by the hardware or by the firmware. Uncorrected errors
      are either fatal or non-fatal.

        o A fatal hardware error is an uncorrected or uncontained
          error condition that is determined to be unrecoverable by
          the hardware. When a fatal uncorrected error occurs, the
          system is usually restarted to prevent propagation of the
          error.

        o A non-fatal hardware error is an uncorrected error condition
          from which the kernel can attempt recovery by trying to
          correct the error. These are also referred to as correctable
          or recoverable errors.

  Corrected errors are inoffensive in principle, but they may be
  harbingers of fatal non-recoverable errors. It is thus reasonable in
  some cases to do preventive failover or live migration when a
  certain threshold is reached. However, this is arguably the job of
  systems management software, not the HA stack, so this case will not
  be discussed in detail here.

  Uncorrected errors are the ones HA software cares about.

  When a fatal hardware error occurs the firmware may decide to
  restart the hardware. If the fatal error is relayed to the kernel
  instead the safest thing to do is to panic to avoid further
  damage. Even though it is theoretically possible to send a
  notification from the kernel's error or panic handler, this is an
  extremely hardware-dependent operation and will not be considered
  here. To detect this type of failure one's old reliable
  polling-based resource agent is the way to go.

  Non-fatal or recoverable errors are the most interesting in the
  pack.  Detection should ideally be performed in a non-intrusive way
  and feed the policy engine with enough information about the error
  to make the right call. If the policy engine decides that the error
  might compromise service continuity it should notify the HA stack so
  that failover can be started immediately.


REQUIREMENTS

  * Linux kernel

  One of the main goals is to notify HA software about hardware errors
  as soon as they are detected so that service downtime can be
  minimized. For this a hardware error subsystem that follows an
  event-driven model is preferable because it allows us to eliminate
  the cost associated with polling. A file-based API that provides a
  sys_poll interface and process signaling both fit the bill (the
  latter is pretty limited in its semantics and may not be adequate to
  communicate non-memory type errors).

  The hardware error subsystem should provide enough information to be
  able to map error sources (memory, PCI devices, etc) to processes or
  virtual machines, so that errors can be contained. For example, if a
  memory failure occurs but only affects user-space addresses being
  used by a regular process or a KVM guest there is no need to bring
  down the whole machine.

  In some cases, when a failure is detected in a hardware resource in
  use by one or more virtual machines it might be necessary to put
  them in a quiesced state before notifying the associated qemu
  process.

  Unfortunately there is no generic hardware error layer inside the
  kernel, which means that each hardware error subsystem does its own
  thing and there is even some overlap between them. See HARDWARE
  ERRORS IN LINUX below for a brief description of the current mess.

  * qemu-kvm

  Currently KVM is only notified about memory errors detected by the
  MCE subsystem. When running on newer x86 hardware, if MCE detects an
  error in user space it signals the corresponding process with
  SIGBUS. Qemu, upon receiving the signal, checks the problematic
  address which the kernel stored in siginfo and decides whether to
  inject the MCE to the virtual machine.

  An obvious limitation is that we would like to be notified about
  other types of error too and, as suggested before, a file-based
  interface that can be sys_poll'ed might be needed for that.  

  On a different note, in an HA environment the qemu policy described
  above is not adequate; when a notification of a hardware error that
  our policy determines to be serious arrives, the first thing we want
  to do is to put the virtual machine in a quiesced state to avoid
  further wreckage. If we injected the error into the guest we would
  risk a guest panic that might be detectable only by polling or,
  worse, the guest being killed by the kernel, which means that
  postmortem analysis of the guest is not possible. Once we had the
  guests in a quiesced state, where all the buffers have been flushed
  and the hardware resources released, we would have two modes of
  operation that can be used together and complement each other.

    - Proactive: A qmp event describing the error (severity, topology,
      etc) is emitted. The HA software would have to register to
      receive hardware error events, possibly using the libvirt
      bindings. Upon receiving the event the HA software would know
      that the guest is in a failover-safe quiesced state so it could
      do without fencing and proceed to the failover stage directly.

    - Passive: Polling resource agents that need to check the state of
      the guest generally use libvirt or a wrapper such as virsh. When
      the state is SHUTOFF or CRASHED the resource agent proceeds to
      the fencing stage, which might be expensive and usually involves
      killing the qemu process. We propose adding a new state that
      indicates the failover-safe state described before. In this
      state the HA software would not need to use fencing techniques
      and since the qemu process is not killed postmortem analysis of
      the virtual machine is still possible (see the sketch below).
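  As an illustration of the passive mode, the polling check a resource
  agent performs today might look roughly like the following virsh-based
  sketch (the domain name is hypothetical, and the proposed
  failover-safe state is not reflected because it does not exist yet):

    #!/bin/sh
    # poll the guest state with virsh; "shut off"/"crashed" trigger recovery
    STATE=$(virsh domstate guest01 2>/dev/null)       # guest01 is illustrative
    case "$STATE" in
      "shut off"|"crashed")
          echo "guest down: fence and start failover"  # today this usually kills qemu
          ;;
      *)
          echo "guest state: $STATE"                   # running, paused, etc.
          ;;
    esac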


HARDWARE ERRORS IN LINUX

  In modern x86 machines there is a plethora of error sources:

    - Processor machine check exceptions.
    - Chipset error message signals.
    - APEI (ACPI4).
    - NMI.
    - PCIe AER.
    - Non-platform devices (SCSI errors, ATA errors, etc).

  Detection of processor, memory, PCI express, and platform errors in
  the Linux kernel is currently provided by the MCE, the EDAC, and the
  PCIe AER subsystems, which cover the first five items in the list
  above. There is some overlap between them with regard to the errors
  they can detect and the hardware they poke into, but they are
  essentially independent systems with completely different
  architectures. To make things worse, there is no standard mechanism
  to notify about non-platform devices beyond the venerable printk().

  Regarding the user space notification mechanism, things do not get
  any better. Each error notification subsystem does its own thing:

    - MCE: Communicates with user space through the /dev/mcelog
      special device and
      /sys/devices/system/machinecheck/machinecheckN/. mcelog is
      usually the tool that hooks into /dev/mcelog (this device can be
      polled) to collect and decode the machine check errors.
      Alternatively,
      /sys/devices/system/machinecheck/machinecheckN/trigger can be
      used to set a program to be run when a machine check event is
      detected. Additionally, when a machine check error affects only
      user-space processes, they are signaled with SIGBUS (a small
      sketch wiring the trigger file to a notification script follows
      this list).

      The MCE subsystem used to deal only with CPU errors, but it was
      extended to handle memory errors too and there is also initial
      support for ACPI4's APEI. The current MCE APEI implementation
      reaps memory errors notified through SCI, but support for other
      errors (platform, PCIe) and transports covered in the
      specification is in the works.

    - EDAC: Exports memory errors, ECC errors from non-memory devices
      (L1, L2 and L3 caches, DMA engines, etc), and PCI bus parity and
      SERR errors through /sys/devices/system/edac/*.

    - NMI: Uses printk() to write to the system log. When EDAC is
      enabled the NMI handler can also instruct EDAC to check for
      potential ECC errors.

    - PCIe AER subsystem: Notifies PCI-core and AER-capable drivers
      about errors in the PCI bus and uses printk() to write to the
      system log.
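  As a small illustration of the trigger mechanism mentioned in the MCE
  entry above, the following sketch registers a hypothetical
  notification hook (the script path and what it does are illustrative,
  not part of the MCE subsystem itself):

    # assume /usr/local/sbin/mce-notify is a small script that alerts the
    # HA stack (for example via logger); register it as the MCE trigger:
    chmod +x /usr/local/sbin/mce-notify
    echo /usr/local/sbin/mce-notify > /sys/devices/system/machinecheck/machinecheck0/trigger
    cat /sys/devices/system/machinecheck/machinecheck0/trigger    # verify the registered trigger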
---

DHCP server was getting continuous requests from a client {EBtables}

I've had the following issue today:

While I was looking at the logs of the DNS/DHCP server, I noticed it was being flooded with frequent DHCP requests from a single client. We decided to block the requests using ebtables, which is similar to iptables but works at layer 2 and can block all traffic from a MAC address. So I installed and configured ebtables { ebtables only works on bridges, not directly on ethernet interfaces, but a bridge can be created that consists of a single ethernet interface } and blocked the MAC address of the client that was flooding the DHCP server. Big Boss also suggested that it can be done via the switches as well, and that this is generally caused either by Apple Macs or by virtual servers such as VMware ESX hosts which are not bound properly.
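For reference, the blocking itself came down to a handful of commands along these lines (the IP address, interface name and MAC address are illustrative, not the real ones):

brctl addbr br0                                      # create the bridge
brctl stp br0 off                                    # no spanning tree needed on a one-port bridge
brctl addif br0 eth0                                 # add the single ethernet interface to it
ifconfig br0 192.168.1.2 netmask 255.255.255.0 up    # move the server's IP onto the bridge
ifconfig eth0 0.0.0.0

ebtables -A INPUT   -s 00:11:22:33:44:55 -j DROP     # drop all frames from the flooding client's MAC
ebtables -A FORWARD -s 00:11:22:33:44:55 -j DROP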

I've used the following example from:

http://ebtables.sourceforge.net/examples/real.html#all


the example from that site is:


This setup and description was given by Ashok Aiyar. The original website where he posted this setup is no longer available but the website is archived here. The contents were edited to bring the original text, which dates from the Linux 2.4 days, up-to-date.

Why filter AppleTalk?

There are many situations where it is appropriate to filter AppleTalk. Here's one of them. We tunnel/route AppleTalk between five networks using netatalk. There are very similarly named Tektronix Phaser printers in two of these networks, and often print jobs intended for one are unintentionally sent to the other. We would prefer for each of these Phasers to be visible only in the network in which it is located, and not in all five networks. Unlike CAP, netatalk does not support filtering. Therefore, on this page I describe one method to add external filters to netatalk, on the basis of the MAC address associated with an AppleTalk object or node.
There are pros and cons to filtering on the basis of MAC addresses. They have the advantage of being more robust because AppleTalk node numbers can change with every reboot, while the MAC address will not. They have the disadvantage of not being fine-grained; MAC-based filtering will block all the services associated with the filtered AppleTalk node. In general, AppleTalk nodes in our networks are associated with a single service.

Iptables versus Ebtables

The Linux netfilter code supports filtering of IPV4, IPV6 and DECnet packets on the basis of MAC addresses. However such filters do not apply to any other type of ethernet frame. So, an iptables rule such as:

iptables -I INPUT -m mac --mac-source TE:KP:HA:SE:R8:60 -j DROP

results in only IPV4, IPV6 and DECnet packets from that source address being dropped. More to the point, DDP and AARP packets from the same source address are not dropped. Ebtables appeared to be perfectly suited to filter Ethernet frames on the basis of MAC address as well as ethernet protocol type. However, it only supports bridge interfaces, and not regular Ethernet interfaces. Bart De Schuymer, the author of ebtables brought to my attention that a Linux bridge interface can have just a single Ethernet interface. Thanks to Bart's generous advice, a working Ethernet filtering setup is described below.

Setting up Ebtables

To setup a bridge with a single interface, first create the bridge interface (br0). Then add the relevant ethernet interface to the bridge. Finally, assign to the bridge the IP address previously assigned to the ethernet interface. The commands to do this are detailed below:

brctl addbr br0             # create bridge interface
brctl stp br0 off           # disable spanning tree protocol on br0
brctl addif br0 eth0        # add eth0 to br0
ifconfig br0 aaa.bbb.ccc.ddd netmask 255.255.255.0 broadcast aaa.bbb.ccc.255
ifconfig eth0 0.0.0.0
route add -net aaa.bbb.ccc.0 netmask 255.255.255.0 br0
route add default gw aaa.bbb.ccc.1 netmask 0.0.0.0 metric 1 br0

Now network traffic will be routed through the br0 interface rather than the underlying eth0. Atalkd can be started to route AppleTalk between br0 and any other desired interfaces. Note that atalkd.conf has to be modified so that the reference to eth0 is replaced with br0. For example, the atalkd.conf for PC1 shown on my AppleTalk tunneling page is modified to:

br0  -seed -phase 2 -net 2253  -addr 2253.102  -zone "Microbio-Immun"
tap0 -seed -phase 2 -net 60000 -addr 60000.253 -zone "Microbio-Immun"
tap1 -seed -phase 2 -net 60001 -addr 60001.253 -zone "Microbio-Immun"
tap2 -seed -phase 2 -net 60002 -addr 60002.253 -zone "Microbio-Immun"
tap3 -seed -phase 2 -net 60003 -addr 60003.253 -zone "Microbio-Immun"

Verify that AppleTalk routing is working, and then proceed to set up Ethernet filters using ebtables. For this the MAC addresses of the AppleTalk nodes that are not to be routed must be known. One simple method of discovering the MAC address is to send the AppleTalk object a few aecho packets, and then read the MAC address from /proc/net/aarp. A sample ebtables filter is shown below:

ebtables -P INPUT ACCEPT
ebtables -P FORWARD ACCEPT
ebtables -P OUTPUT ACCEPT
ebtables -A INPUT -p LENGTH -s TE:KP:HA:SE:R8:60 -j DROP

Currently, ebtables doesn't support filtering of 802.2 and 802.3 packets such as the DDP and AARP packets used by AppleTalk. However all such packets can be dropped on the basis of the length field – if I understand Bart de Schuymer's explanation correctly. Therefore in the example above, all ethernet 802.2, 802.3 packets from the node with the MAC address TE:KP:HA:SE:R8:60 are dropped. This includes AppleTalk packets, but not IPV4, and ARP packets. This node is left visible in the network in which it is located, but not in any networks to which AppleTalk is routed.

Acknowledgements, Final Comments and Useful Links:

Bart de Schuymer's advice and patient explanations are greatly appreciated. In my experience atalkd bound to the br0 interface is as stable as atalkd bound to the eth0 interface. In addition the MAC address based filters described here work well for their intended purpose. While this works, there is a performance penalty associated with receiving all IP traffic through br0 and not eth0. This is because traffic destined for the bridge is queued twice (once more than normal) – that's a lot of overhead. The ebtables broute table can be used to circumvent this and directly route the traffic entering the bridge port. This way it will be queued only once, eliminating the performance penalty. In the example above:

brctl addbr br0
brctl stp br0 off
brctl addif br0 eth0
ifconfig br0 0.0.0.0
ifconfig eth0 a.b.c.d netmask 255.255.255.0 broadcast a.b.c.255

The following two ebtables BROUTE table rules should be used:

ebtables -t broute -A BROUTING -p IPv4 -i eth0 --ip-dst a.b.c.d -j DROP
ebtables -t broute -A BROUTING -p ARP -i eth0 -d MAC_of_eth0 -j DROP

Atalkd should still be bound to br0, thus allowing AppleTalk to be filtered by ebtables. As best as I can tell this configuration eliminates the performance penalty on IP traffic throughput. Because we tunnel AppleTalk through IP, this configuration removes any throughput penalties in using a bridge interface and ebtables to route AppleTalk.



Wednesday 13 April 2011

Unattended Batch Jobs using SSH Agent


Overview
SSH isn't only a great interactive tool but also a resource for automation. Batch scripts, cron jobs, and other automated tasks can benefit from the security provided by SSH, but only if implemented properly. The major challenge is authentication: how can a client prove its identity when no human is available to type a password? You must carefully select an authentication method, and then equally carefully make it work. Once this infrastructure is established, you must invoke SSH properly to avoid prompting the user.
Note that all kinds of unattended authentication present a security problem and require compromise, and SSH is no exception. Without a human present when needed to provide credentials (type a password, provide a thumbprint, etc.), those credentials must be stored persistently somewhere on the host system. Therefore, an attacker who compromises the system badly enough can use those credentials to impersonate the program and gain whatever access it has. Selecting a technique is a matter of understanding the pros and cons of the available methods, and picking your preferred poison. If you can't live with this fact, you shouldn't expect strong security of unattended remote jobs.
Example
In this example, we show how to use Public Key Authentication together with an SSH Agent to back up files from a remote SSH Server (Rabbit) to a local SSH Client (Opal), fully automated and driven by cron. In this example we use OpenSSH.
The following tasks have to be set up:
  1. Create Cryptographic Keys with SSH-KEYGEN on the SSH Client
  2. Install the generated Public Key on the SSH Server
  3. Activate Public Key Authentication in both SSH Client and SSH Server
  4. Start the SSH Agent and load the Private Keys on the SSH Client
  5. Start the Backup (e.g. ssh -2 rabbit "cat /u01/file.gz" > file.gz)
We will now show the needed steps in more detail.
1.  Create Cryptographic Keys with SSH-KEYGEN on the SSH Client
A key is a digital identity. It's a unique string of binary data that means, "This is me, honestly, I swear." And with a little cryptographic magic, your SSH client can prove to a server that its key is genuine, and you are really you.
An SSH identity uses a pair of keys, one private and one public. The private key is a closely guarded secret only you have. Your SSH clients use it to prove your identity to servers. The public key is, like the name says, public. You place it freely into your accounts on SSH server machines. During authentication, the SSH client and server have a little conversation about your private and public key. If they match (according to a cryptographic test), your identity is proven, and authentication succeeds.
Generating Key Pairs with ssh-keygen
To use cryptographic authentication, you must first generate a key pair for yourself, consisting of a private key (your digital identity that sits on the client machine) and a public key (that sits on the server machine). To do this, use the ssh-keygen program.
Go to the SSH Client and generate the RSA and DSA Keys. In the example we only use the DSA Key Pair (~/.ssh/id_dsa and ~/.ssh/id_dsa.pub) and an empty passphrase.
zahn@opal:~/.ssh> ssh-keygen

Generating public/private rsa1 key pair.
Enter file in which to save the key (~/.ssh/identity):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in ~/.ssh/identity.
Your public key has been saved in ~/.ssh/identity.pub.
zahn@opal:~/.ssh> ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (~/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in ~/.ssh/id_dsa.
Your public key has been saved in ~/.ssh/id_dsa.pub.
zahn@opal:~/.ssh> ls -l
-rw-------    1 zahn     dba           668 Mar 15 19:16 id_dsa
-rw-r--r--    1 zahn     dba           610 Mar 15 19:16 id_dsa.pub
-rw-------    1 zahn     dba           535 Mar 15 19:15 identity
-rw-r--r--    1 zahn     dba           339 Mar 15 19:15 identity.pub
2.  Install the generated Public Key on the SSH Server
After creating the key pair on the SSH Client, you must install your public key in your account on the SSH Server. A remote account may have many public keys installed for accessing it in various ways. Copy the keys from Opal to Rabbit using scp and Password Authentication.
zahn@opal:~/.ssh> scp id_dsa.pub rabbit:/home/zahn/.ssh
zahn@rabbit's password:

zahn@opal:~/.ssh> scp identity.pub rabbit:/home/zahn/.ssh
zahn@rabbit's password:

zahn@opal:~/.ssh> ssh rabbit
zahn@rabbit's password:

zahn@rabbit:~/.ssh> cat identity.pub >> authorized_keys
zahn@rabbit:~/.ssh> cat id_dsa.pub >> authorized_keys2

zahn@rabbit:~/.ssh> rm id_dsa.pub identity.pub
zahn@rabbit:~/.ssh> chmod 644 *
3.  Activate Public Key Authentication in both SSH Client and SSH Server
Public Key Authentication is enabled in the SSH Server Configuration File /etc/ssh/sshd_config on Red Hat Linux. This has to be done as the root user. After editing this file, restart your SSH Server.
PubkeyAuthentication yes
root@opal:  /etc/rc.d/init.d/sshd restart
root@rabbit: /etc/rc.d/init.d/sshd restart
4.  Start the SSH Agent and load the Private Keys on the SSH Client
In public-key authentication, a private key is the client's credentials. Therefore the batch job needs access to the key, which must be stored where the job can access it. Store the key in an agent, which keeps secrets out of the filesystem but requires a human to decrypt the key at system boot time.
The ssh-agent provides another, somewhat less vulnerable method of key storage for batch jobs. A human invokes an agent and loads the needed keys from passphrase-protected key files, just once. Thereafter, unattended jobs use this long-running agent for authentication.
In this case, the keys are still in plaintext but within the memory space of the running agent rather than in a file on disk. As a matter of practical cracking, it is more difficult to extract a data structure from the address space of a running process than to gain illicit access to a file. Also, this solution avoids the problem of an intruder's walking off with a backup tape containing the plaintext key.
Security can still be compromised by overriding filesystem permissions, though. The agent provides access to its services via a Unix-domain socket, which appears as a node in the filesystem. Anyone who can read and write that socket can instruct the agent to sign authentication requests and thus gain use of the keys. But this compromise isn't quite so devastating since the attacker can't get the keys themselves through the agent socket. She merely gains use of the keys for as long as the agent is running and as long as she can maintain her compromise of the host.
Another bit of complication with the agent method is that you must arrange for the batch jobs to find the agent. SSH clients locate an agent via an environment variable pointing to the agent socket, such as SSH_AUTH_SOCK. When you start the agent for batch jobs, you need to record its output where the jobs can find it. For instance, if the job is a shell script you can store the environment values in a file.
Generally, you run a single ssh-agent in your local login session, before running any SSH clients. You can run the agent by hand, but people usually edit their login files to run the agent automatically. SSH Clients communicate with the agent via the process environment, so all clients within your login session have access to the agent.
Start the Agent from your $HOME/.bash_profile on the SSH Client (Opal)
# Start ssh-agent if not already running
# The ssh-agent environment is setup by
# ~/.bashrc after ~/.bash_profile


if [ $LOGNAME = "zahn" ]
then
  pid=`ps | grep ssh-agent | grep -v grep`
  if [ "$pid" = "" ]
  then
    echo "Starting SSH Agent: ssh-agent"
    exec ssh-agent $SHELL
  else
    echo "Setup Env for SSH Agent from ~/.agent_info"
    . ./.agent_info
  fi
fi
The line exec ssh-agent $SHELL runs the agent and then invokes the given shell in $SHELL as a child process. The visual effect is simply that another shell prompt appears, but this shell has access to the agent. If an agent is already running, the environment is stored in a file for all other SSH clients.
Setup SSH Agent Environment and load the Keys in $HOME/.bashrc
Once the agent is running, it's time to load the private keys into it using the ssh-add program. By default, ssh-add loads the keys from your default identity files.
# Setup Environment for ssh-agent
test -n "$SSH_AGENT_PID" && echo \
"SSH_AGENT_PID=$SSH_AGENT_PID; \
export SSH_AGENT_PID" > ~/.agent_info
test -n "$SSH_AUTH_SOCK" && echo \
"SSH_AUTH_SOCK=$SSH_AUTH_SOCK; \
export SSH_AUTH_SOCK" >>  ~/.agent_info

# Load the Private Keys into the running SSH Agent

if [ $LOGNAME = "zahn" ]
then
  pid=`ps | grep ssh-agent | grep -v grep`
  if [ "$pid" != "" ]
  then
    if /usr/bin/tty 1> /dev/null 2>&1
    then
      ssh-add 1> /dev/null 2>&1
    fi
  fi
fi
5.  Start the Backup on the SSH Client
You can now use the running SSH Agent from any Backup Script. You have to read the SSH Agent Environment from your ~/.agent_info file. No password or passphrase is needed.
This is an example of the Backup Script started by cron on the SSH Client Opal (no error handling is shown here).
#!/bin/sh
# Backup Script using unattended SSH

AGENT_INFO=/home/zahn/.agent_info; export AGENT_INFO

# Fetch saved Files from RABBIT
cd ~/backup

# Source Environment for SSH Agent
. $AGENT_INFO

# Copy Data from Rabbit
ssh -2 rabbit "cat /u01/file.gz" > file.gz
If you have any trouble, use the SSH option "-v" to display the debug output:
ssh -v -2 rabbit "cat /u01/file.gz" > file.gz
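To run the backup unattended, schedule the script from cron on the SSH Client; the script path and schedule below are illustrative:

# crontab -e on opal: run the backup nightly at 02:30 and append the output to a log
30 2 * * * /home/zahn/bin/backup.sh >> /home/zahn/backup/backup.log 2>&1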
Other useful commands
Here are some other useful commands
ssh-add -l
List the keys the agent currently holds
ssh-add -D
Delete all keys from the agent
ssh-agent -k
Kill the current agent

References:

1)  http://www.akadia.com/services/ssh_agent.html

Monday 11 April 2011

argument expected error in a shell script


[: argument expected

I was getting the above error while running a shell script like the following:

for datos in `ls -rt $UNXLOG/26-Jan*`
do
    export arch=`echo $datos |cut -d, -f1`
    if [ `grep -c INACTIVO ${arch}` -eq 0 ]
    then
        export linea1=`grep Debut ${arch}`
        export horatot=`echo $linea1 |cut -d' ' -f5`
        export hora=`echo $horatot |cut -c1-2`

        if [ ${hora} -le 19 ]
        then
            echo "Listando log - ${arch} ...\n" >> $UNXLOG/qq.log
            more ${arch} >>$UNXLOG/qq.log
            echo "--------------------------------------------------------------------------------" >>$UNXLOG/qq.log
        fi
    fi
done


The problem is that hora has no value at all. Put a
echo hora = $hora
in front of the if statement to see that. This means that the if statement is just:
if [       -le 19 ]

One solution is to expand hora with a default value:

if [ ${hora:-1} -le 19 ]

Now if hora is unset or set to null, the if statement will see 1.
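A quick way to see the difference (the values are illustrative; the exact error wording varies between shells):

hora=""                              # simulate the empty value
[ ${hora:-1} -le 19 ] && echo "ok"   # expands to [ 1 -le 19 ] and prints "ok"
[ ${hora} -le 19 ]                   # expands to [ -le 19 ] and fails with "argument expected"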


References:

http://www.unix.com/unix-dummies-questions-answers/16696-test-argument-expected-error.html  { accessed on apr 11 2011 }

Friday 8 April 2011

sshd: Connection closed by UNKNOWN

The /var/log/secure file was getting these error messages while I was trying to create a script that monitors user activity. I was planning to use /var/log/secure to trigger a script whenever the file changes, since it logs all user logins and logouts, but the file was changing frequently because of this error message:


Apr  8 15:15:13 host185 sshd[14804]: Connection closed by UNKNOWN


Then I did some research and found out that we can find who is initiating an ssh connection using the following command:

#lsof -i TCP:22 | grep LISTEN
sshd     3581   root    3u  IPv6  11611       TCP *:ssh (LISTEN)

We see that the sshd PID is 3581, and we can then use the following command to get the IP address of the ssh connection initiator:


#strace -f -e getpeername -p 3581

In my case the "Connection closed by UNKNOWN" messages were caused by a process on localhost checking the status of sshd.

I haven't done further research into stopping it, as it is not coming from external IP addresses.
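The two steps above can be combined into a small helper script; assuming lsof and strace are available, something like this should work:

#!/bin/sh
# find the listening sshd and trace getpeername() to see who is connecting
SSHD_PID=`lsof -t -iTCP:22 -sTCP:LISTEN | head -1`
strace -f -e trace=getpeername -p "$SSHD_PID"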

Monday 4 April 2011

IO wait load tracking to a process.


How to identify what processes are generating IO wait load.
-------------------------------------------------------------

An easy way to identify which process is generating your IO wait load is to enable block I/O debugging. This is done by setting /proc/sys/vm/block_dump to a non-zero value like:

echo 1 > /proc/sys/vm/block_dump
This will cause messages like the following to start appearing in dmesg:

bash(6856): dirtied inode 19446664 (ld-2.5.so) on md1
Using the following one-liner will produce a summary output of the dmesg entries:

dmesg | egrep "READ|WRITE|dirtied" | egrep -o '([a-zA-Z]*)' | sort | uniq -c | sort -rn | head
    354 md
    324 export
    288 kjournald
     53 irqbalance
     45 pdflush
     14 portmap
     14 bash
     10 egrep
     10 crond
      8 ncftpput
Once you are finished you should disable block I/O debugging by setting /proc/sys/vm/block_dump to a zero value like:

echo 0 > /proc/sys/vm/block_dump
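Putting the steps together, a throwaway sampling script might look like this (the 30-second window is arbitrary):

#!/bin/sh
# sample block I/O debug messages for 30 seconds, summarise them,
# then switch block I/O debugging off again
echo 1 > /proc/sys/vm/block_dump
sleep 30
dmesg | egrep "READ|WRITE|dirtied" | egrep -o '([a-zA-Z]*)' | sort | uniq -c | sort -rn | head
echo 0 > /proc/sys/vm/block_dump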


References:

1) http://www.scriptbits.net/2009/07/how-to-identify-what-processes-are-generating-io-wait-load/

Creating RPMs


How to create RPM packages:

To create RPM packages we need to set up our system first. We need some development tools and a user account dedicated to building the packages.

#yum groupinstall "Development Tools"
#yum install rpmdevtools { in Fedora }

In CentOS:
#yum install redhat-rpm-config

#useradd rpdev

# su - rpdev

After installing the rpmbuild package, we need to create the files and directories under the home directory of the user that will be used for building RPM packages. To avoid possibly damaging system libraries and other files, you should never build an RPM as the root user; always use a non-privileged user account for this.

To create RPMs we need a directory structure and a .rpmmacros file under the home directory, overriding the default location of the RPM build tree.


$ mkdir -p ~/rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS}


and then

echo '%_topdir %(echo $HOME)/rpmbuild' > ~/.rpmmacros
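If rpmdevtools is installed (as above), the same tree and macro file can be created in one step; this is just an equivalent shortcut:

$ rpmdev-setuptree     { creates ~/rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS} and sets %_topdir in ~/.rpmmacros }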


Building RPMs involves compiling the software from either a source tarball or a source RPM, so we need tools to compile and build source packages, such as make and gcc:

# yum install make

# yum install gcc


To create an RPM package we need to create a ".spec" file that describes the information about the software being packaged. We then run the "rpmbuild" command on the spec file, which will carry out the steps specified in the spec file to create the described packages.


Usually we'll place the source (tar.gz) files into "~/rpmbuild/SOURCES" and the spec file in "~/rpmbuild/SPECS/"; the name of the spec file should be the base name of the package. To create all packages (both binary and source packages), we have to run the following command from the SPECS directory:

$ rpmbuild -ba 'name'.spec
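rpmbuild can also build just one kind of package, which is handy while iterating on the spec file:

$ rpmbuild -bb 'name'.spec     { build the binary packages only }
$ rpmbuild -bs 'name'.spec     { build the source rpm only }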


rpmbuild goes through different stages as it reads, writes and executes the instructions in the spec file, and it uses several directory macros along the way: %_specdir {~/rpmbuild/SPECS}, %_sourcedir {~/rpmbuild/SOURCES}, %_builddir {~/rpmbuild/BUILD, where source files are unpacked and compiled in a subdirectory}, %_buildrootdir {~/rpmbuild/BUILDROOT}, %_rpmdir {~/rpmbuild/RPMS, where binary rpms are created and stored} and %_srcrpmdir {~/rpmbuild/SRPMS, the source rpm directory}.

To package a program, it's probably best to first do a dry run through the build and installation procedure by hand before automating it with rpm.


Creating a blank spec file

$ cd ~/rpmbuild/SPECS
$ vi program.spec

The following is an example of what the template may be like:
Name:
Version:
Release: 1%{?dist}
Summary:
Group:
License:
URL:
Source0:
BuildRoot: %{_tmppath}/%{name}-%{version}-%{release}-root-%(%{__id_u} -n)

BuildRequires:
Requires:

%description

%prep
%setup -q

%build
%configure
make %{?_smp_mflags}

%install
rm -rf %{buildroot}
make install DESTDIR=%{buildroot}

%clean
rm -rf %{buildroot}

%files
%defattr(-,root,root,-)
%doc

%changelog



Using source rpms:
----------------------

We can also build software from source rpms,

$rpm -ivh sourcepackage-name*.src.rpm

This places the package's .spec file into ~/rpmbuild/SPECS and other source and patch files in ~/rpmbuild/SOURCES.
We can also unpack the .src.rpm in a directory using rpm2cpio:

$ mkdir PROGRAMNAME_src_rpm
$ cd PROGRAMNAME_src_rpm
$ rpm2cpio ../PROGRAMNAME-*.src.rpm | cpio -i

Creating RPMs from the spec file
-------------------------------------
$ rpmbuild -ba program.spec

If this works, the binary rpm files are created under ~/rpmbuild/RPMS/ and the source rpm will be in ~/rpmbuild/SRPMS.
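Before installing it, the contents and metadata of a freshly built package can be checked with rpm's query options (the package file name below is illustrative):

$ rpm -qpl ~/rpmbuild/RPMS/i386/program-1.0-1.i386.rpm     { list the files in the package }
$ rpm -qpi ~/rpmbuild/RPMS/i386/program-1.0-1.i386.rpm     { show the package information }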

As an example we'll create wget RPMs:

By creating an RPM we automate the process of untarring the sources and running the ./configure, make and make install commands. We can tell ./configure which directory to install to with --prefix=/full/path. To automate the process we place the code in the SOURCES directory and write a configuration file that dictates where to find the source to be compiled and how to build and install the code. The configuration (spec) file is the input to the utility called rpmbuild. Therefore, copy the source tarball into the ~/rpmbuild/SOURCES directory and then create a spec file for it.
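A minimal wget spec, based on the blank template above, might look roughly like the sketch below; the version, Source0, %files list and changelog entry are illustrative and would need to match the tarball actually placed in ~/rpmbuild/SOURCES (see the unpackaged-files note under Troubleshooting if the build complains):

Name:           wget
Version:        1.12
Release:        1%{?dist}
Summary:        A utility for retrieving files using the HTTP or FTP protocols
Group:          Applications/Internet
License:        GPLv3+
URL:            http://www.gnu.org/software/wget/
Source0:        wget-%{version}.tar.gz
BuildRoot:      %{_tmppath}/%{name}-%{version}-%{release}-root-%(%{__id_u} -n)

%description
GNU Wget is a command-line utility for retrieving files over HTTP and FTP.

%prep
%setup -q

%build
%configure
make %{?_smp_mflags}

%install
rm -rf %{buildroot}
make install DESTDIR=%{buildroot}

%clean
rm -rf %{buildroot}

%files
%defattr(-,root,root,-)
%config(noreplace) %{_sysconfdir}/wgetrc
%{_bindir}/wget
%{_mandir}/man1/wget.1*
%{_infodir}/wget.info*
%{_datadir}/locale/*/LC_MESSAGES/wget.mo

%changelog
* Mon Apr 04 2011 Package Builder <builder@example.com> - 1.12-1
- Initial packaging (illustrative entry)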

Troubleshooting:

"error: File /usr/src/redhat/SOURCES/nano-1.2.0.tar.gz: No such file or directory" when trying to rpmbuild -ba *.spec. Solution: rpmbuild can not see your /etc/rpm/macros or $HOME/.rpmmacros.

Can not write to /var/tmp/nano-1.2-1-root. Solution: change the buildroot in spec file to Buildroot: %{_buildroot}

What "Group:" entry should I use? Solution: To see all valid groups less /usr/share/doc/rpm-*/GROUPS

RPM build errors: Installed (but unpackaged) files found:..... Add the name of the unpackaged files to the %files section of the spec file to resolve the error

Further advanced usage details will follow shortly.


References:

1) http://www.ibm.com/developerworks/library/l-rpm1/
2) http://wiki.centos.org/HowTos/SetupRpmBuildEnvironment
3) http://fedoraproject.org/wiki/How_to_create_an_RPM_package
4) http://www.ibm.com/developerworks/linux/library/l-rpm2/index.html
5) http://www.ibm.com/developerworks/linux/library/l-rpm3/index.html