ovirt-shell – some examples

oVirt is commonly managed via the Engine WebAdmin interface. The reason is simple: it is very easy and intuitive. The official oVirt Administration manual uses the web interface in its examples too.

However, I can’t always rely on a web browser. Sometimes I have only a terminal. Because of that, I started to experiment with the ovirt-shell alternative. For basic tasks I didn’t run into problems: starting a VM, migrating a VM, putting a host into maintenance, updating a VM attribute… Below I will show some examples of the commands I’ve tried.

First, how can I connect?

[root@engine ~]# ovirt-shell -l https://engine.example.com/api -u admin@internal
Password:

You will see something like this:

===================================================
 >>> connected to oVirt manager 3.4.0.0 <<<
===================================================
+++++++++++++++++++++++++++++++++++++++++++++++++++
Welcome to oVirt shell
+++++++++++++++++++++++++++++++++++++++++++++++++++
[oVirt shell (connected)]#

Once connected, you can press <TAB> twice to see the available commands:

[oVirt shell (connected)]# <TAB> <TAB>
EOF     add          clear   console    echo file history list remove show   summary 
action  capabilities connect disconnect exit help info    ping shell  status update

In case of doubt, commands can be prefixed with ‘help’, for instance ‘help show host’. I won’t paste the output of those commands here because it is too long and you can easily check it on your own. Here are some examples I used, with descriptions:

Show details of VM web01.example.com:

show vm web01.example.com

Start VM web01.example.com:

action vm web01.example.com start

Run once VM web01.example.com in stateless mode:

action vm web01.example.com start --vm-stateless

Gracefully shut down VM web01.example.com:

action vm web01.example.com shutdown

Migrate VM web01.example.com to any host:

action vm web01.example.com migrate

Migrate VM web01.example.com to host node01.example.com:

action vm web01.example.com migrate --host-name node01.example.com

Put host node02.example.com in maintenance mode:

action host node02.example.com deactivate

Activate host node03.example.com:

action host node03.example.com activate

List all VMs running on host node01.example.com:

list vms --query 'host=node01.example.com'

Edit VM web01.example.com to boot first via PXE and then from the hard disk:

update vm web01.example.com --os-boot 'boot.dev=network,boot.dev=hd'

You can also run a single command without entering the ovirt-shell prompt:

ovirt-shell -l https://engine.example.com/api -u admin@internal -E 'show vm web01.example.com'

If you want to run a series of commands, write them in a file and use the ‘-f’ option:

ovirt-shell -l https://engine.example.com/api -u admin@internal  -f cmds.txt
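
As a sketch, such a file simply contains one ovirt-shell command per line, reusing the syntax shown above (cmds.txt is just an example name):

show vm web01.example.com
action vm web01.example.com shutdown
action host node02.example.com deactivate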

For more information, check the official documentation page of the CLI, i.e. ovirt-shell.

Hugepages and KSM on Linux

While searching for Linux optimizations some time ago, I came across two cool features: huge pages and KSM.

Huge pages

The Linux Virtual Memory (VM) subsystem provides memory to the system in blocks called pages. By default, each page is 4 KiB in size. In theory, a system with 2 GiB of RAM can allocate 524,288 pages of 4 KiB. Along with the payload, pages carry some control bits. These bits are scanned by kscand so that the VM subsystem can manage the page life cycle. Each page has an entry in the page table. Thus, the more pages you have, the more resources are needed to manage them all.

To reduce this overhead, Linux can allocate pages of up to 4 MiB on x86 systems and 2 MiB on x86_64. You can enable this by setting the following in the .config file of your kernel source:

CONFIG_HUGETLBFS=y
CONFIG_HUGETLB_PAGE=y

Once you have a kernel compiled with these two options enabled, you can allocate a number of huge pages using the sysctl tool:

sysctl -w vm.nr_hugepages=10

This tells Linux to reserve space for 10 huge pages. You can check it out:

[root@localhost ~]# grep ^Huge /proc/meminfo
HugePages_Total: 10
HugePages_Free: 10
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB

On an x86_64 system, that means 2 MiB * 10 = 20 MiB. Those pages are allocated contiguously, so it is recommended to allocate huge pages at boot time. Huge pages also cannot be moved to swap space.

They are commonly used on virtualization hosts and database servers.
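
To make the reservation persistent and perform it early at boot (a minimal sketch; the value 10 is just the example used above), you can either pass the hugepages= parameter on the kernel command line or persist the sysctl setting:

# kernel command line parameter (e.g. in your GRUB configuration)
hugepages=10

# or keep the sysctl setting across reboots
echo "vm.nr_hugepages = 10" >> /etc/sysctl.conf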

KSM

Kernel Samepage Merging (KSM) is a mechanism used mainly by hypervisors to save memory. When it is enabled, a kernel thread scans memory searching for pages with the same content but different owners. When it finds them, Linux merges them and maps both to the same location. The new common page is also marked copy-on-write: as soon as a process needs to modify that page, Linux splits it into two pages again. To use KSM, the kernel has to be compiled with CONFIG_KSM=y. Once the kernel is compiled with KSM, you can start the scanning with:

echo 1 > /sys/kernel/mm/ksm/run

Now the kernel will scan 100 pages every 20 milliseconds by default. You can modify this by writing to the files pages_to_scan and sleep_millisecs in the /sys/kernel/mm/ksm directory. To monitor KSM:

eduardo@symphony:~$ ( cd /sys/kernel/mm/ksm/ ; while true ; do clear ; for i in * ; do echo -n "${i}: " ; cat ${i} ; done ; sleep 1 ; done )
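
For example, to make the scan somewhat more aggressive (the values below are arbitrary, only to illustrate the two files):

echo 200 > /sys/kernel/mm/ksm/pages_to_scan
echo 50 > /sys/kernel/mm/ksm/sleep_millisecs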

For more details, see hugetlbpage.txt and ksm.txt in the Documentation directory of your Linux source tree.

LVS and keepalived – An example

The purpose of this post is just to show an example of a Linux box operating as a load balancer with LVS and keepalived together.

Briefing:

LVS, or Linux Virtual Server, is a feature of the Linux kernel for load balancing services on a Linux box. Check the official site for more.

keepalived is routing software that implements VRRP in order to manage a dynamic gateway and failover. You can read more here.

The components:

  • Two Linux boxes with LVS and keepalived: one master and one slave, in case the master fails.
  • Two Webservers. I will assume you already have the webservers configured and working.
LVS overview

Configuration

LVS is a kernel feature, but in order to manage it, the ipvsadm package is needed. On CentOS, you can install it, along with keepalived, with:

yum install -y ipvsadm keepalived

keepalived is also shipped with most distros.

Once you have installed both ipvsadm and keepalived, let’s configure them. Below is /etc/keepalived/keepalived.conf, with comments:

global_defs {
  router_id LVS_DEVEL
}

router_id is just an identification string.

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        100.10.20.50
    }
}

The block above defines the virtual router instance. On the first machine the state is MASTER; on the second machine, use BACKUP instead. The virtual_router_id identifies the VRRP instance and must be the same on both machines; the exact number does not matter as long as it matches. What must differ is the priority, which matters especially when you have two or more backups: it defines which backup becomes MASTER first.
The authentication section is needed so the keepalived servers can trust each other.
The virtual_ipaddress is the IP that the clients will connect to.

virtual_server 100.10.20.50 80 {
    delay_loop 15
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP

    real_server 100.10.20.55 80 {
        HTTP_GET {
            url {
                path /
                status_code 200
            }
        }
    }
    real_server 100.10.20.56 80 {
        HTTP_GET {
            url {
                path /
                status_code 200
            }
        }
    }
}

delay_loop defines the polling interval for the service checks, in seconds. lb_algo rr is round-robin. lb_kind DR is direct routing, so no tunneling or NAT is needed. persistence_timeout is the LVS persistence time in seconds. The real_server sections define the webservers; both have an HTTP_GET checker. keepalived will do an HTTP GET on the ‘/’ path and expect a 200 status code. If the check fails, keepalived stops forwarding to that real server until it receives a 200 status code again.

As soon as keepalived is started, it will configure the VIP on the interface defined in vrrp_instance.
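
On the master you can then check the virtual server table that keepalived programmed into the kernel, using the ipvsadm tool installed earlier:

ipvsadm -L -n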

Now, on the webservers, a dummy interface must be configured so they can receive packets destined to the VIP 100.10.20.50. First, load the dummy module:

modprobe dummy

Create the dummy interface and configure it:

ip link add dummy0 type dummy
ip link set dummy0 up
ip addr add 100.10.20.50/32 dev dummy0
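
Depending on your kernel defaults, the real servers may still answer ARP requests for the VIP on their main interface, which breaks direct routing. A commonly used precaution (a sketch; adjust to your environment) is to restrict the ARP behaviour:

sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2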

CentOS 7 login with Samba4 LDAP

Centralized authentication can be a good solution for large environments: sysadmins can better manage users’ logins and permissions. Here I list a few steps to implement authentication of a CentOS 7 server against a Samba4 LDAP service.

My example environment

  • Samba4 server = ldap1.example.com
  • CentOS 7 client = localhost.example.com

*I will assume your Samba4 server is already running.

My user

dn: CN=Eduardo de Lima Ramos,OU=people,DC=example,DC=com
uidNumber: 10000
unixHomeDirectory: /home/eduardo.ramos
gidNumber: 10
loginShell: /bin/bash
...

Installation and configuration

[root@localhost ~]# yum install -y nss-pam-ldapd nscd

Now, configure PAM. Make sure that the following lines exist in these files.
/etc/pam.d/system-auth:

auth        sufficient    pam_ldap.so use_first_pass
account     [default=bad success=ok user_unknown=ignore] pam_ldap.so
password    sufficient    pam_ldap.so use_authtok
session     optional      pam_ldap.so

/etc/pam.d/password-auth:

auth        sufficient    pam_ldap.so use_first_pass
account     [default=bad success=ok user_unknown=ignore] pam_ldap.so
password    sufficient    pam_ldap.so use_authtok
session     optional      pam_ldap.so
session     required      pam_mkhomedir.so umask=0027

That will make PAM use LDAP users and create the home directory when it does not exist.
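
If it has not been done already (for example by authconfig), also make sure /etc/nsswitch.conf uses the ldap source provided by nss-pam-ldapd, roughly like this:

passwd: files ldap
shadow: files ldap
group:  files ldap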

Now we need to configure the nslcd daemon. Use the following model.
/etc/nslcd.conf:

uid nslcd
gid ldap

uri ldap://ldap1.example.com
ldap_version 3
base dc=example,dc=com

binddn LOCAL\Administrator
bindpw super$ecret

pagesize 1000
referrals off
idle_timelimit 800
filter passwd (&(objectClass=user)(!(objectClass=computer))(uidNumber=*)(unixHomeDirectory=*))
map    passwd uid              sAMAccountName
map    passwd homeDirectory    unixHomeDirectory
map    passwd gecos            displayName
filter shadow (&(objectClass=user)(!(objectClass=computer))(uidNumber=*)(unixHomeDirectory=*))
map    shadow uid              sAMAccountName
map    shadow shadowLastChange pwdLastSet
filter group  (objectClass=group)

ssl no
tls_cacertdir /etc/openldap/cacerts

Now, just enable and start the nslcd and nscd daemons:

[root@localhost ~]# systemctl enable nslcd
[root@localhost ~]# systemctl enable nscd
[root@localhost ~]# systemctl start nslcd
[root@localhost ~]# systemctl start nscd
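
Before an interactive login, you can check whether NSS already resolves the LDAP users (assuming the example user above maps to the login eduardo.ramos):

getent passwd eduardo.ramos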

Try to log in.

oVirt host – iptables

When you add a new host to your oVirt Engine, your iptables rules are overwritten by the oVirt host deploy. The new rules might not meet your needs, but you can change this.

oVirt 3.4

Using the engine-config command on the Engine host, get the default rules:

sudo engine-config -g IPTablesConfig
IPTablesConfig:
# oVirt default firewall configuration. Automatically generated by vdsm bootstrap script.
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -i lo -j ACCEPT
# vdsm
-A INPUT -p tcp --dport @VDSM_PORT@ -j ACCEPT
# SSH
-A INPUT -p tcp --dport @SSH_PORT@ -j ACCEPT
# snmp
-A INPUT -p udp --dport 161 -j ACCEPT

@CUSTOM_RULES@

# Reject any other input traffic
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -m physdev ! --physdev-is-bridged -j REJECT --reject-with icmp-host-prohibited
COMMIT

To set new rules, copy the lines returned above and add your rules just after @CUSTOM_RULES@, for example:

sudo engine-config -s IPTablesConfig="
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -i lo -j ACCEPT
# vdsm
-A INPUT -p tcp --dport @VDSM_PORT@ -j ACCEPT
# SSH
-A INPUT -p tcp --dport @SSH_PORT@ -j ACCEPT
# snmp
-A INPUT -p udp --dport 161 -j ACCEPT

@CUSTOM_RULES@
-A INPUT -m comment --comment 'new rule ' -j LOG --log-prefix='new rule '

# Reject any other input traffic
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -m physdev ! --physdev-is-bridged -j REJECT --reject-with icmp-host-prohibited
COMMIT"

oVirt 3.5

The new version has a proper variable for this. Follow the example:

sudo engine-config --set IPTablesConfigSiteCustom="
-A INPUT -m comment --comment 'new rule ' -j LOG --log-prefix='new rule '
"

That new rule will be set in place of @CUSTOM_RULES@.
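
You can confirm what has been stored with the corresponding get option:

sudo engine-config -g IPTablesConfigSiteCustom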

Source based routing

Most network routing is based on the destination, but sometimes you may need to forward packets to different gateways depending on the source.

In Linux you can do this using the iproute2 package. It uses the netlink socket interface to handle addressing, routing, queueing, and scheduling in the Linux network subsystem. An example follows:

Define a label for a table to be used:

echo "10 foo" >> /etc/iproute2/rt_tables

Insert a route into the foo table:

ip route add default via 10.10.10.1 dev eth1 table foo

Insert a rule with a low priority number so that traffic coming from the host 192.168.16.7 consults the new table foo:

ip rule add prio 10 from 192.168.16.7 lookup foo

You can check the rules with:

ip rule show
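
And the routes of the new table with:

ip route show table foo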

See the man pages for more information.

Internal/Isolated networks on oVirt

For those who are accustomed to virt-manager administration and operation, creating an isolated network among the VMs seems to be a very easy task. oVirt, however, does not have such a direct configuration; in fact, we need some commands on the terminal. I must tell you that this post is valid only when you have a single hypervisor host. With 2 or more, external connectivity is inevitable.

In order to create an internal network you can use the dummy module. First of all, make sure your server loads the dummy module at startup.
Create /etc/sysconfig/modules/dummy.modules:

#!/bin/sh
modprobe dummy > /dev/null 2>&1
exit 0
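
On RHEL/CentOS, the scripts under /etc/sysconfig/modules/ are executed at boot, so the file needs to be executable:

chmod +x /etc/sysconfig/modules/dummy.modules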

You can also run modprobe manually to load it at runtime; a dummy0 network interface will appear. Once this is done, create /etc/sysconfig/network-scripts/ifcfg-dummy0 with this content:

DEVICE=dummy0
BOOTPROTO=none
ONBOOT=yes
NM_CONTROLLED=no
PROMISC=yes

Now comes the oVirt configuration. In the webadmin portal, go to the ‘Network’ tab and click ‘New’:

New network

The definition can be simple: just give it a name and check ‘VM network‘:

New network

With the virtual switch created, we need to link our dummy interface to it. Go to the host’s network configuration:

Configure network on host

Drag the internal network and drop it on the dummy0 interface:

Configure network on host

Check ‘Save network configuration’ and click OK.

Configure network on host

Now, for each virtual machine that should use the internal network, you can create a virtual NIC and attach it to the internal virtual switch.

Configure network on guest

This was tested on an oVirt 3.4 setup in all-in-one mode.

 

Shell tricks

There are many keys and commands that can make our lives easier. Here are some:

Read a text file inside a tar.xz file:

cat samba-4.0.9.tar.xz | tar -JxOf - samba-4.0.9/source4/scripting/bin/samba_backup | less

Command correction:

nkdir -v /tmp/foo
bash: nkdir: command not found
^nkdir^mkdir
mkdir -v /tmp/foo

See the difference between a file on two remote machines:

 diff <(ssh server1 'cat file') <(ssh server2 'cat file') 

Or between their installed packages:

 diff <(ssh server1 'rpm -qa | sort') <(ssh server2 'rpm -qa | sort') 

You have an alias with the same name as a command, but you want to run the command, not the alias:

 alias vi=vim
\vi 

You can see these and many others here.

Partition shrink

Several times we need to resize our storage. Normally we expand volumes, but rarely shrink them. Although it’s not common, shrinking is possible too. Surfing the web, I found a great article about it.

My tests worked gracefully! I extended the article above by also resizing the virtual disk image file with qemu-img.

qemu-img convert -f qcow2 -O raw resize.img resize_raw.img
qemu-img resize resize_raw.img 5360321024
qemu-img convert -f raw -O qcow2 resize_raw.img resize.img

5360321024 is exactly the sum, in bytes, of the sizes of all partitions.
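
You can confirm the new virtual size of the image with:

qemu-img info resize.img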