How to install and set up LXC (Linux Containers) on Fedora Linux 26

How do I install, create, and manage LXC (Linux Containers, an operating-system-level virtualization technology) on a Fedora Linux version 26 server?

 

LXC is an acronym for Linux Containers. It is an operating-system-level virtualization technology for running multiple isolated Linux systems (system containers) on a single Linux host. This tutorial shows you how to install and manage LXC containers on a Fedora Linux server.

Our sample setup

 

LXC is often described as a lightweight virtualization technology. You can think of LXC as a chroot jail on steroids. There is no guest operating system involved: containers share the host's kernel, so you can only run Linux distros with LXC. You cannot run MS-Windows, *BSD, or any other operating system with LXC, but you can run CentOS, Fedora, Ubuntu, Debian, Gentoo, or any other Linux distro. Traditional virtualization such as KVM/Xen/VMware and paravirtualization needs a full operating system image for each instance, which is why it can run any operating system.

Installation

Type the following dnf command to install lxc and related packages on Fedora 26:

$ sudo dnf install lxc lxc-templates lxc-extra debootstrap libvirt perl gpg

 

Sample outputs:

Fig.01: LXC Installation on Fedora 26

Start and enable needed services

First, start the virtualization daemon named libvirtd and the lxc service using the systemctl command:

$ sudo systemctl start libvirtd.service

$ sudo systemctl start lxc.service

$ sudo systemctl enable lxc.service

 

Sample outputs:

Created symlink /etc/systemd/system/multi-user.target.wants/lxc.service → /usr/lib/systemd/system/lxc.service.
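If you prefer a single scripted step, the three commands above can be wrapped in a small loop. This is only a sketch; note that it also enables libvirtd at boot, which the commands above do not do, and the runner argument (pass echo for a dry run, an empty string to execute) is my own convention, not part of systemd:

```shell
# Sketch: start and enable both services in one loop.
# Pass "echo" as the runner for a dry run, or "" to really execute.
enable_lxc_services() {
    run="$1"
    for svc in libvirtd.service lxc.service; do
        $run sudo systemctl start "$svc"
        $run sudo systemctl enable "$svc"
    done
}

# Dry run: print the commands that would be executed.
enable_lxc_services echo
```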

Verify that services are running:

$ sudo systemctl status libvirtd.service

 

Sample outputs:

● libvirtd.service – Virtualization daemon

Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled)

Active: active (running) since Thu 2017-07-13 07:25:30 UTC; 40s ago

Docs: man:libvirtd(8)

http://libvirt.org

Main PID: 3688 (libvirtd)

CGroup: /system.slice/libvirtd.service

├─3688 /usr/sbin/libvirtd

├─3760 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper

└─3761 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper

 

Jul 13 07:25:31 nixcraft-f26 dnsmasq[3760]: compile time options: IPv6 GNU-getopt DBus no-i18n IDN DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth DNSSEC loop-detect inotify

Jul 13 07:25:31 nixcraft-f26 dnsmasq-dhcp[3760]: DHCP, IP range 192.168.122.2 -- 192.168.122.254, lease time 1h

Jul 13 07:25:31 nixcraft-f26 dnsmasq-dhcp[3760]: DHCP, sockets bound exclusively to interface virbr0

Jul 13 07:25:31 nixcraft-f26 dnsmasq[3760]: reading /etc/resolv.conf

Jul 13 07:25:31 nixcraft-f26 dnsmasq[3760]: using nameserver 139.162.11.5#53

Jul 13 07:25:31 nixcraft-f26 dnsmasq[3760]: using nameserver 139.162.13.5#53

Jul 13 07:25:31 nixcraft-f26 dnsmasq[3760]: using nameserver 139.162.14.5#53

Jul 13 07:25:31 nixcraft-f26 dnsmasq[3760]: read /etc/hosts - 3 addresses

Jul 13 07:25:31 nixcraft-f26 dnsmasq[3760]: read /var/lib/libvirt/dnsmasq/default.addnhosts - 0 addresses

Jul 13 07:25:31 nixcraft-f26 dnsmasq-dhcp[3760]: read /var/lib/libvirt/dnsmasq/default.hostsfile

And:

$ sudo systemctl status lxc.service

 

Sample outputs:

● lxc.service – LXC Container Initialization and Autoboot Code

Loaded: loaded (/usr/lib/systemd/system/lxc.service; enabled; vendor preset: disabled)

Active: active (exited) since Thu 2017-07-13 07:25:34 UTC; 1min 3s ago

Docs: man:lxc-autostart

man:lxc

Main PID: 3830 (code=exited, status=0/SUCCESS)

CPU: 9ms

 

Jul 13 07:25:34 nixcraft-f26 systemd[1]: Starting LXC Container Initialization and Autoboot Code…

Jul 13 07:25:34 nixcraft-f26 systemd[1]: Started LXC Container Initialization and Autoboot Code.

LXC networking

To view the configured network bridge for LXC, run:

$ sudo brctl show

 

Sample outputs:

bridge name bridge id  STP enabled interfaces

virbr0  8000.525400293323 yes  virbr0-nic

You must set the default bridge to virbr0 in the file /etc/lxc/default.conf:

$ sudo vi /etc/lxc/default.conf

 

Sample config (replace lxcbr0 with virbr0 for lxc.network.link):

lxc.network.type = veth

lxc.network.link = virbr0

lxc.network.flags = up

lxc.network.hwaddr = 00:16:3e:xx:xx:xx

Save and close the file. To see the DHCP range used by containers, enter:
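The edit can also be scripted with sed. The sketch below runs the substitution against a temporary copy so it can be tried safely; point it at /etc/lxc/default.conf (with sudo) to apply it for real:

```shell
# Sketch: rewrite lxc.network.link to virbr0 in a scratch copy of the
# config; on a real host, run the same sed against /etc/lxc/default.conf.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
lxc.network.type = veth
lxc.network.link = lxcbr0
lxc.network.flags = up
lxc.network.hwaddr = 00:16:3e:xx:xx:xx
EOF

sed -i 's/^lxc\.network\.link = .*/lxc.network.link = virbr0/' "$CONF"
grep '^lxc\.network\.link' "$CONF"
```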

$ sudo systemctl status libvirtd.service | grep range

 

Sample outputs:

Jul 13 07:25:31 nixcraft-f26 dnsmasq-dhcp[3760]: DHCP, IP range 192.168.122.2 -- 192.168.122.254, lease time 1h

To check the current kernel for lxc support, enter:

$ lxc-checkconfig

 

Sample outputs:

Kernel configuration not found at /proc/config.gz; searching…

Kernel configuration found at /boot/config-4.11.9-300.fc26.x86_64

— Namespaces —

Namespaces: enabled

Utsname namespace: enabled

Ipc namespace: enabled

Pid namespace: enabled

User namespace: enabled

Network namespace: enabled

 

— Control groups —

Cgroup: enabled

Cgroup clone_children flag: enabled

Cgroup device: enabled

Cgroup sched: enabled

Cgroup cpu account: enabled

Cgroup memory controller: enabled

Cgroup cpuset: enabled

 

— Misc —

Veth pair device: enabled

Macvlan: enabled

Vlan: enabled

Bridges: enabled

Advanced netfilter: enabled

CONFIG_NF_NAT_IPV4: enabled

CONFIG_NF_NAT_IPV6: enabled

CONFIG_IP_NF_TARGET_MASQUERADE: enabled

CONFIG_IP6_NF_TARGET_MASQUERADE: enabled

CONFIG_NETFILTER_XT_TARGET_CHECKSUM: enabled

FUSE (for use with lxcfs): enabled

 

— Checkpoint/Restore —

checkpoint restore: enabled

CONFIG_FHANDLE: enabled

CONFIG_EVENTFD: enabled

CONFIG_EPOLL: enabled

CONFIG_UNIX_DIAG: enabled

CONFIG_INET_DIAG: enabled

CONFIG_PACKET_DIAG: enabled

CONFIG_NETLINK_DIAG: enabled

File capabilities: enabled

 

Note : Before booting a new kernel, you can check its configuration

usage : CONFIG=/path/to/config /usr/bin/lxc-checkconfig

How can I create an Ubuntu Linux container?

Type the following command to create an Ubuntu 16.04 LTS container:

$ sudo lxc-create -t download -n ubuntu-c1 -- -d ubuntu -r xenial -a amd64

 

Sample outputs:

Setting up the GPG keyring

Downloading the image index

Downloading the rootfs

Downloading the metadata

The image cache is now ready

Unpacking the rootfs

 


You just created an Ubuntu container (release=xenial, arch=amd64, variant=default)

 

To enable sshd, run: apt-get install openssh-server

 

For security reason, container images ship without user accounts

and without a root password.

 

Use lxc-attach or chroot directly into the rootfs to set a root password

or create user accounts.

To set up the ubuntu user's password, run:

$ sudo chroot /var/lib/lxc/ubuntu-c1/rootfs/ passwd ubuntu

Enter new UNIX password:

Retype new UNIX password:

passwd: password updated successfully

To set the root account password as well (optional), run:

$ sudo chroot /var/lib/lxc/ubuntu-c1/rootfs/ passwd

 

To start the container, run:

$ sudo lxc-start -n ubuntu-c1

 

To log in to the container named ubuntu-c1, use the ubuntu user and the password set earlier:

$ lxc-console -n ubuntu-c1

 

Sample outputs:

Fig.02: Launch a console for the specified container

 

You can now install packages and configure your server. For example, to enable sshd, run the apt-get command or apt command:

ubuntu@ubuntu-c1:~$ sudo apt-get install openssh-server

 

To exit from lxc-console, type Ctrl+a q; this ends the console session and returns you to the host.
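The same lxc-create pattern repeats for every distro in the sections below, varying only the -d/-r/-a values (which come from the download template's image list shown later in this article). A small hypothetical helper makes that explicit:

```shell
# Hypothetical helper: build the lxc-create invocation for any image
# served by the "download" template (prefix the result with sudo to run).
lxc_create_cmd() {
    name="$1" dist="$2" release="$3" arch="$4"
    printf 'lxc-create -t download -n %s -- -d %s -r %s -a %s\n' \
        "$name" "$dist" "$release" "$arch"
}

lxc_create_cmd ubuntu-c1 ubuntu xenial amd64
lxc_create_cmd debian-c1 debian stretch amd64
```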

How do I create a Debian Linux container?

Type the following command to create a Debian 9 (“stretch”) container:

$ sudo lxc-create -t download -n debian-c1 -- -d debian -r stretch -a amd64

 

Sample outputs:

Setting up the GPG keyring

Downloading the image index

Downloading the rootfs

Downloading the metadata

The image cache is now ready

Unpacking the rootfs

 


You just created a Debian container (release=stretch, arch=amd64, variant=default)

 

To enable sshd, run: apt-get install openssh-server

 

For security reason, container images ship without user accounts

and without a root password.

 

Use lxc-attach or chroot directly into the rootfs to set a root password

or create user accounts.

To set the root account password, run:

$ sudo chroot /var/lib/lxc/debian-c1/rootfs/ passwd

 

Start the container and log in to it for management purposes:

$ sudo lxc-start -n debian-c1

$ lxc-console -n debian-c1

How do I create a CentOS Linux container?

Type the following command to create a CentOS 7 container:

$ sudo lxc-create -t download -n centos-c1 -- -d centos -r 7 -a amd64

 

Sample outputs:

Setting up the GPG keyring

Downloading the image index

Downloading the rootfs

Downloading the metadata

The image cache is now ready

Unpacking the rootfs

 


You just created a CentOS container (release=7, arch=amd64, variant=default)

 

To enable sshd, run: yum install openssh-server

 

For security reason, container images ship without user accounts

and without a root password.

 

Use lxc-attach or chroot directly into the rootfs to set a root password

or create user accounts.

Set the root account password and start the container:

$ sudo chroot /var/lib/lxc/centos-c1/rootfs/ passwd

$ sudo lxc-start -n centos-c1

$ lxc-console -n centos-c1

How do I create a Fedora Linux container?

Type the following command to create a Fedora 25 container:

$ sudo lxc-create -t download -n fedora-c1 -- -d fedora -r 25 -a amd64

 

Sample outputs:

Setting up the GPG keyring

Downloading the image index

Downloading the rootfs

Downloading the metadata

The image cache is now ready

Unpacking the rootfs

 


You just created a Fedora container (release=25, arch=amd64, variant=default)

 

To enable sshd, run: dnf install openssh-server

 

For security reason, container images ship without user accounts

and without a root password.

 

Use lxc-attach or chroot directly into the rootfs to set a root password

or create user accounts.

Set the root account password and start the container:

$ sudo chroot /var/lib/lxc/fedora-c1/rootfs/ passwd

$ sudo lxc-start -n fedora-c1

$ lxc-console -n fedora-c1

How do I create a CentOS 6 Linux container and store it in btrfs?

You need to create or format a hard disk as btrfs and mount it:

# mkfs.btrfs /dev/sdb

# mount /dev/sdb /mnt/btrfs/

 

If you do not have /dev/sdb, create an image file using the dd or fallocate command as follows:

# fallocate -l 10G /nixcraft-btrfs.img

# losetup /dev/loop0 /nixcraft-btrfs.img

# mkfs.btrfs /dev/loop0

# mount /dev/loop0 /mnt/btrfs/

# btrfs filesystem show

 

Sample outputs:

Label: none  uuid: 4deee098-94ca-472a-a0b5-0cd36a205c35

Total devices 1 FS bytes used 361.53MiB

devid    1 size 10.00GiB used 3.02GiB path /dev/loop0

Now create a CentOS 6 container backed by btrfs:

# lxc-create -B btrfs -P /mnt/btrfs/ -t download -n centos6-c1 -- -d centos -r 6 -a amd64

# chroot /mnt/btrfs/centos6-c1/rootfs/ passwd

# lxc-start -P /mnt/btrfs/ -n centos6-c1

# lxc-console -P /mnt/btrfs -n centos6-c1

# lxc-ls -P /mnt/btrfs/ -f

 

Sample outputs:

NAME       STATE   AUTOSTART GROUPS IPV4            IPV6

centos6-c1 RUNNING 0         -      192.168.122.145 -
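The image-file route can be sketched end to end as below. Only the fallocate step is safe to run anywhere; mkfs.btrfs and losetup need root plus the btrfs-progs package, so they are guarded. The 10M size and /tmp path are demo choices, not requirements:

```shell
# Sketch of the loop-backed btrfs steps above, sized down for a demo.
IMG=/tmp/nixcraft-btrfs.img
fallocate -l 10M "$IMG"
ls -l "$IMG"

# The destructive/root-only steps are guarded so the sketch is safe to run.
if [ "$(id -u)" -eq 0 ] && command -v mkfs.btrfs >/dev/null 2>&1; then
    LOOP=$(losetup --find --show "$IMG")   # grab a free loop device
    mkfs.btrfs "$LOOP"
    mkdir -p /mnt/btrfs && mount "$LOOP" /mnt/btrfs/
fi
```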

How do I see a list of all available images?

Type the following command:

$ lxc-create -t download -n NULL -- --list

 

Sample outputs:

Setting up the GPG keyring

Downloading the image index

 


DIST RELEASE ARCH VARIANT BUILD


alpine 3.1 amd64 default 20170319_17:50

alpine 3.1 armhf default 20161230_08:09

alpine 3.1 i386 default 20170319_17:50

alpine 3.2 amd64 default 20170504_18:43

alpine 3.2 armhf default 20161230_08:09

alpine 3.2 i386 default 20170504_17:50

alpine 3.3 amd64 default 20170712_17:50

alpine 3.3 armhf default 20170103_17:50

alpine 3.3 i386 default 20170712_17:50

alpine 3.4 amd64 default 20170712_17:50

alpine 3.4 armhf default 20170111_20:27

alpine 3.4 i386 default 20170712_17:50

alpine 3.5 amd64 default 20170712_17:50

alpine 3.5 i386 default 20170712_17:50

alpine 3.6 amd64 default 20170712_17:50

alpine 3.6 i386 default 20170712_17:50

alpine edge amd64 default 20170712_17:50

alpine edge armhf default 20170111_20:27

alpine edge i386 default 20170712_17:50

archlinux current amd64 default 20170529_01:27

archlinux current i386 default 20170529_01:27

centos 6 amd64 default 20170713_02:16

centos 6 i386 default 20170713_02:16

centos 7 amd64 default 20170713_02:16

debian jessie amd64 default 20170712_22:42

debian jessie arm64 default 20170712_22:42

debian jessie armel default 20170711_22:42

debian jessie armhf default 20170712_22:42

debian jessie i386 default 20170712_22:42

debian jessie powerpc default 20170712_22:42

debian jessie ppc64el default 20170712_22:42

debian jessie s390x default 20170712_22:42

debian sid amd64 default 20170712_22:42

debian sid arm64 default 20170712_22:42

debian sid armel default 20170712_22:42

debian sid armhf default 20170711_22:42

debian sid i386 default 20170712_22:42

debian sid powerpc default 20170712_22:42

debian sid ppc64el default 20170712_22:42

debian sid s390x default 20170712_22:42

debian stretch amd64 default 20170712_22:42

debian stretch arm64 default 20170712_22:42

debian stretch armel default 20170711_22:42

debian stretch armhf default 20170712_22:42

debian stretch i386 default 20170712_22:42

debian stretch powerpc default 20161104_22:42

debian stretch ppc64el default 20170712_22:42

debian stretch s390x default 20170712_22:42

debian wheezy amd64 default 20170712_22:42

debian wheezy armel default 20170712_22:42

debian wheezy armhf default 20170712_22:42

debian wheezy i386 default 20170712_22:42

debian wheezy powerpc default 20170712_22:42

debian wheezy s390x default 20170712_22:42

fedora 22 amd64 default 20170216_01:27

fedora 22 i386 default 20170216_02:15

fedora 23 amd64 default 20170215_03:33

fedora 23 i386 default 20170215_01:27

fedora 24 amd64 default 20170713_01:27

fedora 24 i386 default 20170713_01:27

fedora 25 amd64 default 20170713_01:27

fedora 25 i386 default 20170713_01:27

gentoo current amd64 default 20170712_14:12

gentoo current i386 default 20170712_14:12

opensuse 13.2 amd64 default 20170320_00:53

opensuse 42.2 amd64 default 20170713_00:53

oracle 6 amd64 default 20170712_11:40

oracle 6 i386 default 20170712_11:40

oracle 7 amd64 default 20170712_11:40

plamo 5.x amd64 default 20170712_21:36

plamo 5.x i386 default 20170712_21:36

plamo 6.x amd64 default 20170712_21:36

plamo 6.x i386 default 20170712_21:36

ubuntu artful amd64 default 20170713_03:49

ubuntu artful arm64 default 20170713_03:49

ubuntu artful armhf default 20170713_03:49

ubuntu artful i386 default 20170713_03:49

ubuntu artful ppc64el default 20170713_03:49

ubuntu artful s390x default 20170713_03:49

ubuntu precise amd64 default 20170713_03:49

ubuntu precise armel default 20170713_03:49

ubuntu precise armhf default 20170713_03:49

ubuntu precise i386 default 20170713_03:49

ubuntu precise powerpc default 20170713_03:49

ubuntu trusty amd64 default 20170713_03:49

ubuntu trusty arm64 default 20170713_03:49

ubuntu trusty armhf default 20170713_03:49

ubuntu trusty i386 default 20170713_03:49

ubuntu trusty powerpc default 20170713_03:49

ubuntu trusty ppc64el default 20170713_03:49

ubuntu xenial amd64 default 20170713_03:49

ubuntu xenial arm64 default 20170713_03:49

ubuntu xenial armhf default 20170713_03:49

ubuntu xenial i386 default 20170713_03:49

ubuntu xenial powerpc default 20170713_03:49

ubuntu xenial ppc64el default 20170713_03:49

ubuntu xenial s390x default 20170713_03:49

ubuntu yakkety amd64 default 20170713_03:49

ubuntu yakkety arm64 default 20170713_03:49

ubuntu yakkety armhf default 20170713_03:49

ubuntu yakkety i386 default 20170713_03:49

ubuntu yakkety powerpc default 20170713_03:49

ubuntu yakkety ppc64el default 20170713_03:49

ubuntu yakkety s390x default 20170713_03:49

ubuntu zesty amd64 default 20170713_03:49

ubuntu zesty arm64 default 20170713_03:49

ubuntu zesty armhf default 20170713_03:49

ubuntu zesty i386 default 20170713_03:49

ubuntu zesty powerpc default 20170317_03:49

ubuntu zesty ppc64el default 20170713_03:49

ubuntu zesty s390x default 20170713_03:49
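The list is long, so filtering it helps. The awk helper below is a sketch; the sample rows are copied from the table above and stand in for a live `lxc-create -t download -n NULL -- --list` pipe:

```shell
# Sketch: keep only rows for a given distro and architecture
# (columns: DIST RELEASE ARCH VARIANT BUILD).
filter_images() {
    awk -v d="$1" -v a="$2" '$1 == d && $3 == a'
}

printf '%s\n' \
  'centos 6 amd64 default 20170713_02:16' \
  'centos 7 amd64 default 20170713_02:16' \
  'debian stretch amd64 default 20170712_22:42' |
  filter_images centos amd64
```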


How do I list the containers existing on the system?

Type the following command:

$ lxc-ls -f

 

Sample outputs:

NAME      STATE   AUTOSTART GROUPS IPV4            IPV6

centos-c1 RUNNING 0         -      192.168.122.174 -

debian-c1 RUNNING 0         -      192.168.122.241 -

fedora-c1 RUNNING 0         -      192.168.122.176 -

ubuntu-c1 RUNNING 0         -      192.168.122.56  -
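If you only want the names of running containers (say, for a loop), the output above is easy to post-process. This is a sketch; a pasted sample stands in for the live `lxc-ls -f` output:

```shell
# Sketch: print only the names of RUNNING containers from lxc-ls -f
# style output (skip the header row, match on the STATE column).
running_names() {
    awk 'NR > 1 && $2 == "RUNNING" { print $1 }'
}

printf '%s\n' \
  'NAME      STATE   AUTOSTART GROUPS IPV4            IPV6' \
  'centos-c1 RUNNING 0         -      192.168.122.174 -' \
  'debian-c1 STOPPED 0         -      -               -' |
  running_names
```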

How do I query information about a container?

The syntax is:

$ lxc-info -n {container}

$ lxc-info -n centos-c1

 

Sample outputs:

Name:           centos-c1

State:          RUNNING

PID:            5749

IP:             192.168.122.174

CPU use:        0.87 seconds

BlkIO use:      6.51 MiB

Memory use:     31.66 MiB

KMem use:       3.01 MiB

Link:           vethQIP1US

TX bytes:      2.04 KiB

RX bytes:      8.77 KiB

Total bytes:   10.81 KiB
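A single field, such as the IP address, can be pulled out of that output for use in scripts. Another sketch; the printf sample stands in for `sudo lxc-info -n centos-c1`:

```shell
# Sketch: extract one "Key: value" field from lxc-info style output.
info_field() {
    awk -F':[ \t]+' -v k="$1" '$1 == k { print $2 }'
}

printf 'Name:           centos-c1\nState:          RUNNING\nIP:             192.168.122.174\n' |
  info_field IP
```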

How do I stop/start/restart a container?

The syntax is:

$ sudo lxc-start -n {container}

$ sudo lxc-start -n fedora-c1

$ sudo lxc-stop -n {container}

$ sudo lxc-stop -n fedora-c1
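There is no separate lxc-restart command, so a stop followed by a start is one way to bounce a container. The wrapper below is a sketch; the runner argument (pass echo for a dry run, an empty string to execute) is my own convention:

```shell
# Sketch: restart = stop + start. Pass "echo" to preview the commands.
lxc_restart() {
    name="$1" run="$2"
    $run sudo lxc-stop -n "$name"
    $run sudo lxc-start -n "$name"
}

lxc_restart fedora-c1 echo
```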

How do I monitor container statistics?

To display containers, updating every second, sorted by memory use:

$ lxc-top --delay 1 --sort m

 

To display containers, updating every second, sorted by cpu use:

$ lxc-top --delay 1 --sort c

 

To display containers, updating every second, sorted by block I/O use:

$ lxc-top --delay 1 --sort b

 

Sample outputs:

Fig.03: Shows container statistics with lxc-top

How do I destroy/delete a container?

The syntax is:

$ sudo lxc-destroy -n {container}

$ sudo lxc-stop -n fedora-c2

$ sudo lxc-destroy -n fedora-c2

 

If a container is running, you can stop and destroy it in one step with the -f option:

$ sudo lxc-destroy -f -n fedora-c2

How do I create, list, and restore container snapshots?

The syntax for each snapshot operation is as follows. Please note that you must use a snapshot-aware backing store such as btrfs, ZFS, or LVM.

Create a snapshot of a container

$ sudo lxc-snapshot -n {container} -c {comment-file}

$ sudo lxc-snapshot -n centos-c1 -c /tmp/comment.txt

(where /tmp/comment.txt is a plain text file containing your snapshot comment; the -c option takes a file name, not the comment text itself)

List snapshots of a container

$ sudo lxc-snapshot -n centos-c1 -L -C

Restore a snapshot of a container

$ sudo lxc-snapshot -n centos-c1 -r snap0

Destroy/delete a snapshot of a container

$ sudo lxc-snapshot -n centos-c1 -d snap0
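Since lxc-snapshot's -c option expects a file containing the comment, a small wrapper can generate a dated comment file on the fly, e.g. before applying patches. This is a sketch under that assumption; the runner argument (pass echo for a dry run, an empty string to execute) is my own convention:

```shell
# Sketch: snapshot a container with a dated comment file before patching.
# Pass "echo" as the runner for a dry run, "" to execute for real.
snapshot_before_patch() {
    name="$1" run="$2"
    comment=$(mktemp)
    echo "before patches $(date +%F)" > "$comment"
    $run sudo lxc-snapshot -n "$name" -c "$comment"
}

snapshot_before_patch centos-c1 echo
```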

 

 
