Introduction
I warn you right away that this article is my first look at LXC containers. I have no practical experience with them, so do not take my recommendations and apply them blindly.
I will not write much about what containers are, how they differ from virtual machines, or how container systems (LXC, Docker, OpenVZ, etc.) differ from each other. All of that can be found online on the projects’ own sites. Instead, I will explain it in my own words.
- The main difference between a container and a virtual machine is that a container runs at the same level as the host server’s hardware. It needs no virtual devices: the container sees the host’s real hardware, which is a plus for performance. It uses the host system’s kernel, with resources isolated within that system.
- A container can scale its resources up to the entire physical server. For example, a container can use the same drive and file system as the host machine. There is no need to carve the disk into pieces, as with virtual machines, and hand each one a slice, where one guest may never touch its share while another runs out of space. With containers it is simpler: everyone shares the host’s drive. That said, you can still set disk restrictions for a container, just as in a virtual machine, and the same goes for CPU and memory.
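For illustration only (I have not tested these limits myself; the keys follow the LXC 1.x config format that appears later in this article), CPU and memory restrictions are set through cgroup keys in a container’s config:

```
# Hypothetical limits for one container, set in its config file
lxc.cgroup.memory.limit_in_bytes = 512M
lxc.cgroup.cpu.shares = 512
lxc.cgroup.cpuset.cpus = 0
```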
I have hardly worked with containers at all, apart from Docker, and with Docker only in conjunction with developers. I do not use it for my own purposes because it seems ill-suited to my tasks. I do not want to get into that discussion here; perhaps another time, in an article about Docker, if there is one. But in general I do not like Docker, especially in production.
I’ll tell you why I wanted to look at containers instead of virtual machines that I have been using everywhere for many years.
- As written above, I am attracted by the ability to use the host machine’s resources directly. I have a 1TB system disk with root on it, and all containers share it for as long as there is space.
- Easy backup of, and access to, files in containers. You can view a container’s files simply by going into the container’s directory from the host machine. Everything is stored in the open, so it is very convenient to back up with rsync or by some other means.
- Containers are easy to copy, scale, and manage. They take up little space, and you can fix a config inside a container’s system straight from the host.
For example, I have fairly powerful VDS servers from ihor: 2 cores, 8 GB of RAM, a 150GB disk. What is hosted there sometimes does not fully utilize the virtual machine’s resources. I would like to put them to use without affecting the main projects on the machine. Sometimes I also want a test environment for sites, to try new software versions, and for that I would normally need a separate virtual machine. Instead, I decided to try LXC containers.
From a networking standpoint, there are two ways to use LXC containers:
- Order a separate external IP for each container, configure a bridge on the real network interface, and put the containers directly on the Internet. This option works if you have spare IP addresses or do not mind paying for them. It is the most convenient way to work.
- Configure a virtual bridge, set up NAT, and forward ports from the host machine into the containers. Not very convenient, but a perfectly workable option nonetheless.
I will talk about both methods, as I checked both of them. We will configure everything on CentOS 7.
If you do not have your server with CentOS 7 yet, then I recommend my materials on this topic:
- Install CentOS 7.
- Configuring CentOS 7.
Network setup for LXC containers
Let’s start by setting up a network for containers. We will need the bridge-utils package. Install it:
# yum install bridge-utils
Set up a virtual bridge that only the containers will use, on their own virtual network – 10.1.1.1/24. To do this, create a file named ifcfg-virbr0 in /etc/sysconfig/network-scripts with the following content:
# mcedit /etc/sysconfig/network-scripts/ifcfg-virbr0
DEVICE=virbr0
BOOTPROTO=static
IPADDR=10.1.1.1
NETMASK=255.255.255.0
ONBOOT=yes
TYPE=Bridge
NM_CONTROLLED=no
After changing the network settings, it is better to reboot. Check what happened:
# ip a
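You should see virbr0 with the address 10.1.1.1 in the output. A trivial helper (my own addition, nothing LXC-specific) can automate the check:

```shell
#!/bin/bash
# Return success if a network interface with the given name exists
has_iface() {
    ip -o link show "$1" >/dev/null 2>&1
}

if has_iface virbr0; then
    ip -4 -o addr show virbr0    # print the bridge's IPv4 address
else
    echo "virbr0 is missing, recheck ifcfg-virbr0" >&2
fi
```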
From here on, a guide to configuring a gateway on CentOS is helpful, since we need to implement part of a gateway’s functionality on the host. Namely:
- Enable packet routing between network interfaces
- Configure NAT for container virtual network
- Configure port forwarding to containers
Enable packet routing. To do this, add the following line to the very end of /etc/sysctl.conf:
net.ipv4.ip_forward = 1
Apply the change:
# sysctl -p
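To double-check that the setting really took effect (my own small addition), read the current value back from procfs:

```shell
#!/bin/bash
# Prints 1 once net.ipv4.ip_forward has been applied by sysctl -p
cat /proc/sys/net/ipv4/ip_forward
```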
Now the main thing: configuring iptables. It rarely comes together on the first try, and people setting it up for the first time usually have questions. CentOS 7 ships with firewalld by default. I do not use it and always turn it off, not because it is bad or inconvenient, but because I am used to working with iptables directly and have many ready-made configurations for it.
Disable firewalld:
# systemctl stop firewalld
# systemctl disable firewalld
Install iptables services:
# yum install iptables-services
Now let’s write the iptables config. I took the example from an article on configuring a gateway for a local network and changed only the virtual network address and the interface name – we need essentially the same thing. Here is the config from a working server:
# mcedit /etc/iptables.sh
#!/bin/bash
#
# Variable declarations
export IPT="iptables"
# Interface facing the internet
export WAN=ens18
export WAN_IP=95.169.190.64
# LXC network
export LAN1=virbr0
export LAN1_IP_RANGE=10.1.1.1/24

# Flush all iptables chains
$IPT -F
$IPT -F -t nat
$IPT -F -t mangle
$IPT -X
$IPT -t nat -X
$IPT -t mangle -X

# Default policies for traffic that matches no rule
$IPT -P INPUT DROP
$IPT -P OUTPUT DROP
$IPT -P FORWARD DROP

# Allow local loopback traffic
$IPT -A INPUT -i lo -j ACCEPT
$IPT -A OUTPUT -o lo -j ACCEPT

# Allow outgoing connections from the server itself
$IPT -A OUTPUT -o $WAN -j ACCEPT

# Allow traffic from the LXC network out and back
$IPT -A FORWARD -i $LAN1 -o $WAN -j ACCEPT
$IPT -A FORWARD -i $WAN -o $LAN1 -j ACCEPT
$IPT -A INPUT -i $LAN1 -j ACCEPT
$IPT -A OUTPUT -o $LAN1 -j ACCEPT

# Enable NAT
$IPT -t nat -A POSTROUTING -o $WAN -s $LAN1_IP_RANGE -j MASQUERADE

# Forward ports into the LXC_centos container
$IPT -t nat -A PREROUTING -p tcp --dport 23543 -i ${WAN} -j DNAT --to 10.1.1.2:22
$IPT -t nat -A PREROUTING -p tcp --dport 80 -i ${WAN} -j DNAT --to 10.1.1.2:80
$IPT -t nat -A PREROUTING -p tcp --dport 443 -i ${WAN} -j DNAT --to 10.1.1.2:443

# The ESTABLISHED state means the packet is not the first in a connection.
# Accept all already initiated connections and their children
$IPT -A INPUT -p all -m state --state ESTABLISHED,RELATED -j ACCEPT
# Accept already initiated connections and their children
$IPT -A OUTPUT -p all -m state --state ESTABLISHED,RELATED -j ACCEPT
# Allow forwarding for already initiated connections and their children
$IPT -A FORWARD -p all -m state --state ESTABLISHED,RELATED -j ACCEPT

# Clamp TCP MSS to the path MTU; needed because of differing MTU values
$IPT -I FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu

# Drop all packets that cannot be identified
# and therefore cannot have a definite state
$IPT -A INPUT -m state --state INVALID -j DROP
$IPT -A FORWARD -m state --state INVALID -j DROP

# These tie up system resources so that real
# data exchange becomes impossible; cut them off
$IPT -A INPUT -p tcp ! --syn -m state --state NEW -j DROP
$IPT -A OUTPUT -p tcp ! --syn -m state --state NEW -j DROP

# Allow pings
$IPT -A INPUT -p icmp --icmp-type echo-reply -j ACCEPT
$IPT -A INPUT -p icmp --icmp-type destination-unreachable -j ACCEPT
$IPT -A INPUT -p icmp --icmp-type time-exceeded -j ACCEPT
$IPT -A INPUT -p icmp --icmp-type echo-request -j ACCEPT

# Open the ssh port
$IPT -A INPUT -i $WAN -p tcp --dport 22 -j ACCEPT

# Save the rules
/sbin/iptables-save > /etc/sysconfig/iptables
Do not forget to change the network interface names and IP addresses to your own. I do not recommend configuring a firewall unless you have access to the server console: one mistake and you can lock yourself out of the server.
My example forwards the ssh, http, and https ports into a container with the IP address 10.1.1.2. I will create that container later in the examples.
Make a script /etc/iptables.sh executable:
# chmod 0740 /etc/iptables.sh
Launch iptables and add to startup:
# systemctl start iptables.service
# systemctl enable iptables.service
We execute the script with the rules:
# /etc/iptables.sh
We check the established rules:
# iptables -L -v -n
Check NAT and port forwarding:
# iptables -L -v -n -t nat
So far I have looked at the case where containers sit in their own virtual network without direct access to the external one. If they go out through a bridge with a direct IP, iptables does not need to be touched at all. It is enough to enable packet routing between interfaces, create a bridge, and add the server’s real interface to it. The containers then connect to this bridge. It works the same way as a bridge in Proxmox.
Create a config for the new bridge:
# mcedit /etc/sysconfig/network-scripts/ifcfg-virbr1
DEVICE=virbr1
BOOTPROTO=static
IPADDR=192.168.13.25
NETMASK=255.255.255.0
GATEWAY=192.168.13.1
DNS1=192.168.13.1
ONBOOT=yes
TYPE=Bridge
NM_CONTROLLED=no
And we bring the main network interface config to this form:
# mcedit /etc/sysconfig/network-scripts/ifcfg-eth0
TYPE=Ethernet
BOOTPROTO=none
DEVICE=eth0
ONBOOT=yes
NM_CONTROLLED=no
BRIDGE=virbr1
After changing the network settings, it is better to reboot the server. We assume that we sorted out the network and set everything up. I remind you again:
- virbr0 – a virtual bridge that forms a virtual LAN for the containers. The containers reach the external network through the host, where NAT and port forwarding are configured with iptables.
- virbr1 – a bridge that includes the host’s real physical interface. Through this bridge, containers get direct access to the external network.
Install LXC on CentOS 7
First, we connect the epel repository:
# yum install epel-release
Now install LXC itself:
# yum install debootstrap lxc lxc-templates lxc-extra
Check the system’s readiness for LXC:
# lxc-checkconfig
Everything should be enabled, except for two lines:
newuidmap is not installed
newgidmap is not installed
Launch LXC and add it to startup:
# systemctl start lxc
# systemctl enable lxc
We check:
# systemctl status lxc
Everything is in order, the LXC service is installed and running. We proceed to the creation and configuration of containers.
Create and configure LXC containers
Create a new container named LXC_centos based on CentOS.
# lxc-create -n LXC_centos -t centos
The template name is given after the -t switch. The list of templates available for installation is in /usr/share/lxc/templates/:
# ll /usr/share/lxc/templates
After the container is installed, we need to set a root password for it.
To do this, run the following command on the host and enter a new password:
# chroot /var/lib/lxc/LXC_centos/rootfs passwd
The new container lives in /var/lib/lxc/LXC_centos. That directory contains its config file. Bring it to the following form:
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = virbr0
lxc.network.hwaddr = fe:89:c3:04:aa:38
lxc.rootfs = /var/lib/lxc/LXC_centos/rootfs
lxc.network.name = eth0
lxc.network.ipv4 = 10.1.1.2/24
lxc.network.ipv4.gateway = 10.1.1.1
lxc.include = /usr/share/lxc/config/centos.common.conf
lxc.arch = x86_64
lxc.utsname = LXC_centos
lxc.autodev = 1
Let’s set some network settings in the container itself. Add dns servers to /etc/resolv.conf:
# mcedit /var/lib/lxc/LXC_centos/rootfs/etc/resolv.conf
nameserver 77.88.8.1
nameserver 8.8.4.4
Next, set up the network interface config as follows:
# mcedit /var/lib/lxc/LXC_centos/rootfs/etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
HOSTNAME=LXC_centos
NM_CONTROLLED=no
TYPE=Ethernet
Configuring the LXC container is complete. Run it:
# lxc-start -n LXC_centos -d
Let’s see the state of the container:
# lxc-info -n LXC_centos
Connect to the container console:
# lxc-console -n LXC_centos -t 0
Note the -t 0 switch. Without it, connecting to the console will try tty 1, which will not answer. You will only see this on the screen:
Connected to tty 1
Type <Ctrl+a q> to exit the console, <Ctrl+a Ctrl+a> to enter Ctrl+a itself
There is nothing you can do there except disconnect by pressing Ctrl+a, then q separately. I point this out because it is not obvious how to type that combination. With the -t switch we specify console zero and connect to it successfully. To return to the host system, press Ctrl+a, then q separately.
If after connecting to the container console you do not see the welcome screen, press Enter on the keyboard.
That completes the setup of an LXC container with CentOS 7. Check the network; everything should be in order. To connect to the container over ssh, connect to port 23543 on the host, assuming you used my iptables configuration example from earlier.
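For example, the connection from outside looks like this; the port probe helper is my own sketch using bash’s /dev/tcp pseudo-device, and the IP is the one from the iptables example:

```shell
#!/bin/bash
# Return success if a TCP connection to host:port can be opened.
# Uses bash's /dev/tcp pseudo-device, so no extra tools are needed.
port_open() {
    (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null
}

# On a real host (IP and port from the iptables example above):
# port_open 95.169.190.64 23543 && ssh -p 23543 root@95.169.190.64
```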
Above, I gave an example using the virbr0 virtual bridge for a container. If instead you use the bridge with the physical interface, for direct container access to the external network, the container settings look like this:
lxc.rootfs = /var/lib/lxc/LXC_centos/rootfs
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = virbr1
lxc.network.hwaddr = fe:89:c3:04:aa:38
lxc.network.name = eth0
lxc.network.ipv4 = 192.168.13.44/24
lxc.network.ipv4.gateway = 192.168.13.1
lxc.network.veth.pair = veth-01
lxc.include = /usr/share/lxc/config/centos.common.conf
lxc.arch = x86_64
lxc.utsname = LXC_centos
lxc.autodev = 1
In this case, 192.168.13.0/24 is the external network for the host; in my setup it is the local network of the test environment. If the bridge carries a real public IP address, the container will need its own real external IP address as well.
Let me point out a very important nuance that cost me several hours of troubleshooting. To write this article I used a virtual machine on Hyper-V whose only network interface was a bridge to the external network. When I bridged a container out through this interface via virbr1, nothing worked: the container could be seen only by the host and nothing beyond it. I rechecked the settings many times and could not understand why it did not work.
It turned out that, by default, the hypervisor only lets devices onto the external network whose MAC addresses it knows, in this case just the MAC address of the virtual machine itself. The containers on that machine have other MAC addresses, unknown to the hypervisor, so it did not pass their packets to the external network. In Hyper-V specifically, this behavior can be changed in the virtual machine’s network adapter properties by allowing MAC address spoofing.
After that, the bridge to the external network worked fine and the containers got access to it. The same behavior occurs with Proxmox virtual machines: by default, containers inside them will not get access to the external network. I did not have access to the settings of the Proxmox hypervisor running the machine I tested on, so I did not dig into the details, but keep this point in mind. With VMware, by the way, it will be the same: containers will not reach the external network.
Next, I’ll talk about the problems that I encountered while working with LXC containers.
Problems and errors
Httpd does not install
I should say right away that I used CentOS both as the host system and for the LXC container templates; on other systems these errors may not occur. The first thing I ran into was that the httpd package would not install. The error looked like this:
# yum install httpd
Running transaction
  Installing : httpd-2.4.6-67.el7.centos.6.x86_64        1/1
Error unpacking rpm package httpd-2.4.6-67.el7.centos.6.x86_64
error: unpacking of archive failed on file /usr/sbin/suexec;5a8adbd2: cpio: cap_set_file
  Verifying  : httpd-2.4.6-67.el7.centos.6.x86_64        1/1

Failed:
  httpd.x86_64 0:2.4.6-67.el7.centos.6

Complete!
The Internet is full of information about this error in CentOS containers. It occurs not only in LXC but also in Docker. In Docker it was fixed at some point; in LXC it still comes up, and I am not sure a fix is coming.
The essence of the error is that kernel restrictions prevent file capabilities from working in the container. I did not study file capabilities in depth and understand the error only superficially. It is analyzed in detail here – https://github.com/LXC/lxd/issues/1245. Since the workaround is to switch the container to privileged mode, where the host root and the container root share the same system id, the nature of the error is roughly clear.
I did not switch the container to privileged mode; instead I did the following: I chrooted into the container from the host machine and installed httpd from there. On the host, run:
# chroot /var/lib/lxc/LXC_centos/rootfs
# yum install httpd
Now you can go into the container and verify that httpd is installed and working properly. This workaround is fine when you administer both the host and the containers. But if you hand containers over to someone else to manage, you will either have to fix the container owners’ problems yourself or find another solution.
Container hangs and loads host cpu
The next unpleasant error hit me right after I started testing LXC containers: a container would hang a few minutes after starting. I could neither stop nor delete it, and on the host itself the /usr/lib/systemd/systemd-journald process was loading one cpu core at 100%.
The solution to the problem is as follows. Add the parameter to the container config:
lxc.kmsg = 0
Restart the container, go into it, and delete /dev/kmsg (in the container, not on the host!):
# rm -f /dev/kmsg
After that, the containers began to work stably and stopped hanging. I installed bitrix-env and deployed the site. Everything worked without problems with normal speed.
Chronyd does not work
After installing and running chronyd in the LXC container with centos 7, we get the error:
ConditionCapability=CAP_SYS_TIME was not met
At that point I grew a little tired of digging through LXC errors and realized I did not want to use these containers in my work. Nevertheless, I gathered my strength and googled some more. As it turned out, this is not a bug but a limitation of running in a container: for chronyd to work it needs access to the adjtimex() system call, which a container in unprivileged mode does not have, so the service does not start.
This situation is controlled by the parameter

ConditionCapability=CAP_SYS_TIME

in the systemd unit of the chronyd service inside the container – /etc/systemd/system/multi-user.target.wants/chronyd.service. If we remove this parameter and start the service, we get the error:
adjtimex(0x8001) failed : Operation not permitted
In other words, a container without privileged mode cannot manage the system time. This is an architectural feature of containers and there is nothing to be done about it; keep the time synchronized on the host instead.
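The limitation can be observed directly from inside a container. The sketch below is my own; it assumes a Linux /proc and that CAP_SYS_TIME is capability number 25, as defined in the kernel headers:

```shell
#!/bin/bash
# Report whether the current process holds CAP_SYS_TIME (capability 25).
# Inside an unprivileged container this prints "CAP_SYS_TIME missing".
cap_sys_time() {
    caps=$(awk '/^CapEff:/{print $2}' /proc/self/status)
    if [ $(( (0x$caps >> 25) & 1 )) -eq 1 ]; then
        echo "CAP_SYS_TIME present"
    else
        echo "CAP_SYS_TIME missing"
    fi
}
cap_sys_time
```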
Conclusion
Don’t like the article and want to teach me how to administer? Go ahead, I like to learn. The comments are at your disposal. Tell me how to do it right!
I have done a small review of LXC containers’ capabilities. I did not touch on questions such as resource limits, backup, and container recovery. I have not tested those yet, and I do not want to write about what I have not tried; I only want to record the knowledge gained so far.
Overall, my conclusion about LXC is mixed. On the one hand, everything seems convenient and mostly works. On the other, unresolved issues like the container hangs are unclear to me: why it did not work out of the box, I do not know. On top of that, commands such as lxc-info would freeze from time to time. You ask for info on a container and get silence in response: the command just hangs in the console with no output. I left one for an hour with no change; after restarting the container it responds normally. Such obvious errors are very alarming and warn against using this in production.
I will keep testing on my small projects. If all goes well, I will supplement the article with new material. In principle, if it keeps working stably, it can be used in places. Containers are a very interesting technology overall and a great complement to virtual machines, especially when an entire host serves the same type of load, for example websites. You can safely isolate workloads with minimal resource overhead, which cannot be said of virtual machines even in a minimal configuration.
If you have experience with LXC or other container systems, please share it in the comments. Docker aside, I have already worked with it a lot and know it well enough.