The container technology we use here is LXC, which is a good choice when a system-oriented container is needed. I ran into problems booting a CentOS container with LXC 1.0, so I chose the stable LXC 2.0 release, which has turned out to be solid and reliable.
Usually we define a domain for every container that uses the DPDK-oriented data plane, and then allocate virtual links through the DPDK --vdev command directive. There is a script that does most of the work; here we call it vecutils.sh, and you can find it on GitHub: [------------------]:
All of the following runs on the host:
#create a domain with name:demo
[root@localhost dpdk-16.07-vecring]# ./vecutils.sh dom_alloc demo
#list all available domains
[root@localhost dpdk-16.07-vecring]# ./vecutils.sh dom_ls
0:domain:demo huge-dir:mounted
1:domain:testcontainer huge-dir:mounted
2:domain:vnf1 huge-dir:mounted
#and then you will find that a new directory is mounted as hugetlbfs:
[root@localhost mycentos1]# mount|grep vecring
hugetlbfs on /var/vecring/testcontainer/huge type hugetlbfs (rw,relatime,seclabel,pagesize=2m)
hugetlbfs on /var/vecring/vnf1/huge type hugetlbfs (rw,relatime,seclabel,pagesize=2m)
hugetlbfs on /var/vecring/demo/huge type hugetlbfs (rw,relatime,seclabel,pagesize=2m)
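For reference, the mount that dom_alloc sets up can be reproduced by hand roughly like this (a sketch of what the script likely does, not its actual contents):

```sh
# create a per-domain directory and mount a 2MB-page hugetlbfs on it (run as root)
mkdir -p /var/vecring/demo/huge
mount -t hugetlbfs -o pagesize=2M,rw none /var/vecring/demo/huge
```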
Next we map these directories into the container by adding extra lines to the config file of the target container. A possible config looks like this:
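A minimal sketch of the extra mount entries, assuming the container is named mycentos1 and the domain is testcontainer (the exact lines were not preserved here, so adjust the paths to your own domain):

```sh
# extra lines in /var/lib/lxc/mycentos1/config
# bind-mount the domain directory, then the hugetlbfs mount inside it,
# so the container sees them under the same path as the host
lxc.mount.entry = /var/vecring/testcontainer var/vecring/testcontainer none bind,create=dir 0 0
lxc.mount.entry = /var/vecring/testcontainer/huge var/vecring/testcontainer/huge none bind,create=dir 0 0
```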
By doing so, the host and the container can refer to these mounted directories by the same path.
You can verify it inside the container:
[root@mycentos1 ~]# mount |grep vecring
/dev/sda3 on /var/vecring/testcontainer type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
hugetlbfs on /var/vecring/testcontainer/huge type hugetlbfs (rw,relatime,seclabel,pagesize=2m)
2.2 DPDK environment setup
Clone the DPDK tree with the eth_vecring driver from git: [------------------------], compile it as usual, and then run a DPDK application (here we take l2fwd as an example).
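For dpdk-16.07, "compile it as usual" roughly means the legacy make-based build (target name may differ on your toolchain):

```sh
cd dpdk-16.07-vecring
make install T=x86_64-native-linuxapp-gcc
# build the l2fwd example against the tree we just built
export RTE_SDK=$PWD
export RTE_TARGET=x86_64-native-linuxapp-gcc
make -C examples/l2fwd
```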
--vdev specifies a virtual DPDK PMD device, which behaves just like a normal PCI device. The domain sub-parameter is the domain we created before, link is the name of the virtual link, and mac should be self-explanatory. master indicates whether this PMD is the master end of the link; usually the host side of the virtual link is the master, and the container-side EAL parameters should never specify this sub-parameter. socket is the preferred NUMA node on which hugepage memory is allocated, but it is not mandatory, because the PMD will automatically allocate from other nodes when it finds no memory available on the preferred node.
Additionally, --huge-dir points EAL at the mount point we created, and --huge-unlink unlinks the hugepage files there so pages left over from a previous run at that mount point are released. --socket-mem makes sure this DPDK process will not consume all available hugepage memory, which is critical because other DPDK processes residing in containers need hugepages too.
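Putting these parameters together, a possible host-side invocation might look like the following (the vdev name eth_vecring0, the MAC, and the exact sub-parameter spelling are assumptions based on the description above; check the driver's own documentation for the authoritative syntax):

```sh
# host side: master end of the virtual link in domain "testcontainer"
./build/l2fwd -c 0x3 -n 4 \
    --huge-dir /var/vecring/testcontainer/huge --huge-unlink \
    --socket-mem 128 \
    --vdev=eth_vecring0,domain=testcontainer,link=tap12345,mac=00:11:22:33:44:55,master=1,socket=0 \
    -- -p 0x1
```

Inside the container the invocation would be similar, but without master=1.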
By now you will see some hugepage-backed files under /var/vecring/{domain}, where {domain} is testcontainer. Here we check it:
Next we switch into the LXC container with lxc-attach -n mycentos1 -- bash, where mycentos1 is the name of the container:
[root@mycentos1 mywork]# mount |grep vecring
/dev/sda3 on /var/vecring/testcontainer type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
hugetlbfs on /var/vecring/testcontainer/huge type hugetlbfs (rw,relatime,seclabel,pagesize=2m)
[root@mycentos1 mywork]# ls -l /var/vecring/testcontainer/huge/
total 8192
-rwxr-xr-x. 1 root root 2097152 Dec 20 15:49 vecring-tap12345.inbound-0
-rwxr-xr-x. 1 root root 2097152 Dec 20 15:49 vecring-tap12345.inbound-1
-rwxr-xr-x. 1 root root 2097152 Dec 20 15:49 vecring-tap12345.outbound-0
-rwxr-xr-x. 1 root root 2097152 Dec 20 15:49 vecring-tap12345.outbound-1
[root@mycentos1 mywork]# ls -l /var/vecring/testcontainer/
total 4
drwxr-xr-x. 2 root root 0 Dec 20 15:49 huge
-rw-r--r--. 1 root root 242 Dec 20 15:49 tap12345.metadata
You can see that the container and the host share the same hugetlbfs mapping path. Within the container the same metadata is present, so a DPDK application inside the container can still access these files.