<!--
lxc: linux Container library
(C) Copyright IBM Corp. 2007, 2008
Authors:
Daniel Lezcano <dlezcano at fr.ibm.com>
This library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
This library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with this library; if not, write to the Free Software
Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
-->
<!DOCTYPE refentry PUBLIC "-//Davenport//DTD DocBook V3.0//EN" [
<!ENTITY seealso SYSTEM "@builddir@/see_also.sgml">
]>
<refentry>
<docinfo>
<date>@LXC_GENERATE_DATE@</date>
</docinfo>
<refmeta>
<refentrytitle>lxc</refentrytitle>
<manvolnum>7</manvolnum>
<refmiscinfo>
Version @PACKAGE_VERSION@
</refmiscinfo>
</refmeta>
<refnamediv>
<refname>lxc</refname>
<refpurpose>
linux containers
</refpurpose>
</refnamediv>
<refsect1>
<title>Quick start</title>
<para>
You are in a hurry and don't want to read this man page. Ok,
without warranty, here is the command to launch a shell inside
a container with a predefined configuration template; it may
work.
<command>@BINDIR@/lxc-execute -n foo -f
@DOCDIR@/examples/lxc-macvlan.conf /bin/bash</command>
</para>
</refsect1>
<refsect1>
<title>Overview</title>
<para>
Container technology is actively being merged into the
mainstream Linux kernel. It provides resource management
through the control groups (aka process containers) and
resource isolation through the namespaces.
</para>
<para>
Linux containers, <command>lxc</command>, aims to use these
new functionalities to provide a userspace container object
which provides full resource isolation and resource control
for an application or a system.
</para>
<para>
The first objective of this project is to make life easier
for the kernel developers involved in the containers project,
and especially to continue working on the new
Checkpoint/Restart features. <command>lxc</command> is small
enough to easily manage a container with simple command lines
and complete enough to be used for other purposes.
</para>
</refsect1>
<refsect1>
<title>Requirements</title>
<para>
<command>lxc</command> relies on a set of functionalities
provided by the kernel which need to be active. Depending on
which functionalities are missing, <command>lxc</command> will
either work with a restricted set of features or simply
fail.
</para>
<para>
The following list gives the kernel features that must be
enabled to have a fully featured container:
</para>
<programlisting>
* General setup
* Control Group support
-> Namespace cgroup subsystem
-> Freezer cgroup subsystem
-> Cpuset support
-> Simple CPU accounting cgroup subsystem
-> Resource counters
-> Memory resource controllers for Control Groups
* Group CPU scheduler
-> Basis for grouping tasks (Control Groups)
* Namespaces support
-> UTS namespace
-> IPC namespace
-> User namespace
-> Pid namespace
-> Network namespace
* Device Drivers
* Character devices
-> Support multiple instances of devpts
* Network device support
-> MAC-VLAN support
-> Virtual ethernet pair device
* Networking
* Networking options
-> 802.1d Ethernet Bridging
* Security options
-> File POSIX Capabilities
</programlisting>
<para>
A kernel version >= 2.6.27, as shipped with the distros, will
work with <command>lxc</command>, though with fewer
functionalities, but still enough to be interesting.
With kernel 2.6.29, <command>lxc</command> is fully
functional.
The helper script <command>lxc-checkconfig</command> will give
you information about your kernel configuration.
</para>
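<para>
For example, running the script with no arguments should report,
for each of the options listed above, whether it is enabled in the
running kernel:
<programlisting>
@BINDIR@/lxc-checkconfig
</programlisting>
</para>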
<para>
Before using <command>lxc</command>, your system should be
configured with file capabilities, otherwise you will need
to run the <command>lxc</command> commands as root.
</para>
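<para>
As an illustration only, file capabilities can be granted to the
lxc commands with <command>setcap</command>; the capability set
shown below is an assumption, the exact capabilities required
depend on your configuration:
<programlisting>
# illustrative capability set, adjust to your needs
setcap cap_sys_admin,cap_net_admin,cap_net_raw+ep @BINDIR@/lxc-execute
</programlisting>
</para>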
<para>
The control group can be mounted anywhere, eg:
<command>mount -t cgroup cgroup /cgroup</command>.
If you want to dedicate a specific cgroup mount point
for <command>lxc</command>, that is to have different cgroups
mounted at different places with different options but
let <command>lxc</command> use one location, you can bind
the mount point to the <option>lxc</option> name, eg:
<command>mount -t cgroup lxc /cgroup4lxc</command> or
<command>mount -t cgroup -o ns,cpuset,freezer,devices
lxc /cgroup4lxc</command>.
</para>
</refsect1>
<refsect1>
<title>Functional specification</title>
<para>
A container is an object that isolates some resources of the
host for the application or system running inside it.
</para>
<para>
The application or system will be launched inside a
container specified by a configuration that is either
created beforehand or passed as a parameter to the start commands.
</para>
<para>How to run an application in a container?</para>
<para>
Before running an application, you should know which
resources you want to isolate. The default configuration
isolates the pids, the sysv ipc and the mount points. If you want
to run a simple shell inside a container, a basic configuration
is enough, especially if you want to share the rootfs. If you
want to run an application like <command>sshd</command>, you
should provide a new network stack and a new hostname. If you
want to avoid conflicts with some files,
eg. <filename>/var/run/httpd.pid</filename>, you should
remount <filename>/var/run</filename> with an empty
directory. If you want to avoid conflicts in all cases,
you can specify a rootfs for the container. The rootfs can be a
directory tree, previously bind mounted from the initial rootfs,
so you can still use your distro but with your
own <filename>/etc</filename> and <filename>/home</filename>.
</para>
<para>
Here is an example of directory tree
for <command>sshd</command>:
<programlisting>
[root@lxc sshd]$ tree -d rootfs
rootfs
|-- bin
|-- dev
|   |-- pts
|   `-- shm
|       `-- network
|-- etc
|   `-- ssh
|-- lib
|-- proc
|-- root
|-- sbin
|-- sys
|-- usr
`-- var
    |-- empty
    |   `-- sshd
    |-- lib
    |   `-- empty
    |       `-- sshd
    `-- run
        `-- sshd
</programlisting>
and the mount points file associated with it:
<programlisting>
[root@lxc sshd]$ cat fstab
/lib /home/root/sshd/rootfs/lib none ro,bind 0 0
/bin /home/root/sshd/rootfs/bin none ro,bind 0 0
/usr /home/root/sshd/rootfs/usr none ro,bind 0 0
/sbin /home/root/sshd/rootfs/sbin none ro,bind 0 0
</programlisting>
</para>
<para>How to run a system in a container?</para>
<para>Running a system inside a container is paradoxically easier
than running an application. Why? Because you don't have to care
about which resources are to be isolated: everything needs to be
isolated. The other resources are specified as isolated but
without configuration, because the container will set them
up, eg. the ipv4 address will be set up by the system container's
init scripts. Here is an example of the mount points file:
<programlisting>
[root@lxc debian]$ cat fstab
/dev /home/root/debian/rootfs/dev none bind 0 0
/dev/pts /home/root/debian/rootfs/dev/pts none bind 0 0
</programlisting>
More information can be added to the container to facilitate its
configuration. For example, the host's <filename>resolv.conf</filename>
file can be made accessible from the container:
<programlisting>
/etc/resolv.conf /home/root/debian/rootfs/etc/resolv.conf none bind 0 0
</programlisting>
</para>
<refsect2>
<title>Container life cycle</title>
<para>
When the container is created, it contains the configuration
information. When a process is launched, the container goes
through the starting and running states. When the last process
running inside the container exits, the container is stopped.
</para>
<para>
If a failure occurs while the container is being initialized, it
will pass through the aborting state.
</para>
<programlisting>
 ---------
| STOPPED |<----------------
 ---------                 |
     |                     |
   start                   |
     |                     |
     V                     |
 ----------                |
| STARTING |--error-       |
 ----------         |      |
     |              |      |
     V              V      |
 ---------     ----------  |
| RUNNING |   | ABORTING | |
 ---------     ----------  |
     |              |      |
no process          |      |
     |              |      |
     V              |      |
 ----------         |      |
| STOPPING |<--------      |
 ----------                |
     |                     |
      ----------------------
</programlisting>
</refsect2>
<refsect2>
<title>Configuration</title>
<para>The container is configured through a configuration
file; the format of this file is described in
<citerefentry>
<refentrytitle><filename>lxc.conf</filename></refentrytitle>
<manvolnum>5</manvolnum>
</citerefentry>
</para>
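<para>
As an illustrative sketch only (the keys shown here are a small
subset, see the man page above for the authoritative list), a
configuration file for a container using a macvlan interface may
look like:
<programlisting>
# container hostname
lxc.utsname = foo
# network virtualized through a macvlan interface linked to eth0
lxc.network.type = macvlan
lxc.network.link = eth0
lxc.network.ipv4 = 1.2.3.4/24
</programlisting>
</para>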
</refsect2>
<refsect2>
<title>Creating / Destroying container
(persistent container)</title>
<para>
A persistent container object can be
created via the <command>lxc-create</command>
command. It takes a container name as parameter, plus an
optional configuration file and template.
The name is used by the different
commands to refer to this
container. The <command>lxc-destroy</command> command will
destroy the container object.
<programlisting>
lxc-create -n foo
lxc-destroy -n foo
</programlisting>
</para>
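<para>
For example, to create a persistent container from an existing
configuration file (the path below is only an illustration):
<programlisting>
lxc-create -n foo -f /etc/lxc/foo.conf
</programlisting>
</para>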
</refsect2>
<refsect2>
<title>Volatile container</title>
<para>It is not mandatory to create a container object
before starting it.
The container can be started directly with a
configuration file as parameter.
</para>
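<para>
For example, reusing the macvlan example configuration from the
quick start above, the container is defined and started in a
single command:
<programlisting>
lxc-execute -n foo -f @DOCDIR@/examples/lxc-macvlan.conf /bin/bash
</programlisting>
</para>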
</refsect2>
<refsect2>
<title>Starting / Stopping container</title>
<para>When the container has been created, it is ready to run an
application or a system.
This is the purpose of the <command>lxc-execute</command> and
<command>lxc-start</command> commands.
If the container was not created before
starting the application, the container will use the
configuration file passed as parameter to the command,
and if there is no such parameter either,
it will use a default isolation.
When the application ends, the container is stopped as well,
but if needed the <command>lxc-stop</command> command can
be used to kill the still-running application.
</para>
<para>
Running an application inside a container is not exactly the
same thing as running a system. For this reason, there are two
different commands to run an application in a container:
<programlisting>
lxc-execute -n foo [-f config] /bin/bash
lxc-start -n foo [-f config] [/bin/bash]
</programlisting>
</para>
<para>
The <command>lxc-execute</command> command will run the
specified command in the container via an intermediate
process, <command>lxc-init</command>.
After launching the specified command,
<command>lxc-init</command> will wait for it to finish, as well
as for all other processes reparented to it
(this makes it possible to support daemons in the container).
In other words, inside the
container, <command>lxc-init</command> has pid 1 and the
first process of the application has pid 2.
</para>
<para>
The <command>lxc-start</command> command will run the specified
command directly in the container.
The pid of the first process is 1. If no command is
specified, <command>lxc-start</command> will
run <filename>/sbin/init</filename>.
</para>
<para>
To summarize, <command>lxc-execute</command> is for running
an application and <command>lxc-start</command> is better suited for
running a system.
</para>
<para>
If the application is no longer responding, is inaccessible or is
not able to finish by itself, the
<command>lxc-stop</command> command will mercilessly kill all the
processes in the container.
<programlisting>
lxc-stop -n foo
</programlisting>
</para>
</refsect2>
<refsect2>
<title>Connect to an available tty</title>
<para>
If the container is configured with ttys, it is possible
to access it through them. It is up to the container to
provide a set of available ttys to be used by the following
command. When the tty is lost, it is possible to reconnect to it
without logging in again.
<programlisting>
lxc-console -n foo -t 3
</programlisting>
</para>
</refsect2>
<refsect2>
<title>Freeze / Unfreeze container</title>
<para>
Sometimes, it is useful to stop all the processes belonging to
a container, eg. for job scheduling. The commands:
<programlisting>
lxc-freeze -n foo
</programlisting>
will put all the processes in an uninterruptible state and
<programlisting>
lxc-unfreeze -n foo
</programlisting>
will resume them.
</para>
<para>
This feature is enabled if the cgroup freezer is enabled in the
kernel.
</para>
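<para>
One way to verify this, assuming <filename>/proc/cgroups</filename>
is available on your system, is to check that the freezer subsystem
is listed there as enabled (the <command>lxc-checkconfig</command>
script mentioned above reports it as well):
<programlisting>
grep freezer /proc/cgroups
</programlisting>
</para>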
</refsect2>
<refsect2>
<title>Getting information about container</title>
<para>When there are a lot of containers, it is hard to keep track
of what has been created or destroyed, what is running, or which
pids are running inside a specific container. For this reason, the
following commands may be useful:
<programlisting>
lxc-ls
lxc-ps --name foo
lxc-info -n foo
</programlisting>
</para>
<para>
<command>lxc-ls</command> lists the containers of the
system. The command is a script built on top
of <command>ls</command>, so it accepts the options of the
<command>ls</command> command, eg:
<programlisting>
lxc-ls -C1
</programlisting>
will display the container list in one column, or:
<programlisting>
lxc-ls -l
</programlisting>
will display the container list with their permissions.
</para>
<para>
<command>lxc-ps</command> will display the pids for a specific
container. Like <command>lxc-ls</command>, <command>lxc-ps</command>
is built on top of <command>ps</command> and accepts the same
options, eg:
<programlisting>lxc-ps --name foo --forest</programlisting>
will display the process hierarchy for the processes
belonging to the 'foo' container.
<programlisting>lxc-ps --lxc</programlisting>
will display all the containers and their processes.
</para>
<para>
<command>lxc-info</command> gives information about a specific
container; at present, only the state of the container is
displayed.
</para>
<para>
Here is an example of how these commands can be combined to
list all the containers and retrieve their state.
<programlisting>
for i in $(lxc-ls -1); do
lxc-info -n $i
done
</programlisting>
And to display the pids of all the containers:
<programlisting>
for i in $(lxc-ls -1); do
lxc-ps --name $i --forest
done
</programlisting>
</para>
<para>
<command>lxc-netstat</command> displays network information for
a specific container. This command is built on top of
the <command>netstat</command> command and will accept its
options.
</para>
<para>
The following command will display the socket information for
the container 'foo'.
<programlisting>
lxc-netstat -n foo -tano
</programlisting>
</para>
</refsect2>
<refsect2>
<title>Monitoring container</title>
<para>It is sometimes useful to track the states of a container,
for example to monitor it or simply to wait for a specific
state in a script.
</para>
<para>
The <command>lxc-monitor</command> command will monitor one or
several containers. The parameter of this command accepts a
regular expression, for example:
<programlisting>
lxc-monitor -n "foo|bar"
</programlisting>
will monitor the states of containers named 'foo' and 'bar', and:
<programlisting>
lxc-monitor -n ".*"
</programlisting>
will monitor all the containers.
</para>
<para>
For a container 'foo' that starts, does some work and exits,
the output will be of the form:
<programlisting>
'foo' changed state to [STARTING]
'foo' changed state to [RUNNING]
'foo' changed state to [STOPPING]
'foo' changed state to [STOPPED]
</programlisting>
</para>
<para>
The <command>lxc-wait</command> command will wait for a specific
state change and then exit. This is useful in scripts, to
synchronize on the launch or the termination of a container. The
parameter is an ORed combination of different states. The
following example shows how to wait for a container that was
started in the background.
<programlisting>
# launch lxc-wait in background
lxc-wait -n foo -s STOPPED &
LXC_WAIT_PID=$!
# this command goes in background
lxc-execute -n foo mydaemon &
# block until the lxc-wait exits
# and lxc-wait exits when the container
# is STOPPED
wait $LXC_WAIT_PID
echo "'foo' is finished"
</programlisting>
</para>
</refsect2>
<refsect2>
<title>Setting the control group for container</title>
<para>The container is tied to the control groups: when a
container is started, a control group is created and associated
with it. The control group properties can be read and modified
while the container is running by using the
<command>lxc-cgroup</command> command.
</para>
<para>
The <command>lxc-cgroup</command> command is used to get or set a
control group subsystem property associated with a
container. The subsystem name is handled by the user; the
command won't do any syntax checking on the subsystem name, and if
the subsystem name does not exist, the command will fail.
</para>
<para>
<programlisting>
lxc-cgroup -n foo cpuset.cpus
</programlisting>
will display the content of this subsystem.
<programlisting>
lxc-cgroup -n foo cpu.shares 512
</programlisting>
will set the subsystem to the specified value.
</para>
</refsect2>
</refsect1>
<refsect1>
<title>Bugs</title>
<para><command>lxc</command> is still in development, so the
command syntax and the API may change. Version 1.0.0 will be
the frozen version.</para>
</refsect1>
&seealso;
<refsect1>
<title>Author</title>
<para>Daniel Lezcano <email>daniel.lezcano@free.fr</email></para>
</refsect1>
</refentry>
<!-- Keep this comment at the end of the file Local variables: mode:
sgml sgml-omittag:t sgml-shorttag:t sgml-minimize-attributes:nil
sgml-always-quote-attributes:t sgml-indent-step:2 sgml-indent-data:t
sgml-parent-document:nil sgml-default-dtd-file:nil
sgml-exposed-tags:nil sgml-local-catalogs:nil
sgml-local-ecat-files:nil End: -->