The boot firmware needed by vmd(8) guests is provided by the vmm-firmware package.
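If it is not already installed, the firmware can be fetched with fw_update(8):

# fw_update vmm-firmware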
Processor compatibility can be checked with the following command:
$ dmesg | egrep '(VMX/EPT|SVM/RVI)'

Before going further, enable and start the vmd(8) service.
# rcctl enable vmd
# rcctl start vmd
In this example, a virtual machine with 1GB of RAM and a 50GB disk image will be created and booted from the install73.iso image file.
# vmctl create -s 50G disk.qcow2
vmctl: qcow2 imagefile created
# vmctl start -m 1G -L -i 1 -r install73.iso -d disk.qcow2 example
vmctl: started vm 1 successfully, tty /dev/ttyp8
# vmctl show
   ID   PID VCPUS  MAXMEM  CURMEM     TTY   OWNER NAME
    1 72118     1    1.0G   88.1M   ttyp8    root example

To view the console of the newly created VM, attach to its serial console:
# vmctl console example
Connected to /dev/ttyp8 (speed 115200)

The escape sequence ~. is needed to leave the serial console. See the cu(1) man page for more info.
When using a vmctl serial console over SSH, the ~ (tilde) character must be escaped to prevent ssh(1) from dropping the connection. To exit a serial console over SSH, use ~~. instead.
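For example, when the console was attached from a remote ssh(1) session (here "vmhost" is a placeholder host name), pressing Enter and then typing ~~. detaches from the console without closing the SSH connection:

$ ssh vmhost
vmhost# vmctl console example
Connected to /dev/ttyp8 (speed 115200)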
The VM can be stopped using vmctl(8).
# vmctl stop example
stopping vm: requested to shutdown vm 1

Virtual machines can be started with or without a vm.conf(5) file in place. The following /etc/vm.conf example would replicate the above configuration:
vm "example" { memory 1G enable disk /home/user/disk.qcow2 local interface }Some configuration properties in vm.conf(5) can be reloaded by vmd(8) on the fly. Other changes, like adjusting the amount of RAM or disk space, require the VM to be restarted.
In the examples below, various IPv4 address ranges will be mentioned for different use cases:

10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16 are not globally routable.
100.64.0.0/10 is not globally routable either; it is the range from which vmd(8) assigns addresses to local interfaces.
Using vmctl(8)'s -L flag creates a local interface in the guest which will receive an address from vmd via DHCP. This essentially creates two interfaces: one for the host and the other for the VM.
The following lines in /etc/pf.conf will enable Network Address Translation and redirect DNS requests to the specified server:
match out on egress from 100.64.0.0/10 to any nat-to (egress)
pass in proto { udp tcp } from 100.64.0.0/10 to any port domain \
    rdr-to $dns_server port domain

Reload the pf ruleset and the VM(s) can now connect to the internet.
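Note that $dns_server is a pf macro that must be defined earlier in the ruleset, e.g. dns_server = "192.0.2.53" (a placeholder address; use the resolver your VMs should reach). After editing /etc/pf.conf, the ruleset can be reloaded with:

# pfctl -f /etc/pf.conf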
Create a vether0 interface that will have a private IPv4 address as defined above. In this example, we'll use the 10.0.0.0/8 subnet.
# echo 'inet 10.0.0.1 255.255.255.0' > /etc/hostname.vether0
# sh /etc/netstart vether0

Create the bridge0 interface with the vether0 interface as a bridge port:
# echo 'add vether0' > /etc/hostname.bridge0
# sh /etc/netstart bridge0

Ensure that NAT is set up properly if the guests on the virtual network need access beyond the physical machine. An adjusted NAT line in /etc/pf.conf might look like this:
match out on egress from vether0:network to any nat-to (egress)

The following lines in vm.conf(5) can be used to ensure that a virtual switch is defined:
switch "my_switch" { interface bridge0 } vm "my_vm" { ... interface { switch "my_switch" } }Inside the
my_vm
guest, it's now possible to assign
vio0
an address on the 10.0.0.0/24
network and set the default route to
10.0.0.1
.
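A minimal sketch of that guest-side setup, assuming an OpenBSD guest and the otherwise unused address 10.0.0.2, run inside the guest:

# echo 'inet 10.0.0.2 255.255.255.0' > /etc/hostname.vio0
# echo '10.0.0.1' > /etc/mygate
# sh /etc/netstart vio0
# route add default 10.0.0.1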
For convenience, you may wish to set up a DHCP server on vether0.
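A minimal sketch using the dhcpd(8) included in the base system, with a hypothetical lease range on the 10.0.0.0/24 subnet from above; add to /etc/dhcpd.conf:

subnet 10.0.0.0 netmask 255.255.255.0 {
    option routers 10.0.0.1;
    range 10.0.0.10 10.0.0.100;
}

Then restrict dhcpd to the vether0 interface and start it:

# rcctl enable dhcpd
# rcctl set dhcpd flags vether0
# rcctl start dhcpd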
Create the bridge0 interface with the host network interface as a bridge port. In this example, the host network interface is em0 - you should substitute the interface name that you wish to connect the VM to:
# echo 'add em0' > /etc/hostname.bridge0
# sh /etc/netstart bridge0

As done in the previous example, create or modify the vm.conf(5) file to ensure that a virtual switch is defined:
switch "my_switch" { interface bridge0 } vm "my_vm" { ... interface { switch "my_switch" } }The
my_vm
guest can now participate in the host network as if it
were physically connected.
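Inside the guest, vio0 can then be configured like any other machine on that LAN, for example by requesting a DHCP lease (a sketch assuming an OpenBSD guest):

# echo 'inet autoconf' > /etc/hostname.vio0
# sh /etc/netstart vio0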
Note: If the host interface (em0 in the above example) is also configured using DHCP, dhcpleased(8) running on that interface may block DHCP requests from reaching guest VMs. In this case, you should select a different host interface not using DHCP, terminate any dhcpleased(8) processes assigned to that interface before starting VMs, or use static IP addresses for the VMs.