This repository has been archived by the owner on Sep 18, 2020. It is now read-only.

Add support for additional disks, big nodes, update channel and Parallels #309

Open · wants to merge 5 commits into master

Conversation

@dab-q commented Oct 17, 2017

Sometimes it is desirable to give nodes additional disks,
more memory, and more CPUs. In a constrained environment
with a multi-node cluster, there may not be enough
resources to assign larger memory/CPU allocations to all
the nodes. These changes support two node configurations,
as well as configuring all nodes with additional virtual disks.

Create big nodes

  • In addition to the normal nodes, you can
    add nodes with a larger memory/CPU
    configuration (see the sketch below).

    $num_big_instances = 1
    $vm_big_memory = 8192
    $vm_big_cpus = 2
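
    A minimal sketch of how a Vagrantfile loop might consume these
    settings, reusing coreos-vagrant's existing $num_instances,
    $vm_memory, and $vm_cpus variables; the loop structure and the
    choice to place the big nodes after the normal ones are
    assumptions, not the PR's actual diff:

      # Sketch: the last $num_big_instances nodes get the larger sizing.
      (1..($num_instances + $num_big_instances)).each do |i|
        config.vm.define vm_name = "core-%02d" % i do |cfg|
          big = i > $num_instances   # assumption: big nodes come last
          cfg.vm.provider :virtualbox do |vb|
            vb.memory = big ? $vm_big_memory : $vm_memory
            vb.cpus   = big ? $vm_big_cpus : $vm_cpus
          end
        end
      end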

Additional disks

  • Each node can be configured with additional disks
    (see the sketch below).
    $num_data_disks = 3
    $data_disk_size = 10 # GBytes
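
    One way the disk attachment could look inside each node's
    config.vm.define block; the .virtualbox/ directory matches the
    .gitignore entry added below, but the controller name and disk
    file naming are assumptions:

      # Sketch: create and attach $num_data_disks disks of
      # $data_disk_size GB to this node.
      cfg.vm.provider :virtualbox do |vb|
        (1..$num_data_disks).each do |d|
          disk = File.join(".virtualbox", "#{vm_name}-disk#{d}.vdi")
          unless File.exist?(disk)
            vb.customize ["createhd", "--filename", disk,
                          "--size", $data_disk_size * 1024]  # VBoxManage sizes in MB
          end
          vb.customize ["storageattach", :id,
                        "--storagecontroller", "SATA Controller",  # name varies by box
                        "--port", d, "--device", 0,
                        "--type", "hdd", "--medium", disk]
        end
      end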

Specify the CoreOS update channel:
$update_channel = "alpha"
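
For reference, this is roughly how the stock coreos-vagrant
Vagrantfile consumes $update_channel; the exact box URL template
below is an assumption and may differ between versions:

    # "alpha", "beta", or "stable" selects the CoreOS release stream.
    config.vm.box = "coreos-%s" % $update_channel
    config.vm.box_url = "https://storage.googleapis.com/%s.release.core-os.net/amd64-usr/current/coreos_production_vagrant.json" % $update_channel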

Add .virtualbox/ to .gitignore

Add Parallels support

  • Set the memory size and number of CPUs for Parallels,
    as well as setting the download path.

  • Change the Vagrantfile's config.vm.provider order
    so that VirtualBox is first, which makes it
    the default provider (assuming it works), and
    util.rb reflects that assumption (see the sketch
    below). In general, setting VAGRANT_DEFAULT_PROVIDER
    is the best way to be deterministic about which
    provider is used.
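
A sketch of the provider ordering described above; the block
contents are illustrative, though vagrant-parallels does accept
memory and cpus settings:

    # VirtualBox first, so it becomes the default provider
    # when VAGRANT_DEFAULT_PROVIDER is unset.
    config.vm.provider :virtualbox do |vb|
      vb.memory = $vm_memory
      vb.cpus = $vm_cpus
    end

    config.vm.provider :parallels do |prl|
      prl.memory = $vm_memory
      prl.cpus = $vm_cpus
    end

To pin a provider explicitly regardless of ordering:

    VAGRANT_DEFAULT_PROVIDER=parallels vagrant up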

@dab-q (Author) commented Oct 17, 2017

We are using coreos-vagrant as a developer environment for a https://github.com/rook/rook installation, and we need additional disks on each VM for Rook/Ceph to consume. We are also running some additional software that has larger memory and CPU requirements, but we only need one node in the cluster to support it, so having several 1CPU/2GB nodes and one 2CPU/6GB node works well in a resource-constrained environment like a MacBook Pro laptop.

David Borman added 4 commits December 5, 2017 10:56
We've seen some instances of the kernel rebooting due
to a kernel paging request in the network RX code, coming
from the virtio_net module.  Switch the second NIC, which
is the private network, to be an emulation of an Intel NIC,
avoiding the virtio_net code.

Only the second NIC is changed; the first NIC is left as
a virtio_net NIC.  That NIC is only used for host<->VM
communications, and changing it to an Intel NIC messes up
the NIC configuration, while leaving it as a virtio_net driver
keeps things working.  (The NIC order was getting changed,
causing CoreOS to configure the wrong interface for the
private network.)
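
In VBoxManage terms, the change described here (reverted by the
commit below) would look something like the following; "82540EM"
is one of VirtualBox's emulated Intel NIC models, and the exact
model the commit used is an assumption:

    cfg.vm.provider :virtualbox do |vb|
      # NIC 1 (host<->VM) stays virtio-net; NIC 2 (the private
      # network) is switched to an emulated Intel adapter to
      # avoid the virtio_net RX paging crash.
      vb.customize ["modifyvm", :id, "--nictype2", "82540EM"]
    end
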
This reverts commit 6181371.

This change has been causing problems and unintended
consequences, such as affecting CoreOS NIC configuration
when running with Parallels, even though it should have
only changed things with VirtualBox.  And while it
seems to have fixed the stability issues when running
with Linux/VirtualBox, it is not working the same
with macOS/VirtualBox.