Proxmox / LXC – Running Docker inside a container
Following up on Debian / Proxmox – Install Docker with Rancher and DockerUI webgui on a Debian / Proxmox Server, I thought it might actually make more sense to run Rancher and my Docker workloads inside an LXC container rather than on the host itself.
I went back to an old machine running Proxmox for containers, but I also wanted a platform to play with Docker, so pimping the Proxmox server seemed like the best solution.
The steps to get Docker running are easy, but since Proxmox offers the best GUI for LXC, I needed something similar for the Docker containers.
Note: This DOES NOT add Docker into the Proxmox GUI itself. I’m adding a separate web page for Docker, running in a container of its own.
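For reference, here is a minimal sketch of what it takes to allow Docker to run inside a Proxmox LXC container. The container ID (101) is an example, and the `--features` flags apply to current Proxmox releases; on older versions the commented `lxc.*` overrides achieve the same effect:

```shell
# Hypothetical container ID 101 -- adjust to your setup.
# Allow nesting so Docker can create its own namespaces inside the LXC guest:
pct set 101 --features nesting=1,keyctl=1

# On older Proxmox versions, append equivalent overrides to /etc/pve/lxc/101.conf:
#   lxc.apparmor.profile: unconfined
#   lxc.mount.auto: proc:rw sys:rw

# Then, inside the container, install Docker the usual Debian way:
apt-get update && apt-get install -y docker.io
```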
Inspired by the Xen Orchestra team (https://xen-orchestra.com/blog/migrate-from-xenserver-6-2-to-6-5/) I wanted to upgrade my hosts to the latest alpha for testing, but I didn’t want to mess around with burning a DVD and booting off it. This solution is not necessarily faster than booting off a local DVD – that depends on network speed and the NFS server – and the upgrade runs sequentially, host by host. However, I could kick it off, leave it alone for a while, and when I came back it was all done. NOTE: This is not a strict online upgrade. If you have HA configured, the upgrade will try to move the VMs to the available hosts before upgrading and rebooting; if not, several weird behaviors will occur. What you need, from http://xenserver.org/overview-xenserver-open-source-virtualization/prerelease.html: the XenServer 6.6 Alpha3 ISO (or latest) and the XenCenter Windows Management Console (the latest version belonging to the XenServer version to be […]
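Making the installer available over NFS can be sketched roughly as follows. Paths, the export options and the ISO filename are examples only, not the exact setup from the post:

```shell
# On the NFS server: unpack the installer ISO into an exported directory.
mkdir -p /export/xs66
mount -o loop XenServer-6.6-alpha3.iso /mnt
cp -a /mnt/. /export/xs66/
umount /mnt

# Example /etc/exports entry, then re-export:
#   /export/xs66  *(ro,no_root_squash)
exportfs -ra
```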
Update 02.06.15: added Tahoma2 fonts to make it look better. Update 05.12.15: Some issues with disconnects from the server seem to be resolved with Wine 1.8 RC1.staging; I have now been using it for more than a week without any issues. This bothered me for quite a while. As mentioned before, there is no native client GUI for XenServer on Linux. XOA is nice as an appliance, but if that appliance either doesn’t start or you need some feature that is not in there, you are stuck with either the command line or XenCenter on Windows. I spent some time getting it running under Wine, and here is the howto. You need PlayOnLinux for this. You can either install the maintainer version or download the latest and greatest version from their webpage and install it: apt-get install playonlinux Once you open PlayOnLinux you can, under Tools – Manage Wine Versions, install the […]
Update 02.03.2015: added (modified) CentOS / Red Hat: A successor to compcache is zram, which is fully integrated into the mainline Linux kernel and uses LZO compression. The idea behind it is to create swap devices made of chunks of RAM and to compress those chunks on the fly, increasing the effectively available space and ideally reducing the need to swap to slow disks. It uses a small extra amount of CPU; however, the reduced I/O should more than make up for this. This is primarily interesting for small-scale VPSes, netbooks or other low-memory devices. Virtualisation hosts should also benefit from compressed memory. Unfortunately the zram-config script is currently not part of the Debian and CentOS distributions. I will run some further tests and update here. In Ubuntu, from 12.04 onwards, the install script is included and it takes only a minute to set up zram. How to […]
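Where no zram-config script is available, a manual setup can be sketched like this. The 512 MB size is an example; the distribution scripts usually size the devices per CPU core as a fraction of total RAM:

```shell
# Minimal manual zram swap setup (one device, 512 MB of compressed swap).
modprobe zram num_devices=1
echo $((512 * 1024 * 1024)) > /sys/block/zram0/disksize
mkswap /dev/zram0
swapon -p 100 /dev/zram0   # higher priority than any disk-based swap
swapon -s                  # verify the device is active
```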
Scenario: a 2-node XenServer cluster with HA. In general, one node is assigned pool master, and that is the node XenCenter connects to. Now exactly that node goes down. It could be the other one, but you know it never is. I had configured HA and assigned which guests I want restarted when a node goes down. While the guests restarted fine on the other node, I was still unable to connect to the XenServer cluster using XenCenter, because XenCenter was trying to connect to the IP of Node00, which had just gone down. I could try to use the IP of the remaining pool member. But that would be too easy, and it resulted in: Server ‘xxx-pool’ is in a pool. To connect to a pool, you must connect to the pool master. Do you want to connect to the pool master […]
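A sketch of the usual way out, assuming the old master really stays down: promote the surviving member to pool master on its own console, then reconnect XenCenter to its IP.

```shell
# On the surviving pool member, promote it to pool master:
xe pool-emergency-transition-to-master

# Once the failed host comes back, point the slaves at the new master:
xe pool-recover-slaves
```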
Situation: A move of a VM failed and left me with the VDI (virtual disk) and the VM on the old “location” and a VDI on the designated new location (something like this). When you try to delete the failed new copy, the delete option is grayed out with the note “This Virtual Disk is active on VM Control Domain on VM. Deactivate the Virtual Disk before deleting.” First, identify the UUID of the VDI (I used XenCenter and changed the description to “delete” as an indicator that I picked the right VDI):
xe vdi-list
This will come back with all virtual disks and their UUIDs. Pick the one you have the issue with. Example:
uuid ( RO)             : bc67a8c9-5bf5-4eb0-b643-31d4f4f0de2f
name-label ( RW)       : XPVM-Spiceworks-Disk0
name-description ( RW) : delete
sr-uuid ( RO)          : cf2b964c-baf4-5283-7a46-5b3f1170a321
virtual-size ( RO)     : 21460156416
sharable ( RO)         : false
read-only ( RO)        : false
That’s my disk and […]
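The deactivate-then-delete part of the excerpt can be sketched with the standard xe commands; the VDI UUID below is the one from the example output, and `<vbd-uuid>` is a placeholder you take from the vbd-list output:

```shell
# VDI UUID from the example above; substitute your own.
VDI=bc67a8c9-5bf5-4eb0-b643-31d4f4f0de2f

# Find the VBD that attaches the VDI to the control domain:
xe vbd-list vdi-uuid=$VDI

# Unplug and destroy that VBD (uuid taken from the output above):
xe vbd-unplug uuid=<vbd-uuid>
xe vbd-destroy uuid=<vbd-uuid>

# The VDI is no longer "active" and can now be deleted:
xe vdi-destroy uuid=$VDI
```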
This is going to cover 2 error messages when adding a new / additional server to a XenServer pool. 1) This server’s hardware is incompatible with the master’s. 2) The CPU does not support masking of features. Details: CPU does not have FlexMigration. Issue 1) You want to add a new server to an existing pool, or want to merge 2 or more servers into one pool, and get: This server’s hardware is incompatible with the master’s. This error tells you that you most likely have different CPU hardware in the servers and that they cannot be merged into a pool. There is a detailed description of why and how at https://support.citrix.com/article/CTX127059 You can get the CPU features by running xe host-cpu-info See in the examples below how it turned out on my 2 boxes.
# xe host-cpu-info
cpu_count    : 4
socket_count : 1
vendor       : GenuineIntel
speed        : 2825.956
modelname    : Intel(R) Core(TM)2 Quad CPU Q9550 @ […]
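A rough sketch of the comparison and the masking step the Citrix article describes, for XenServer 6.x; the feature mask value is something you derive per CTX127059, so it is only a placeholder here:

```shell
# Run on each box and compare the feature lines to see what differs:
xe host-cpu-info | grep -i features

# If the CPUs support FlexMigration, mask the newer CPU's features down
# to the older one's feature set (mask value per CTX127059), then reboot:
xe host-set-cpu-features features=<mask-from-the-article>

# After the reboot, retry the join from the new server:
xe pool-join master-address=<pool-master-ip> master-username=root master-password=<password>
```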
Doing some cleanup, I had to rename the node name of an existing standalone Proxmox server. It wasn’t entirely straightforward, because Proxmox detected the old node name and placed the vz data underneath it. So here is a simple way to rename a Proxmox node. First, the normal steps:
vi /etc/hosts
vi /etc/hostname
Replace the entries with your intended hostname. You now need to restart pve in order to register the new hostname in the system:
service pve-cluster restart
This should create a new host entry in Proxmox and a new folder under /etc/pve/nodes/[newhostname]. You now need to move your old OpenVZ data from the old folder to the new one:
mv /etc/pve/nodes/[oldhostname]/openvz/* /etc/pve/nodes/[newhostname]/openvz/
Example: mv /etc/pve/nodes/proxmox-portable/openvz/* /etc/pve/nodes/proxmox-Xen/openvz/
I would advise restarting the Proxmox host in order to finish the renaming.
A new virtualization product, new issues. I started to look into XenServer: http://xenserver.org/ Please note: do not go for the new Creedence (6.4 beta), because there is no upgrade path afterwards. It actually looks and performs promisingly. However, one downside is that the only native Xen management tool, XenCenter, is solely for Windows. Seriously… But there is hope: another project named XOA, or Xen Orchestra, https://xen-orchestra.com/ It’s an appliance to be imported as a VM guest, and it offers a nice and usable web GUI. Now there is no direct way in XenServer to make a VM autostart when booting a single host or a Xen cluster, but I’d like to have the appliance available when booting the XenServer. This is based on https://support.citrix.com/article/CTX133910, which also tells you why it may interfere with an HA setup. Here we go. To make a guest VM autostart in XenServer, do the following: Log in to the XenServer host as root. Firstly […]
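The autostart mechanism from CTX133910 boils down to two `other-config` flags, at pool level and per VM. The UUIDs below are placeholders; get them via xe pool-list and xe vm-list:

```shell
# Enable auto power-on at the pool level:
xe pool-param-set uuid=<pool-uuid> other-config:auto_poweron=true

# Then flag each VM that should start with the host (e.g. the XOA appliance):
xe vm-param-set uuid=<vm-uuid> other-config:auto_poweron=true
```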