Enable Snapshot with KVM on CloudStack

How to enable Snapshot with KVM on CloudStack

If you have chosen KVM as your hypervisor in CloudStack then you probably know that VM snapshots are not supported on KVM, only on VMware and XenServer. This refers to snapshots that capture the state of the machine, much like a VMware snapshot. What is supported is the volume snapshot, which is a snapshot of an individual volume: a copy of the volume is backed up to secondary storage. It has some limitations, of course; you cannot simply roll back to this backup. But before you can use volume snapshots you will have to do a couple of things. Log on to CloudStack, and in Global Settings set kvm.snapshot.enabled to true, then restart the management server:

kvm.snapshot.enabled = true

This alone won't do it. Because I am running CentOS 6.6 I had to install an older version of qemu-img, qemu-img-0.12.1.2-2.355.el6_4_4.1.x86_64.rpm. I found a great article at http://www.nux.ro/archive/2014/01/Taking_KVM_volume_snapshots_with_Cloudstack_4_2_on_CentOS_6_5.html

First I had to install wget:

yum install wget

Then:

mkdir cloud-qemu-img
cd cloud-qemu-img
wget http://vault.centos.org/6.4/updates/x86_64/Packages/qemu-img-0.12.1.2-2.355.el6_4_4.1.x86_64.rpm
rpm2cpio qemu-img-0.12.1.2-2.355.el6_4_4.1.x86_64.rpm | cpio -idmv
cp ./usr/bin/qemu-img /usr/bin/cloud-qemu-img

Once you have copied the older version of qemu-img down to all your KVM nodes, you might want to tweak a few of the snapshot settings available in the CloudStack console. In Global Settings, throttle the number of concurrent volume snapshots on each host with concurrent.snapshots.threshold.perhost. You can also set snapshot timeout periods and the maximum number of snapshots per account or per project, or limit snapshots by the hour, day, week or month.
max.account.snapshots
max.project.snapshots
snapshot.max.hourly
snapshot.max.daily
snapshot.max.weekly
snapshot.max.monthly

Limiting snapshots will help you manage your secondary storage usage and the performance of your storage, as running multiple snapshots concurrently can cause performance problems on your storage arrays. You might also want to set job.expire.minutes so jobs won't stay queued and will error out instead.

References: http://www.nux.ro/archive/2014/01/Taking_KVM_volume_snapshots_with_Cloudstack_4_2_on_CentOS_6_5.html
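If you prefer to script these changes rather than click through Global Settings, here is a minimal sketch. It only prints the cloudmonkey "update configuration" calls so you can review them first; the values shown are placeholders, not recommendations.

```shell
#!/bin/sh
# Sketch only: prints cloudmonkey "update configuration" calls for the
# snapshot-related global settings so you can review them before running
# anything. The values below are placeholders, not recommendations.
emit_snapshot_config_cmds() {
    while read -r name value; do
        echo "update configuration name=${name} value=${value}"
    done <<'EOF'
kvm.snapshot.enabled true
concurrent.snapshots.threshold.perhost 2
snapshot.max.hourly 8
snapshot.max.daily 8
job.expire.minutes 120
EOF
}

emit_snapshot_config_cmds
```

Once reviewed, the printed lines can be fed to cloudmonkey (the CloudStack CLI) or applied by hand in Global Settings.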

Installing Open vSwitch on CentOS for CloudStack


Installing Open vSwitch on CentOS 6.6

I would like to use Open vSwitch on my KVM nodes in my CloudStack deployment. I will download and build the Open vSwitch RPMs on my deployment server, which is just a CentOS VM I use for deploying packages. Create a directory to download the Open vSwitch tarball to (you can find the most recent OVS at http://openvswitch.org/download/):

cd ~
mkdir -p rpmbuild/SOURCES
wget http://openvswitch.org/releases/openvswitch-2.3.1.tar.gz
tar xvfz openvswitch-2.3.1.tar.gz
cd openvswitch-2.3.1/
cp ../openvswitch-2.3.1.tar.gz ~/rpmbuild/SOURCES/
cp rhel/openvswitch-kmod.files ~/rpmbuild/SOURCES/

Then build the RPMs. This can take a few minutes:

rpmbuild -bb rhel/openvswitch.spec
rpmbuild -bb rhel/openvswitch-kmod-rhel6.spec

Now the RPMs are sitting on my deployment server ready to be copied to my KVM nodes. Copy the Open vSwitch RPMs to the KVM node using scp:

scp -r root@sgdeploy:/root/rpmbuild/RPMS/x86_64 .

Change directory to x86_64/:

cd x86_64/

Then install kmod-openvswitch-2.3.1-1.el6.x86_64.rpm:

yum -y localinstall kmod-openvswitch-2.3.1-1.el6.x86_64.rpm

Install openvswitch-2.3.1-1.x86_64.rpm:

yum -y localinstall openvswitch-2.3.1-1.x86_64.rpm

Once both RPMs are installed, reboot the host. Then verify that Open vSwitch is installed by running ovs-vsctl -V and ovs-vsctl show.
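If you have several KVM nodes, the copy and install steps repeat per host. Here is a dry-run sketch that only prints the commands for each node; the node names are made-up placeholders, and you would swap echo for real scp/ssh invocations once you are happy with the output.

```shell
#!/bin/sh
# Dry-run sketch: prints, for each KVM node, the copy and install commands
# from the steps above rather than executing them. The node names are
# hypothetical placeholders; substitute your own, then run the printed
# commands (or replace echo with the real scp/ssh calls).
KVM_NODES="kvmnode1 kvmnode2 kvmnode3"   # assumed hostnames
RPM_DIR="/root/rpmbuild/RPMS/x86_64"     # RPM location on the deploy server

print_node_commands() {
    for node in $KVM_NODES; do
        echo "scp -r root@sgdeploy:${RPM_DIR} root@${node}:/root/"
        echo "ssh root@${node} 'yum -y localinstall /root/x86_64/*.rpm'"
    done
}

print_node_commands
```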
Once Open vSwitch is verified as installed, we need to configure the network interfaces for use with CloudStack (this KVM node has four physical network adapters but I am only configuring two):

rm -f /etc/sysconfig/network-scripts/ifcfg-eth0
echo "DEVICE=eth0" >> /etc/sysconfig/network-scripts/ifcfg-eth0
echo "BOOTPROTO=none" >> /etc/sysconfig/network-scripts/ifcfg-eth0
echo "IPV6INIT=no" >> /etc/sysconfig/network-scripts/ifcfg-eth0
echo "NM_CONTROLLED=no" >> /etc/sysconfig/network-scripts/ifcfg-eth0
echo "ONBOOT=yes" >> /etc/sysconfig/network-scripts/ifcfg-eth0
echo "TYPE=OVSPort" >> /etc/sysconfig/network-scripts/ifcfg-eth0
echo "DEVICETYPE=ovs" >> /etc/sysconfig/network-scripts/ifcfg-eth0
echo "OVS_BRIDGE=cloudbr0" >> /etc/sysconfig/network-scripts/ifcfg-eth0

rm -f /etc/sysconfig/network-scripts/ifcfg-eth1
echo "DEVICE=eth1" >> /etc/sysconfig/network-scripts/ifcfg-eth1
echo "BOOTPROTO=none" >> /etc/sysconfig/network-scripts/ifcfg-eth1
echo "IPV6INIT=no" >> /etc/sysconfig/network-scripts/ifcfg-eth1
echo "NM_CONTROLLED=no" >> /etc/sysconfig/network-scripts/ifcfg-eth1
echo "ONBOOT=yes" >> /etc/sysconfig/network-scripts/ifcfg-eth1
echo "TYPE=OVSPort" >> /etc/sysconfig/network-scripts/ifcfg-eth1
echo "DEVICETYPE=ovs" >> /etc/sysconfig/network-scripts/ifcfg-eth1
echo "OVS_BRIDGE=cloudbr1" >> /etc/sysconfig/network-scripts/ifcfg-eth1

rm -f /etc/sysconfig/network-scripts/ifcfg-cloudbr0
echo "DEVICE=cloudbr0" >> /etc/sysconfig/network-scripts/ifcfg-cloudbr0
echo "ONBOOT=yes" >> /etc/sysconfig/network-scripts/ifcfg-cloudbr0
echo "DEVICETYPE=ovs" >> /etc/sysconfig/network-scripts/ifcfg-cloudbr0
echo "TYPE=OVSBridge" >> /etc/sysconfig/network-scripts/ifcfg-cloudbr0
echo "BOOTPROTO=static" >> /etc/sysconfig/network-scripts/ifcfg-cloudbr0
echo "IPADDR=10.20.28.181" >> /etc/sysconfig/network-scripts/ifcfg-cloudbr0
echo "GATEWAY=10.20.28.254" >> /etc/sysconfig/network-scripts/ifcfg-cloudbr0
echo "NETMASK=255.255.255.0" >> /etc/sysconfig/network-scripts/ifcfg-cloudbr0
echo "DNS1=10.20.16.15" >> /etc/sysconfig/network-scripts/ifcfg-cloudbr0
echo "DNS2=10.20.16.16" >> /etc/sysconfig/network-scripts/ifcfg-cloudbr0
echo "HOTPLUG=no" >> /etc/sysconfig/network-scripts/ifcfg-cloudbr0

rm -f /etc/sysconfig/network-scripts/ifcfg-cloudbr1
echo "DEVICE=cloudbr1" >> /etc/sysconfig/network-scripts/ifcfg-cloudbr1
echo "ONBOOT=yes" >> /etc/sysconfig/network-scripts/ifcfg-cloudbr1
echo "DEVICETYPE=ovs" >> /etc/sysconfig/network-scripts/ifcfg-cloudbr1
echo "TYPE=OVSBridge" >> /etc/sysconfig/network-scripts/ifcfg-cloudbr1
echo "BOOTPROTO=none" >> /etc/sysconfig/network-scripts/ifcfg-cloudbr1
echo "HOTPLUG=no" >> /etc/sysconfig/network-scripts/ifcfg-cloudbr1

Blacklist the Linux bridge module and point the CloudStack agent at OVS:

echo 'blacklist bridge' >> /etc/modprobe.d/blacklist.conf
echo "network.bridge.type=openvswitch" >> /etc/cloudstack/agent/agent.properties
echo "libvirt.vif.driver=com.cloud.hypervisor.kvm.resource.OvsVifDriver" >> /etc/cloudstack/agent/agent.properties

References: https://cwiki.apache.org/confluence/display/CLOUDSTACK/KVM+with+OpenVSwitch
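The echo chains above work, but they are hard to audit. Here is a heredoc-based sketch of the same OVS port files; SYSCONF_DIR is a parameter I have added so you can write to a scratch directory first, and everything else mirrors the lines above.

```shell
#!/bin/sh
# Heredoc variant of the ifcfg-eth0/ifcfg-eth1 generation above. SYSCONF_DIR
# is an added parameter; point it at a scratch directory to preview the
# files before touching /etc/sysconfig/network-scripts.
SYSCONF_DIR="${SYSCONF_DIR:-/etc/sysconfig/network-scripts}"

write_ovs_port() {
    dev="$1" bridge="$2"
    cat > "${SYSCONF_DIR}/ifcfg-${dev}" <<EOF
DEVICE=${dev}
BOOTPROTO=none
IPV6INIT=no
NM_CONTROLLED=no
ONBOOT=yes
TYPE=OVSPort
DEVICETYPE=ovs
OVS_BRIDGE=${bridge}
EOF
}

# write_ovs_port eth0 cloudbr0
# write_ovs_port eth1 cloudbr1
```

The bridge files (ifcfg-cloudbr0, ifcfg-cloudbr1) still need their own definitions as shown above.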

Cloning a CentOS VM requires network modification


When cloning a CentOS VM the networking will need to be reconfigured. First of all, you will probably need to edit the IP settings in ifcfg-eth0:

vi /etc/sysconfig/network-scripts/ifcfg-eth0

But even after this you won't be able to ping anything on the network. Since we have made a change to the network, restart the network service. Restarting the network service will return the following error:

"Bringing up interface eth0: Device eth0 does not seem to be present, delaying initialization."

At this point we need to do two things. First, check the MAC address of the CentOS VM by editing the settings of the VM in VMware vCenter: highlight the network adapter and take note of the MAC address. This is the MAC address VMware has allocated to the VM. Because this CentOS VM was cloned, ifcfg-eth0 still has the MAC address VMware assigned prior to the clone. So edit ifcfg-eth0 again and change the HWADDR entry to the actual MAC address displayed in the vCenter VM properties.

Next, modify the following file:

vi /etc/udev/rules.d/70-persistent-net.rules

You will notice there are two PCI devices; both are vmxnet3 adapters, but we only have one vmxnet3 adapter attached in the VMware VM properties. Notice they have different MAC addresses, and one is named "eth0" while the other is "eth1". The second vmxnet3 device has the correct MAC address but is named "eth1". So modify this file by removing the first device and changing the remaining device to "eth0". Now reboot the VM and your networking should be good.
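If you clone CentOS VMs often, the udev cleanup can be scripted. Here is a sketch, assuming a standard 70-persistent-net.rules layout: the function keeps only the rule matching the MAC vCenter reports and renames it to eth0. The MAC in the usage comment is a placeholder.

```shell
#!/bin/sh
# Sketch: clean up 70-persistent-net.rules after a clone. Keeps only the
# rule whose MAC matches what vCenter reports for the adapter, renames it
# to eth0, and drops the stale pre-clone entry (and any comment lines).
fix_persistent_net_rules() {
    rules_file="$1" real_mac="$2"
    grep -iF "ATTR{address}==\"${real_mac}\"" "$rules_file" \
        | sed 's/NAME="eth[0-9]*"/NAME="eth0"/' > "${rules_file}.new"
    mv "${rules_file}.new" "$rules_file"
}

# The MAC below is a placeholder; use the one shown in vCenter:
# fix_persistent_net_rules /etc/udev/rules.d/70-persistent-net.rules 00:50:56:aa:bb:cc
```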

Mobile-first Cloud-first

Mobile-first Cloud-first

Microsoft Worldwide Partner Conference – Washington DC July 2014

CEO Satya Nadella's keynote: mobile and tablet is Surface 3, and cloud is Azure.

The overall theme of the Microsoft WPC was "Mobile-first, Cloud-first". The message is a combination of Microsoft's acquisition of Nokia and the push for Azure and O365 consumption. The acquisition of Nokia, plus the launch of the Surface 3, puts them in a good position to take some market share of the mobility market. A smart move by Microsoft, as we speed towards an app-centric world where business is done on the mobile device. Cloud-first is a pure market-share grab of the public cloud market. The message is to drive the deployment, usage and consumption of Office 365, Azure and CRM. I get the feeling that a lot of Azure credits have been sold but aren't being used. Microsoft have a hybrid cloud strategy, leveraging the existing Hyper-V install base and driving it to Azure. Microsoft have a great, well-rounded offering which will appeal to the enterprise with an existing EA agreement. Microsoft are making it easy to see cost benefits with the licensing structures they have for Hyper-V too. I think we are well past comparing hypervisors now, so although the Microsoft Hyper-V hypervisor may not be as good as VMware vSphere, for example, my feeling is that it's good enough. Microsoft have made it extremely easy to move workloads from your on-prem Hyper-V to Azure. A highlight of the conference was Kevin Turner, who delivered one of the best keynotes, discussing the mega trends and how Microsoft plays in these areas: cloud (Azure), mobility (Surface), social (Yammer), big data and BI (SQL). I came away from the Microsoft WPC thinking that Microsoft was taking over the world again. They are so dominant that it is hard to bet against them.
Whenever their competitors seem to have the upper hand, Microsoft throw money at R&D or buy capability, and a year later they are right back there again! Closer to home, there are definitely huge opportunities in cloud and mobility; the APAC joint opportunity for Azure cloud was put at 8.6bn. With Azure coming to Australia by the end of the year, this should see the end of the data sovereignty barriers. It wasn't clear if Office 365 would be in Australia any time soon; it's currently delivered out of Singapore, but that doesn't seem to bother anyone anyway.


Square Peg in a Round Hole


Are Windows workloads suitable for AWS?

With so much buzz about moving workloads to the public cloud to save money and improve scalability, I think we have forgotten something: the public cloud is not the same thing as a private cloud. I recently attended AWS training, Architecting for High Availability, and the instructor said something that made me stop and think. He said that hosts will fail and bring down your instances, as a matter of fact, and that we should be architecting to allow for this type of outage. I've been working with VMware for so long that I don't need to architect my instances or VMs for host failure. With HA turned on I can be fairly certain that if a host does fail, the VMs will power up on another host and service will resume. And VMware vSphere is a mature product; hosts only fail when there is a hardware problem. So this raises the question: can I, for example, move my Windows file servers to AWS with confidence? What happens when a host fails, my file server instance goes down, and I have to restore from S3? Sure, I could add redundancy at the application level using Microsoft clustering, but that just adds complexity and cost. There are some cases where Windows workloads can fit AWS, for example an RDS or Citrix farm, as there is some application-layer redundancy. I guess there are solutions to replace file servers, but in the rush to save money and move workloads to the public cloud we are just lifting and shifting. This doesn't take advantage of the public cloud's capabilities and is possibly costing us more for an inferior product. Public cloud makes sense for applications written for the public cloud: applications architected with the "everything fails all the time" mentality. But is it a better solution for our existing Windows workloads? Maybe we should delineate the two by just maintaining the old and nurturing the new.



Install CloudStack 4.5.1

Apache CloudStack 4.5.1

Rather than upgrade, I have decided to rebuild my CloudStack to version 4.5.1 and thought I would document the installation process.
The Physical CloudStack Environment looks like this:
3 x KVM nodes
1 x Management node
1 x NFS node
1 x Deployment node

CloudStack physical diagram
And the Logical CloudStack Environment looks like this:
CloudStack logical diagram

CloudStack management: http://stack.cloud-mate.org/client

Installation on all nodes

The first seven steps need to be completed on all nodes. This is a fresh rebuild of all components, step by step:

1. Install CentOS 6.5 Minimal. I like to set the hostname and IP address during the installation; this way I can confirm the network is working by selecting the "Connect automatically" option. With a terminal ping I can see the network come online when I hit the apply button.

2. Update CentOS:
# yum -y update

3. Confirm networking is working, then:
# chkconfig network on
# service network start

4. Set the hostname and configure name resolution between all hosts:
# hostname --fqdn
then add entries to /etc/hosts

5. Set SELinux to be permissive:
# setenforce 0
and modify /etc/selinux/config:
SELINUX=permissive
SELINUXTYPE=targeted

6. Install NTP:
# yum -y install ntp
# chkconfig ntpd on
# service ntpd start

7. Configure the CloudStack repository by creating /etc/yum.repos.d/cloudstack.repo:
[cloudstack]
name=cloudstack
baseurl=http://cloudstack.apt-get.eu/rhel/4.5/
enabled=1
gpgcheck=0
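Step 7 lends itself to a heredoc. Here is a small sketch that writes the repo file; REPO_DIR is a parameter I have added so you can preview the output in a scratch directory before writing to the real location.

```shell
#!/bin/sh
# Sketch of step 7: writes cloudstack.repo via a heredoc. REPO_DIR is an
# added parameter; point it at a scratch directory to preview the file
# before writing to /etc/yum.repos.d.
REPO_DIR="${REPO_DIR:-/etc/yum.repos.d}"

write_cloudstack_repo() {
    cat > "${REPO_DIR}/cloudstack.repo" <<'EOF'
[cloudstack]
name=cloudstack
baseurl=http://cloudstack.apt-get.eu/rhel/4.5/
enabled=1
gpgcheck=0
EOF
}

# write_cloudstack_repo
```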

Installation of Management Server only

8. Install MySQL:
# yum -y install mysql-server

9. Modify /etc/my.cnf, adding this to the [mysqld] section:
innodb_rollback_on_timeout=1
innodb_lock_wait_timeout=600
max_connections=350
log-bin=mysql-bin
binlog-format = 'ROW'
Then:
# service mysqld start
# chkconfig mysqld on

10. Install the management server:
# yum -y install cloudstack-management
# cloudstack-setup-databases cloud:password@localhost --deploy-as=root
# cloudstack-setup-management

11. Mount secondary storage. Because secondary storage is on another server, we have to mount it now:
mkdir -p /mnt/secondary
mount -t nfs ip address:/mnt/sata/sata_nfs/secondary /mnt/secondary

12. Download and install the system template by typing:
/usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt \
 -m /mnt/secondary \
 -u http://cloudstack.apt-get.eu/systemvm/4.5/systemvm64template-4.5-kvm.qcow2.bz2 \
 -h kvm -F
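A note on step 9: blindly appending the settings to the end of my.cnf can land them under the wrong section, since [mysqld_safe] follows [mysqld] in the stock CentOS 6 file. Here is a sketch that inserts them directly after the [mysqld] header instead, and only once; MYCNF is a parameter I have added so you can rehearse on a copy of the file first.

```shell
#!/bin/sh
# Sketch of step 9: inserts the settings directly after the [mysqld] header
# (a plain append could land them under [mysqld_safe] instead), and skips
# the edit if it has already been made. MYCNF is an added parameter so you
# can rehearse on a copy of the file.
MYCNF="${MYCNF:-/etc/my.cnf}"

append_cloudstack_mysql_settings() {
    grep -q 'innodb_rollback_on_timeout' "$MYCNF" && return 0
    awk '{ print }
         /^\[mysqld\]/ {
             print "innodb_rollback_on_timeout=1"
             print "innodb_lock_wait_timeout=600"
             print "max_connections=350"
             print "log-bin=mysql-bin"
             print "binlog-format = \047ROW\047"
         }' "$MYCNF" > "${MYCNF}.new" && mv "${MYCNF}.new" "$MYCNF"
}
```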

Installation of KVM nodes Only

13. Install the CloudStack agent:
# yum -y install cloudstack-agent

14. KVM configuration. Modify /etc/libvirt/qemu.conf by uncommenting:
vnc_listen=0.0.0.0
Modify /etc/libvirt/libvirtd.conf:
listen_tls = 0
listen_tcp = 1
tcp_port = "16509"
mdns_adv = 0
auth_tcp = "none"
Modify /etc/sysconfig/libvirtd by uncommenting:
#LIBVIRTD_ARGS="--listen"
Then:
# service libvirtd restart

15. Install and configure Open vSwitch: http://cloud-mate.org/2015/06/installing-open-vswitch-centos-cloudstack/

And that completes the installation. Now on to the CloudStack GUI to configure a zone: http://ipaddress:8080/client/

Reference: http://cloudstack-installation.readthedocs.org/en/4.5/qig.html
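The libvirtd.conf edits in step 14 are the same pattern repeated: replace the setting if present (even commented out), otherwise append it. Here is a sketch of a helper for that; LIBVIRTD_CONF is a parameter I have added so you can try it on a copy of the file first.

```shell
#!/bin/sh
# Sketch of step 14: sets a key in libvirtd.conf, replacing an existing
# (possibly commented-out) line or appending the setting if it is absent.
# LIBVIRTD_CONF is an added parameter so you can try this on a copy first.
LIBVIRTD_CONF="${LIBVIRTD_CONF:-/etc/libvirt/libvirtd.conf}"

set_libvirt_opt() {
    key="$1" value="$2"
    if grep -q "^[# ]*${key}[ ]*=" "$LIBVIRTD_CONF"; then
        sed -i "s|^[# ]*${key}[ ]*=.*|${key} = ${value}|" "$LIBVIRTD_CONF"
    else
        echo "${key} = ${value}" >> "$LIBVIRTD_CONF"
    fi
}

# set_libvirt_opt listen_tls 0
# set_libvirt_opt listen_tcp 1
# set_libvirt_opt tcp_port '"16509"'
# set_libvirt_opt mdns_adv 0
# set_libvirt_opt auth_tcp '"none"'
```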
