A hybrid cloud is a composition of at least one private cloud and at least one public cloud. The private cloud can be maintained either by the user company or by a private cloud hosting provider. The public cloud can be any service subscribed from any of the public cloud vendors, such as Amazon Web Services, Rackspace, etc. One common use case is keeping the application servers in-house while using storage in the public cloud for archived data. Such a hybrid cloud approach offers immediate provisioning and rapid scalability on an as-needed basis. Organizations may host critical applications on private clouds and applications with fewer security concerns on the public cloud. A related term is cloud bursting: the organization uses its own computing infrastructure for normal usage but accesses the public cloud for high or peak load requirements. This ensures that a sudden increase in computing demand is handled gracefully.
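To make the bursting idea concrete, here is a minimal Python sketch of the control loop an organization might run. The monitoring and provisioning functions (get_cpu_utilization, provision_public_instance, decommission_public_instance) are hypothetical placeholders for whatever tooling is actually in use, and the thresholds are illustrative, not recommendations.

# Minimal cloud-bursting sketch. The three helper functions below are
# hypothetical placeholders, not a real monitoring or provisioning API.
import time

BURST_THRESHOLD = 0.80    # burst to the public cloud above 80% utilization
RELEASE_THRESHOLD = 0.40  # release burst capacity below 40% utilization

def get_cpu_utilization() -> float:
    """Return the average utilization (0.0 to 1.0) of the private cloud."""
    raise NotImplementedError("plug in your monitoring system here")

def provision_public_instance() -> str:
    """Start one instance in the public cloud and return its ID."""
    raise NotImplementedError("plug in your public cloud client here")

def decommission_public_instance(instance_id: str) -> None:
    """Terminate a previously provisioned public-cloud instance."""
    raise NotImplementedError("plug in your public cloud client here")

def burst_controller(poll_seconds: int = 60) -> None:
    burst_instances = []
    while True:
        load = get_cpu_utilization()
        if load > BURST_THRESHOLD:
            # Private capacity is saturated: add a public-cloud instance.
            burst_instances.append(provision_public_instance())
        elif load < RELEASE_THRESHOLD and burst_instances:
            # Peak has passed: hand back the most recently added instance.
            decommission_public_instance(burst_instances.pop())
        time.sleep(poll_seconds)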
Cloud providers now support private clouds, for example the Virtual Private Cloud offering from Amazon Web Services. There is also support for establishing VPN connectivity from the company network to the private cloud. This makes it possible to build a hybrid cloud either within a single provider or across providers. The hybrid cloud is arguably the best of the three models, as it brings together the comfort level of a private cloud and the flexibility and versatility of the public cloud. The figure on the left differentiates the public, private and hybrid clouds.

Gartner and IDC surveys predict the following to happen in the cloud in the near future.

  • “80% of new commercial enterprise apps will be deployed on cloud platforms”.
  • “Amazon Web Services will exceed $1 billion in cloud services business in 2012 with Google’s Enterprise business to follow within 18 months”(IDC).
  • “By 2016, 40 percent of enterprises will make proof of independent security testing a precondition for using any type of cloud service” (Gartner).

  • “At year-end 2016, more than 50 percent of Global 1000 companies will have stored customer-sensitive data in the public cloud” (Gartner).

Under pressure to reduce costs and operate more efficiently, more than 20 percent of organizations are already selectively storing customer-sensitive data in a hybrid cloud environment, Gartner says.

Some interesting predictions by Forrester follow:

  • Multicloud becomes the norm: Companies will increasingly have to address working with several different cloud solutions, often from different providers.
  • The cloud market will grow beyond $60 billion: The cloud market, including the private, virtual private and public cloud markets, will reach about $61 billion by the end of the year.
  • Private clouds will go beyond virtualization: With an increasing understanding of cloud computing, companies will shift from technical virtualization projects to the change management aspects required for flexible business models between IT and the line of business.

OpenNebula

OpenNebula is an open-source tool for data-center virtualization. It helps to build any type of cloud (private, public or hybrid) for data-center management. The tool includes features for integration, management, scalability, security and accounting of data centers. Its efficient core is developed in C++, with a highly scalable database back end that supports MySQL and SQLite.

OpenNebula is the result of many years of research and development in efficient and scalable management of virtual machines on large-scale distributed infrastructures. It was first established as a research project in 2005 by the Distributed Systems Architecture Research Group at the Complutense University of Madrid, and it is sponsored by C12G Labs (a numeronym for Cloud Computing). The initial release of OpenNebula was in March 2008. The stable OpenNebula 2.2 release followed in March 2011 with the new Sunstone GUI. Seven months later, in October 2011, the project released OpenNebula 3.0, which is the latest stable version. OpenNebula is developed and nourished by the OpenNebula Community.

OpenNebula is a platform for managing a pool of virtual resources. You can create virtual machines and configure them as you would configure a physical machine connected to your network. The difference between OpenNebula and Amazon EC2 (and other public cloud providers) is that Amazon EC2 is a public service: Amazon uses an internal infrastructure-management tool, much like OpenNebula, to provide those virtual resources to people on demand.

OpenNebula is a cloud management tool that coordinates storage, network and virtualization technologies, and helps users dynamically deploy and manage virtual machines on physical resources, according to allocation policies, at data centers and on remote cloud resources. OpenNebula is mainly used to manage the data center of a private cloud and the infrastructure of a cluster, and it also supports hybrid clouds that connect local and public infrastructure. This is very useful for building highly scalable cloud computing environments. OpenNebula also supports the public cloud model by providing interfaces and functions for virtual machine, storage and network management.

The OpenNebula cloud computing platform has many advantages. First, from the view of infrastructure management, it can dynamically adjust the scale of the cloud platform's infrastructure by increasing the number of hosts and by partitioning clusters to meet different requirements.

We can use the infrastructure editor in the web interface to create and modify the disposable infrastructure for applications; the same editor is used to create a new application and to edit an existing one. The disposable infrastructure manager handles the infrastructure for each AppLogic application, including virtual appliances, catalogs and so on.

Second, it can manage all virtually and physically distributed infrastructure in a centralized way and can build infrastructure from heterogeneous resources at data centers. This guarantees more efficient use of resources and can reduce the number of physical resources through server consolidation, which in turn reduces the costs of space, management, energy consumption, cooling and so on. From the point of view of infrastructure users, OpenNebula is scalable and can respond rapidly to user requirements. From the point of view of system integrators, users can deploy any kind of cloud and integrate virtual data centers and products or services with management tools such as cloud providers, virtual machine managers, virtual image managers, service managers and so on. Because OpenNebula is open source and flexible, with extensible interfaces, structure and components, it is suitable for use in any kind of data center.

Compared with Eucalyptus, OpenNebula has stronger support for private cloud platforms and for dynamic management of the scalability of virtual machines on clusters. For hybrid clouds, it provides on-demand access and elastic mechanisms just as Amazon EC2 does. Other open-source solutions mainly focus on public cloud features and do not realize the full potential of virtualization in the data center to enable a private cloud. OpenNebula supports KVM, VMware and Xen.

OpenNebula provides various interfaces that can be used to manage and interact with physical and virtual resources. The two main interfaces are the command line interface (CLI) and the Sunstone GUI. The CLI is a set of commands for interacting with the system from a shell. OpenNebula Sunstone is a graphical user interface intended for regular users and administrators that simplifies the typical management operations in private and hybrid cloud infrastructures; it allows us to easily manage all OpenNebula resources and perform typical operations on them. Apart from these, there are several cloud interfaces, such as OCCI (Open Cloud Computing Interface) and the EC2 (Elastic Compute Cloud) Query API, that can be used for building public clouds.
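Both the CLI and Sunstone ultimately talk to the OpenNebula daemon over its XML-RPC interface, which can also be scripted directly. The short Python sketch below lists the virtual machine pool; the endpoint URL, the placeholder credentials and the exact one.vmpool.info argument values are assumptions that should be checked against the XML-RPC reference for the OpenNebula version in use.

# Sketch: query OpenNebula's XML-RPC endpoint directly.
import xmlrpc.client

ENDPOINT = "http://localhost:2633/RPC2"   # default oned XML-RPC endpoint (assumption)
SESSION = "oneadmin:oneadmin"             # placeholder "username:password" session string

server = xmlrpc.client.ServerProxy(ENDPOINT)

# one.vmpool.info(session, filter_flag, start_id, end_id, state)
# filter_flag -2 asks for all VMs visible to the caller; -1, -1 spans the whole
# pool; state -1 means any state. Verify these values for your release.
response = server.one.vmpool.info(SESSION, -2, -1, -1, -1)

if response[0]:
    print(response[1])                    # XML document describing the VM pool
else:
    print("OpenNebula error:", response[1])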

OpenNebula supports user accounts and groups. A user in OpenNebula is defined by a username and password; each user has a unique ID and belongs to a group. A group makes it possible to isolate users and resources, preventing a user in one group from accessing the resources allocated to other groups. A powerful Access Control List (ACL) mechanism allows fine-grained permission granting.

OpenNebula has a Network Subsystem that is easily adaptable and customizable, which allows better integration with existing data centers. It makes use of VLANs and Open vSwitch to restrict network access.

The Virtualization Subsystem (virtualization manager) is the component in charge of talking with the hypervisor installed in the hosts and taking actions needed for each step in the VM lifecycle.

The Storage Subsystem can be configured to support non-shared and shared filesystems. It is flexible enough to support as many different image storage configurations as possible.

The key features of the stable OpenNebula 3.0 release are:

OpenNebula offers powerful user security management, which makes use of a pluggable Auth Subsystem for the authentication and authorization of requests. It supports authentication based on passwords, SSH RSA keypairs, X.509 certificates or LDAP.

OpenNebula supports groups and ACLs (Access Control Lists). Groups allow administrators to isolate users and their resources from one another, while OpenNebula's implementation of ACLs allows cloud administrators to permit or deny operations for users and groups.

On-demand provisioning of Virtual Data Centers (VDCs) is yet another feature OpenNebula offers. A Virtual Data Center is a fully isolated virtual infrastructure environment where a group of users, under the control of the VDC administrator, can create and manage compute, storage and networking capacity.

Advanced control of the virtual infrastructure is provided through an image/template repository subsystem with a catalog and complete functionality for VM image and template management. OpenNebula offers full control of the VM instance life cycle and complete functionality for VM instance management.
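As an illustration of this template-driven life cycle, the sketch below registers a minimal VM template and instantiates it over the XML-RPC interface. The image and network IDs, the credentials and the exact signatures of one.template.allocate and one.template.instantiate are assumptions to be verified against the installed release.

# Sketch: register a VM template and launch an instance from it.
import xmlrpc.client

server = xmlrpc.client.ServerProxy("http://localhost:2633/RPC2")
session = "oneadmin:oneadmin"             # placeholder credentials

# Minimal template; IMAGE_ID and NETWORK_ID are hypothetical IDs that must
# already exist in the image repository and virtual network list.
template = """
NAME   = "ttylinux-test"
CPU    = 0.5
MEMORY = 512
DISK   = [ IMAGE_ID = 0 ]
NIC    = [ NETWORK_ID = 0 ]
"""

rc = server.one.template.allocate(session, template)
if rc[0]:
    template_id = rc[1]
    # one.template.instantiate(session, template_id, vm_name)
    rc = server.one.template.instantiate(session, template_id, "test-vm-1")

print("OK, id =" if rc[0] else "Error:", rc[1])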

Effective monitoring of the virtual infrastructure is supported through configurable system usage statistics, with the OpenNebula Watch utility to visualize and report resource usage. OpenNebula also allows the automatic configuration of VMs and supports a wide range of guest operating systems, including Microsoft Windows and Linux.

The Hook Manager mechanism in OpenNebula can trigger administration scripts upon VM state changes, which opens a wide area of automation for system administrators to tailor their cloud infrastructures.
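For example, a hook could invoke a small script like the one below whenever a VM enters the RUNNING state. The script is only illustrative, and the VM_HOOK registration shown in the comments paraphrases the oned.conf syntax, so both should be checked against the documentation of the release in use.

#!/usr/bin/env python3
# Illustrative hook script. A hook along these lines could be registered in
# oned.conf with something like (paraphrased, verify against your release):
#   VM_HOOK = [ name = "log_running", on = "RUNNING",
#               command = "log_running.py", arguments = "$ID" ]
# OpenNebula would then call the script with the VM ID on that state change.
import sys
import syslog

def main() -> None:
    vm_id = sys.argv[1] if len(sys.argv) > 1 else "unknown"
    # Put custom automation here: update a CMDB, register DNS, send a notification...
    syslog.syslog(syslog.LOG_INFO, f"OpenNebula VM {vm_id} entered the RUNNING state")

if __name__ == "__main__":
    main()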

OpenNebula can be configured to deploy public, private and hybrid clouds. Its powerful and extensible built-in monitoring subsystem and Host Management Subsystem help in the advanced control and monitoring of the physical infrastructure.

OpenNebula has a flexible Network Subsystem and a powerful Storage Subsystem. It possesses a Virtualization Subsystem with broad hypervisor support (Xen, KVM and VMware), centralized management of environments with multiple hypervisors, and support for multiple hypervisors within the same physical box. Centralized management of multiple zones is yet another feature offered by OpenNebula.

OpenNebula offers high availability support: it has a persistent database back end with support for high-availability configurations.

OpenNebula supports hybrid cloud computing, managing an external public cloud as if it were just another local resource. Therefore, any virtualized service can transparently use the public cloud.

Amazon Route 53 is a highly available and scalable Domain Name System (DNS) web service. Route 53 effectively connects user requests to infrastructure running in Amazon Web Services (AWS) – such as an Amazon Elastic Compute Cloud (Amazon EC2) instance, an Amazon Elastic Load Balancer, or an Amazon Simple Storage Service (Amazon S3) bucket – and can also be used to route users to infrastructure outside of AWS.

OpenNebula also offers cloud interfaces to users of a private or hybrid cloud, so that they can grant external parties access to their cloud infrastructure or sell their spare capacity to someone else. It supports the AWS EC2 cloud API well, and it can serve multiple cloud APIs simultaneously.
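As a sketch of what the EC2-compatible interface allows, the snippet below points the classic boto library at an OpenNebula cloud instead of AWS. The host name, the port (4567 is a common default for the econe server) and the credential values are placeholders; consult the econe-server documentation for the exact values your deployment expects.

# Sketch: use an EC2 client library against OpenNebula's EC2 Query interface.
import boto
from boto.ec2.regioninfo import RegionInfo

region = RegionInfo(name="opennebula", endpoint="cloud.example.com")  # placeholder host
conn = boto.connect_ec2(
    aws_access_key_id="oneuser",          # placeholder: the OpenNebula user name
    aws_secret_access_key="secret",       # placeholder: see the econe-server docs
    is_secure=False,
    region=region,
    port=4567,                            # assumed econe-server port
    path="/",
)

# The same boto calls used against AWS then work against the private cloud.
for reservation in conn.get_all_instances():
    for instance in reservation.instances:
        print(instance.id, instance.state)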

Another interesting point is that it provides user self-service portals and easy-to-use interfaces. OpenNebula provides a Unix-like command line interface that supports a wide range of operations, including VM images, VM templates, virtual networks, zones, authentication and more. For added user friendliness, it offers the Sunstone graphical interface with usage statistics and CloudWatch-like functionality; this supports VNC, multiple-zone management, different views for different roles and so on.

OpenNebula packages are available for almost all Linux distributions and can be easily installed. We can configure and customize the software according to our requirements using the OpenNebula source code. OpenNebula consumes only around 10 MB of disk space when installed. The detailed log files produced by OpenNebula help in understanding the internal workflow and in quick troubleshooting.

OpenNebula QA, a part of the OpenNebula project, carries out automated testing and quality analysis for functionality, scalability, robustness and so on. The tests conducted by OpenNebula QA include unit tests, system tests, system integration checks and scalability checks. OpenNebula has been proven in environments consisting of tens of thousands of cores and VMs.

OpenQRM

Quoting the official OpenQRM site: “OpenQRM is the next generation, open-source Data-center management platform. Its fully pluggable architecture focuses on automatic, rapid- and appliance-based deployment, monitoring, high-availability, cloud computing and especially on supporting and conforming multiple virtualization technologies.

OpenQRM is a single-management console for the complete IT infrastructure and provides a well-defined API which can be used to integrate third-party tools as additional plugins.” The OpenQRM platform provides an easy way to build a private cloud network inside your office/organization network.

System Requirements for Installing OpenQRM

The OpenQRM cloud software can be installed only on a Linux/Unix server; however, it supports most other major operating systems as clients.

The requirements for this software are:
  • A Linux machine/server
  • Minimum 1 GB RAM
  • A database (MySQL/PostgreSQL/Oracle/DB2)
  • Hosts with Intel VT or AMD-V (SVM) hardware virtualization enabled

As you can see, the requirements for this software are very simple and will not require any additional hardware purchases.
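If you want to check a candidate host against these requirements before installing, a small script like the following (a convenience sketch, not part of OpenQRM) can confirm that hardware virtualization is enabled and report the available RAM on a Linux machine.

# Sketch: sanity-check a Linux host before using it with OpenQRM.
import re

def has_hw_virtualization(cpuinfo_path: str = "/proc/cpuinfo") -> bool:
    """True if the CPU exposes the Intel VT-x (vmx) or AMD-V (svm) flag."""
    with open(cpuinfo_path) as f:
        return re.search(r"\b(vmx|svm)\b", f.read()) is not None

def total_ram_mb(meminfo_path: str = "/proc/meminfo") -> int:
    """Total system memory in megabytes, read from /proc/meminfo."""
    with open(meminfo_path) as f:
        for line in f:
            if line.startswith("MemTotal:"):
                return int(line.split()[1]) // 1024   # value is in kB
    return 0

if __name__ == "__main__":
    print("Hardware virtualization:", "yes" if has_hw_virtualization() else "no")
    print("RAM (MB):", total_ram_mb())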

Now, coming to the capabilities of this cloud, with OpenQRM you will be able to:

  • Use your existing physical servers and storage devices
    You will not be required to purchase any additional hardware to meet OpenQRM’s requirements; you can continue to use your existing resources with OpenQRM. The main advantage is that under-utilized resources can be driven to their full potential, giving maximum output from your existing hardware.
  • Can easily add new cloud resources automatically
    When you need to add additional physical or virtual servers/devices to the cloud, you don't need to do this manually; it is all done automatically by OpenQRM using the custom PXE boot mechanism provided with it.
  • Easy fail-over with a special “N+1” failover setup
    Setting up failover and automatically launching new instances when a server fails has never been so easy. With OpenQRM all these tasks are done automatically; you just need to configure failover and high availability from the cloud portal, and OpenQRM handles the rest for you. It even supports an N+1 failover setup, so you don't need to spend more money on idle failover resources.
  • Supports Windows/Unix/Linux clients
    If you suspect that, because it is open source, you will not be able to run Windows server instances on your cloud, you are wrong: with OpenQRM you can run Windows servers alongside native Linux server instances on the cloud.
  • Add a running machine as a cloud resource
    Suppose you have a running Windows/Linux server in your office or organization with a lot of free disk space available. If you wonder how to use this space as a cloud resource without rebooting or PXE-booting the server, you just need to execute a script/executable provided by OpenQRM; the server is then added to OpenQRM as a resource, and you can use the free storage for the cloud.
  • All control from a cloud management portal
    As mentioned above, OpenQRM comes with a web-based, fully fledged cloud management portal similar to (or even better than) the ones you get with public cloud services. You can do almost all tasks related to your cloud from this panel.
  • API support
    OpenQRM comes with a set of APIs and command line tools that can be used by your development team in cases where you need to control OpenQRM from external applications.
  • Supports high availability
    OpenQRM comes with a custom high availability plug-in which allows you to launch new servers when a current server goes down. This is very helpful in cases where your application is important and HA is critical for your project. DRBD, as you all know, is an important HA mechanism used today, and OpenQRM comes with a web portal for managing DRBD on its clients.

There are plenty of such exciting features and facts about OpenQRM; going through all the plug-ins and features is not possible in this single post. Anyhow, we will try to give you a better understanding of the OpenQRM infrastructure that will help you set up a private cloud inside your office/organization. You can read more about OpenQRM on the official website, www.openqrm.com. We will start with the basic OpenQRM cloud architecture and then move on to each of its components.

OpenQRM Architecture

As you can see from the sample architecture, the OpenQRM software binds different resources and technologies together and constitutes them into a cloud network within your infrastructure. This plug-in architecture is what makes OpenQRM special compared with other cloud software.

Components

Components are the main building units of the private cloud; different instances of components are joined together when launching a server instance within the cloud. The main components that make up the cloud are:

Resources

The resources shown in the architecture refer to the actual physical/virtual servers and storage devices in your current infrastructure. You will be using these devices as your cloud resources. Adding new resources to the cloud can be done automatically via the custom PXE boot mechanism provided with OpenQRM. Also, as mentioned above, already-running machines can be added to the cloud using a script/executable file provided by OpenQRM.

Virtualization Technology

This is the virtualization technology that you will use for your cloud. OpenQRM supports all the major open-source virtualization technologies, such as KVM, Xen and VirtualBox. When it comes to serious virtualization, the two main open-source technologies used today are KVM and Xen, and your cloud supports both.

And if you know nothing about these technologies, you don't need to worry about that either: all related tasks can be done from the web portal.

OpenQRM performs this virtualization with the help of plugins shipped along with it. You don't need to compile the kernel or install and configure the virtualization software and do all the other tiresome work; you just enable a plugin and mark a resource with that plugin, and that's all. In some cases you might need to install a few packages to meet the software's requirements, but that too can be done from the package manager.

Storage Server

This is the storage mechanism used within your cloud. For most clouds, for reliability, we should not use ordinary hard-disk-based storage: when using it, you are compromising the HA of your cloud. Since hard disks are usually the first thing to fail on a server, storing your cloud instances on them is not suitable.

We highly recommend using modern storage mechanisms like a SAN to get the best performance from the cloud. You can even build a software SAN using your existing disks, arranged as RAID for better availability and reliability. For testing purposes you can use a physical Linux/Unix/Windows server as a storage server, but we highly recommend the use of a SAN.

Now, coming to the storage mechanisms supported by OpenQRM: it supports iSCSI/AoE (Coraid) based SANs, native LVM and ZFS storage, NFS storage solutions and so on. DRBD, even though it cannot strictly be called a storage mechanism, provides mirroring, and OpenQRM supports it as well. The latest release of OpenQRM is also said to support GlusterFS storage.

Once you enable a storage plugin, say LVM, you create the storage server component using an idle resource in the cloud. After you mark an idle resource with the storage plugin, that resource becomes the new storage server, and you can manage its logical volumes from the cloud portal.

Now you can take a server boot image (you can either create it yourself or download it from OpenQRM's image store) and save it to an unused logical volume. These boot images are then used within appliance configurations for launching cloud server instances.

Suppose you have already saved a particular set of boot image files to a logical volume. You can clone this logical volume to a new one, so you don't need to go through the tasks of downloading, extracting and saving boot images to the storage server again.
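To illustrate what the portal automates here (this is not OpenQRM's actual implementation, just a sketch with placeholder volume-group and volume names), cloning a boot-image volume with LVM amounts to creating a new logical volume and copying the source volume into it:

# Sketch: clone an LVM logical volume holding a prepared boot image.
import subprocess

def clone_logical_volume(vg: str, source_lv: str, target_lv: str, size: str) -> None:
    """Create a new LV of the given size and copy the source LV into it."""
    subprocess.run(["lvcreate", "-L", size, "-n", target_lv, vg], check=True)
    subprocess.run(
        ["dd", f"if=/dev/{vg}/{source_lv}", f"of=/dev/{vg}/{target_lv}", "bs=4M"],
        check=True,
    )

# Example (placeholder names): clone the volume holding a prepared Debian image.
# clone_logical_volume("cloudvg", "debian-base", "debian-clone", "10G")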

Images

From the above-mentioned storage server we will now create the appliance images. We create the image configuration using the cloud management portal: give the image configuration a unique name, select the correct logical volume from the storage server, and save the configuration. These images are similar to Amazon's AMIs, and using them you can launch new server instances.

Kernel

These are the Linux kernels used by the server instances in the cloud. The kernels are saved to the OpenQRM server; when an appliance starts booting, it fetches the correct kernel as specified in the appliance configuration. These are normal Linux kernels: OpenQRM ships with a default kernel, and you can manually add additional kernels as needed.

These are the main components, which can be called the building blocks of the OpenQRM cloud. The above-mentioned component instances are plugged into an appliance configuration to create a cloud server instance, as detailed below.

Appliances

Appliances are the cloud server instances running inside the cloud; they can be physical or virtual server instances. You create an appliance configuration from the portal using the component instances (image, kernel, resource). Once the appliance configuration is complete, with a boot image, a kernel and the resource on which the appliance should run, you can start the appliance, and a server instance with the given configuration is launched on the resource you specified.

In this way OpenQRM's pluggable design makes it easy for you to create, launch and delete instances through the portal.

Still confused about the plug-in model? Let us try again to make it simpler for you.

You are going to create component instances first, then launch appliances from them:

  • You have added a few resources to your cloud, either automatically or manually.

  • Your cloud comes with a default kernel which the appliances will use; you can add additional kernels if you want newer kernels than the default one.
  • You will now enable a storage plugin, say LVM, and create a new storage server component using the enabled plugin (LVM) on a resource in the resource pool. This server is used for storing the boot images of your cloud server instances. You can enable the image-shelf plugin in OpenQRM to download boot images from an OpenQRM-managed image pool.
  • You now need to create images, which are similar to Amazon AMIs. While creating an image, you specify which storage server should be used, which logical volume on that server should be used, and so on. This image thus contains the configuration used by appliances to use a particular boot volume from the storage server.
  • You now enable a virtualization plugin, say KVM, assign an idle resource as your KVM host, select the necessary image and kernel, and start the KVM host appliance. The physical server (the resource marked as KVM host) will restart as a new KVM host in your cloud.
  • Now you have a KVM host resource and a storage server resource in your cloud; you have everything needed to launch your final cloud server instance. You now create the server appliance configuration by selecting a kernel and an image, and then launch it on the new KVM host resource.

That’s all: now you have a running KVM VM on your KVM host.

As you can see, the complete process of launching a server instance inside your cloud is very simple using the portal provided by OpenQRM.

Cloud Management Portal

This is the web-based cloud management portal included with OpenQRM; almost all tasks related to the cloud can be done from this portal. You can see sections for events (logs), appliances, components (kernel, image, storage, resource, etc.), the plug-in manager and so on.

In this post we have covered the basic details of OpenQRM: the different components, appliances, adding and removing resources, and so on.

With a well-planned configuration of the cloud, you can save on resources and get more output from your existing hardware. The installation, configuration and tweaking of OpenQRM are very simple, and everything can be controlled from the web portal, making the setup of a cloud using OpenQRM inside your office/organization a very easy task.