• A single pane of glass into all your public and private cloud accounts.

A smarter way to manage all your cloud resources from one place. A Hybrid Cloud that will pass the test of time and go beyond!

Basic Working of xPlore.Cloud: A Walkthrough

The following schematic diagram presents a very high-level, simplified view of the overall architecture of the xPlore.Cloud solution.

https://xplore.cloud/img/screenshots/xcarch-004.png

Let us now walk through a typical workflow: adding support for listing and managing a resource type within a target cloud using the xPlore.Cloud hybrid cloud development platform.

STEP 1: Decide on one or more target platforms. To make things concrete, let us assume we are targeting an AWS cloud account. One might argue that a single AWS account is not much of a Hybrid cloud, but it will still give you a clear picture of what is involved without overcomplicating things at this early stage.

STEP 2: Launch a CAS within your AWS account and activate it (refer to the Getting Started section).

While activating the CAS from the xPlore.Cloud Add Access Server interface, remember to select Generic for the Target Cloud. Choosing AWS here would install the xPlore.Cloud reference implementation onto the CAS, which we do not want in this walkthrough.

While launching the CAS virtual machine, attach a suitable IAM role to it so that you do not have to store any AWS credentials even on the CAS! You can learn more about this here.

STEP 3: From within the HyBench interface on the front-end, add a feature and name it virtual_machine. You may choose a convenient human-readable name like Servers as the nickname for the feature; this is what will show up on the self-service portal xPlore within the front-end.

STEP 4: Decide upon a suitable Python library that lets you connect to and manage AWS accounts programmatically, e.g., boto3. Since boto3 comes pre-installed on the CAS, just insert the following line at the top of the feature file (you can edit this file using the code editor within the HyBench tool):

import boto3

At this stage, the virtual_machine.py file will look something like the following:

import boto3
def create(jsonData, store, settings):
    result = {}

    #Add code to create the entity in your cloud

    return { 'error_code' : '0', 'error_msg' : '', 'result' : result }

def readall(jsonData, store, settings):
    #there is no need to add/modify any code in this function

    filter_field = ''
    filter_value = ''
    if 'filter_field' in jsonData:
        filter_field = jsonData['filter_field']
        filter_value = jsonData['filter_value']
    result = store.read_all(filter_field, filter_value, jsonData['region_id'])
    return { 'error_code' : '0', 'error_msg' : '', 'result' : result }

def readone(jsonData, store, settings):
    #there is no need to add/modify any code in this function

    result = store.read_one(jsonData['instance_id'], jsonData['region_id'])
    return { 'error_code' : '0', 'error_msg' : '', 'result' : result }

def remove(jsonData, store, settings):
    result = {}

    #Add code to delete the entity in your cloud. jsonData['instance_id'] is the unique
    #identifier (this coincides with the field you designated as id_field)

    return { 'error_code' : '0', 'error_msg' : '', 'result' : result }

def resync(jsonData, store, settings):
    result = []

    #Add code here to fill up the result list with entities fetched from your cloud
    #Field names in each dict members of the list to match field names defined by you
    #at the HyBench interface

    store.remove_all(jsonData['region_id'])
    store.save_all(result, jsonData['id_field'], jsonData['region_id'])
    return { 'error_code' : '0', 'error_msg' : '', 'result' : result }

def jobstatus(jsonData, store, settings):
    #Modify this function to send notifications back about status of a resource in case there is a
    #long running process running in the background. E.g. a server is being created
    result = { 'status' : 'wip' }
    #possible values for status = 'wip'/'complete'/'failed'
    return { 'error_code' : '0', 'error_msg' : '', 'result' : result }

STEP 5: In the above code, you may never need to modify the functions readall and readone; they are complete for almost all normal needs.

You need to modify jobstatus only when the feature involves long-running processes. E.g., for our Servers feature, the Stop and Start operations are indeed long-running, so we may at some point need to modify the jobstatus function.

The function create is to be modified, with appropriate code added, when we implement an Add Server action, say.

The same goes for remove: you modify it and add VM-termination code when you implement a Delete Server action.

However, you will always need to modify the resync function, which corresponds to the Resync button that appears towards the top right of the listing screen for a Feature in the xPlore interface. This action, as the name suggests, brings over data on the target resource from AWS EC2 and saves it locally.

Let us now add code to the resync function that will fetch the virtual machine data from the AWS account and store it in the local database on the CAS. Since in Step 2 above we attached an IAM role to the CAS instance, we do not need to do anything special to authenticate our call to EC2; boto3 automagically does this for us. You can read about this feature here.

Assuming, quite unrealistically but to reduce repetition in the code, that you host all your AWS instances in the us-west-2 region, here are the two lines that will fetch all instances there:

ec2 = boto3.client('ec2', region_name='us-west-2')
response = ec2.describe_instances()
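The response is a deeply nested dict. Trimmed down to just the fields we will use, it has roughly this shape (the values here are made up for illustration):

```python
# Illustrative shape of the describe_instances response,
# trimmed to the fields used in this walkthrough
response = {
    'Reservations': [
        { 'Instances': [
            { 'InstanceId'       : 'i-0123456789abcdef0',
              'PrivateIpAddress' : '10.0.1.25',
              'PublicIpAddress'  : '54.200.10.7',
              'State'            : { 'Name' : 'running' } } ] } ]
}
```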

All that is now left to do, to complete our resync implementation, is to loop through the VMs returned, collect the data we want to store on each VM as key-value pairs in a dict representing a row of data, and append it to the empty result list already declared within the boilerplate code. Like so:

for r in response['Reservations']:
   for vm in r['Instances']:
      instance_id = vm['InstanceId']
      priv_ip = vm['PrivateIpAddress']
      pub_ip = 'N/A'
      if 'PublicIpAddress' in vm: #because the Public IP Address may not be defined for all VMs
         pub_ip = vm['PublicIpAddress']
      state = vm['State']['Name']

      result.append({ 'instance_id' : instance_id, 'priv_ip' : priv_ip, 'pub_ip' : pub_ip, 'state' : state })

Here is how the modified resync function should look:

def resync(jsonData, store, settings):
   result = []

   #Add code here to fill up the result list with entities fetched from your cloud
   #Field names in each dict members of the list to match field names defined by you
   #at the HyBench interface
   ec2 = boto3.client('ec2', region_name='us-west-2')
   response = ec2.describe_instances()
   for r in response['Reservations']:
      for vm in r['Instances']:
         instance_id = vm['InstanceId']
         priv_ip = vm['PrivateIpAddress']
         pub_ip = 'N/A'
         if 'PublicIpAddress' in vm: #because the Public IP Address may not be defined for all VMs
            pub_ip = vm['PublicIpAddress']
         state = vm['State']['Name']

         result.append({ 'instance_id' : instance_id, 'priv_ip' : priv_ip, 'pub_ip' : pub_ip, 'state' : state })

   store.remove_all(jsonData['region_id'])
   store.save_all(result, jsonData['id_field'], jsonData['region_id'])
   return { 'error_code' : '0', 'error_msg' : '', 'result' : result }

And that is all there is to it. The boilerplate code takes care of removing stale data and storing fresh data in the local database.

You could collect a lot more data to store in each row, but I guess you get the flow of things by now!

STEP 6: Next, you need to tell the xPlore front-end about the four fields you are storing for the Feature and want to show in the listing pane for the feature. For this, revisit the HyBench Feature editor and, on the General tab towards the bottom, add the four fields we defined within the resync function, viz., instance_id, priv_ip, pub_ip and state. Remember to tag the instance_id field as an ID Field, as we will need that in the next step when we add the Start action to our Feature.

STEP 7: Now go to the Actions tab within the Feature editor and add an action named start. Just remember to set the Global flag to No and Visible to Yes; the rest do not matter at this stage. Once you add this action, go to the Code editor tab and you will find the following function added to the bottom of the Feature code:
def start(jsonData, store, settings):
    result = {}
    return { 'error_code' : '0', 'error_msg' : '', 'result' : result }

The instance_id that identifies this VM is available within this function as jsonData['instance_id']. Now, let us use boto3 once again to programmatically start this VM.

However, first add the following line to the top of the file, so that you can intelligently catch exceptions if they occur:

from botocore.exceptions import ClientError

Now, you are finally ready to write the few lines that are necessary to start a VM from the CAS. Here is how the start function should look now:

def start(jsonData, store, settings):
   ec2 = boto3.client('ec2', region_name='us-west-2')
   try:
       ec2.start_instances(InstanceIds=[jsonData['instance_id']], DryRun=False)
       return { 'error_code' : '0', 'error_msg' : '', 'result' : {} }
   except ClientError as e:
       return { 'error_code' : '1000', 'error_msg' : 'Could not start VM: ' + str(e), 'result' : {} }

True, there is still some work left to complete the feature (for instance, we need to flesh out the functions create, remove and jobstatus), but none of it should be more difficult than the couple we fleshed out above.

You may now test out this feature by entering Preview mode: click the film-roll button at the top-right of the Feature editor.

Once in Preview mode, you will see that the Servers tab is highlighted on the interface, showing an empty table for now. Click the Resync button at the top right of the pane. You may have to refresh the screen a few times, but eventually you should see the VMs listed in the interface. Each row will have a Delete button (trash-can icon), which incidentally will do nothing since we did not flesh out the remove function, and another button corresponding to the Start action we defined above.

You can now click on the Start button for a stopped VM to start it.

About

Indranil is the Chief Product Officer at xPlore.Cloud and the main architect of the product.
