Rubrik : Using Rubrik's APIs

Background

One use case for the Rubrik API is when you have a master SQL instance and a stand-by server that needs to be populated with the latest data set. When a disaster occurs, you want to switch to DR mode as quickly as possible to avoid service interruption and an excessive recovery point objective (RPO). In this case, I thought that automating recovery to a certain point in time on a regular schedule could do the job. Rubrik does not support automated restore/export (yet), but there is a way to do it yourself.

Let's API !

Yes, the API is the key, and APIs are really, really helpful. You know my love for PHP. Yes, I know, it's old school and not making the buzz anymore these days. But honestly, PHP is an all-terrain language, and this post proves by itself how easy it is to use it for any kind of problem. I'm not giving coding lessons and I'm not a qualified programmer, but I really think it is easy to build things that just work, even if they could definitely be optimized by a professional developer.

There are two URLs you need to know about for the Rubrik APIs. They lead to what is called the playground, a place where you can learn and try API calls by yourself before implementing them into your code.
The first playground is called v1 and the second is called INTERNAL. You need to pay attention to the second one, since it is subject to change between Atlas version upgrades. Be sure to re-check your API calls once upgraded.
  • v1 can be reached at https://<node-ip>/docs/v1/playground/
  • INTERNAL is here : https://<node-ip>/docs/internal/playground/
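Before wiring anything into a script, you can reproduce a playground call with a one-off request. Here is a minimal sketch, assuming basic authentication is enabled on the cluster and that /api/v1/cluster/me is the resource behind the cluster details (both are assumptions on my side, verify them in your own playground) :

<?php
// Minimal sketch : reproduce a v1 playground call from PHP.
// Assumptions : basic authentication, self-signed certificate on the brik,
// and the /api/v1/cluster/me resource (confirm the exact path in the playground).
$node="node-ip";
$auth=base64_encode("user:password");

$context=stream_context_create(array(
    "http" => array("header" => "Authorization: Basic ".$auth),
    "ssl"  => array("verify_peer" => false, "verify_peer_name" => false)
));

$raw=file_get_contents("https://".$node."/api/v1/cluster/me", false, $context);
print_r(json_decode($raw));
?>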
First, let's set up the environment. In my case, I'm working with three files: an include file with all my functions, a credentials file where I save my access to the brik (for security, I prefer to keep it separate), and finally my main code.

Credentials.php

<?php
$clusterConnect=array(
    "username" => "user",
    "password" => "password",
    "ip"       => "ip_address"
);
?>

You can find my PHP framework on GitHub. Feel free to download it and use it in your projects.
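For reference, those framework functions are essentially thin wrappers around curl calls to the v1 and INTERNAL endpoints. Here is a rough sketch of what such a wrapper can look like (this is an approximation for the sake of the explanation, not the exact GitHub code, and rkApiGet is a made-up name) :

<?php
// Rough sketch of a framework-style GET wrapper (approximation, not the GitHub code).
// It takes the $clusterConnect array from credentials.php and an API path such as
// "/api/v1/cluster/me", and returns the raw JSON answer from the brik.
function rkApiGet($clusterConnect, $path)
{
    $ch=curl_init("https://".$clusterConnect["ip"].$path);
    curl_setopt_array($ch, array(
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_USERPWD        => $clusterConnect["username"].":".$clusterConnect["password"],
        CURLOPT_SSL_VERIFYPEER => false,   // the brik uses a self-signed certificate
        CURLOPT_SSL_VERIFYHOST => false
    ));
    $result=curl_exec($ch);
    curl_close($ch);
    return $result;
}
?>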

And finally, the main project file. Here is a sample of code that retrieves some of your cluster's key values.

rkGetInfo.php (also available on GitHub)

#!/usr/bin/php
<?php
include_once "credentials.php";
include_once "rkFramework.php";

$padSize=80;
$lastEventCount=5;

// ===========================================================================
// Main entry point
// ===========================================================================

$cluster=json_decode(getRubrikClusterDetails($clusterConnect));
$SLA=json_decode(getRubrikSLAs($clusterConnect));

print("+-".str_pad("",$padSize-11,"-",STR_PAD_RIGHT)."-+\n");

print("| ".str_pad("Cluster Name : ".rkColorOutput($cluster -> name),$padSize," ",STR_PAD_RIGHT)." |\n");

print("+-".str_pad("",$padSize-11,"-",STR_PAD_RIGHT)."-+\n");

print("| ".str_pad("Atlas version : ".rkColorOutput($cluster -> version),$padSize," ",STR_PAD_RIGHT)." |\n");
print("| ".str_pad("Total capacity : ".rkColorOutput(formatBytes(json_decode(getRubrikTotalStorage($clusterConnect))->bytes)),$padSize," ",STR_PAD_RIGHT)." |\n");
print("| ".str_pad("Number of node(s) : ".rkColorOutput(json_decode(getRubrikNodeCount($clusterConnect))->total),$padSize," ",STR_PAD_RIGHT)." |\n");

$clusterData=json_decode(getRubrikNodeCount($clusterConnect));
$nodeNum=1;
foreach ($clusterData->data as $item) 
{
print("| ".str_pad("Node #".$nodeNum." : ".rkColorOutput($item->id." (".$item->ipAddress.")"),$padSize," ",STR_PAD_RIGHT)." |\n");
$nodeNum++;
}

print("+-".str_pad("",$padSize-11,"-",STR_PAD_RIGHT)."-+\n");

print("| ".str_pad("Available SLAs (Total VMs)",$padSize-11," ",STR_PAD_RIGHT)." |\n");

print("+-".str_pad("",$padSize-11,"-",STR_PAD_RIGHT)."-+\n");

foreach ($SLA->data as $item)
{
    $obj = $item->numVms + $item->numNutanixVms + $item->numHypervVms;

    print("| ".str_pad(rkColorOutput($item->name." (".$obj.") "),$padSize," ",STR_PAD_RIGHT)." |\n");
}
print("+-".str_pad("",$padSize-11,"-",STR_PAD_RIGHT)."-+\n");
$availableSpace=json_decode(getRubrikAvailableStorage($clusterConnect));
print("| ".str_pad("Available Space : ".rkColorOutput(formatBytes($availableSpace->value)),$padSize," ",STR_PAD_RIGHT)." |\n");
print("| ".str_pad("Cluster Runway : ".rkColorOutput(json_decode(getRubrikRunway($clusterConnect))->days." day(s)"),$padSize," ",STR_PAD_RIGHT)." |\n");
print("+-".str_pad("",$padSize-11,"-",STR_PAD_RIGHT)."-+\n\n");

print("Last ".$lastEventCount." events in the cluster of type 'Backup' \n\n");

$events=json_decode(getRubrikEvents($clusterConnect,$lastEventCount,"Backup","",""));

foreach ($events->data as $item)
{
    print("Time : ".$item->time."\n");
    print("Message : ".json_decode($item->eventInfo)->message."\n");
    print("---------\n");
}


?>


There is a bit of layout code in the script to make the output look nice, but the interesting part is definitely not there.

Here is the output on my test EDGE :

Sorry for the red rectangle, I don't want to reveal anything sensitive.

This is a pretty good example of how to use the provided framework and its list of functions.

Now, for the MS SQL Export/DR

The global idea is to restore a specific DB, or a list of specific DBs, from its/their latest possible recovery point. I'm passing the source DB name and host as arguments, as well as the target details.

The call looks like this : 

$restore=rkMSSQLRestore(
    $clusterConnect,
    rkGetMSSQLid($clusterConnect,$DataBase,$Host),
    $targetInstanceID,
    $tDataBase,
    rkGetEpoch(date('D M d H:i:s e Y', $latestRecoveryPoint)),
    $targetPath
);

$clusterConnect : the credentials to your Rubrik cluster;
rkGetMSSQLid : returns the internal SQL ID of the $DataBase/$Host found on the brik;
$targetInstanceID : the target instance ID found on the brik;
$tDataBase : the target database name (can be the same as the source);
rkGetEpoch : the timestamp from which you restore the DB;
$targetPath : the absolute path on the target server (something like "C:\Data")
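To illustrate the helper side, a function like rkGetEpoch only has to turn a date into the millisecond timestamp the restore call expects (that is what the append-"000" trick in the loop version below suggests). A minimal, hypothetical sketch :

<?php
// Hypothetical sketch of an rkGetEpoch-style helper : turn a date string into an
// epoch timestamp in milliseconds. The real framework function may differ.
function rkGetEpochSketch($dateString)
{
    // strtotime() returns seconds, the restore call wants milliseconds
    return strtotime($dateString) * 1000;
}

print(rkGetEpochSketch("2019-03-12 14:05:00 UTC")."\n");
?>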

You can easily "manage" what returns the $restore variable by checking the content like this : 

if(isset($restore->status))
{
    print("Job status : ".$restore->status." (ID : ".$restore->id.")\n");
}
else
{
    print("Something went wrong -> ");
    print("Job status : ".$restore->message."\n");
    print("Check the Rubrik logs for further details.\n");
    print("\n");
}

This can be called in a loop to restore multiple databases (which is actually my case).

Here is a screen capture of a looped restore script : 

Again, excuse the red; it is only hiding the database names.

The logic within a loop is something like this : 

$Databases=array("db1", "db2", "db3", "db4", "db5");

// Target parameters
$targetHost="192.168.1.130";
$targetInstance="MSSQLSERVER";
$targetPath="E:\\MSSQL\\DATA";
[...]

$targetInstanceID=rkGetMSSQLInstanceID($clusterConnect,$targetInstance,$targetHost);
$dbCount=count($Databases);
for($i=0;$i<$dbCount;$i++)
{
    $currentDBcount=$i+1;
    print("Initiating recovery DB #".$currentDBcount."/".$dbCount." - ".rkColorOutput($Databases[$i])."\n");

    $msSQLID=rkGetMSSQLid($clusterConnect,$Databases[$i],$Host);
    $latestRecoveryPoint=rkGetTimeStamp(json_decode(rkGetSpecificMSSQL($clusterConnect,$msSQLID))->latestRecoveryPoint);
    $latestRecoveryPoint.="000"; // append milliseconds to the epoch timestamp

    $restore=rkMSSQLRestore(
        $clusterConnect,
        $msSQLID,
        $targetInstanceID,
        $Databases[$i],
        $latestRecoveryPoint,
        $targetPath
    );

    if(isset($restore->status))
    {
        print("Job status : ".rkColorOutput($restore->status)." (ID : ".rkColorOutput($restore->id).")\n");
    }
    else
    {
        print("Something went wrong -> ");
        print("Job status : ".rkColorRed($restore->message)."\n");
        print("Check the Rubrik logs for further details.\n");
        print("\n");
    }
}
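If you prefer the script to wait for the outcome instead of just firing the jobs, the answer of the restore call can usually be polled. Depending on your Atlas version, the JSON answer should also contain a links array with a self href pointing to the request status; this is an assumption on my side, so check the answer in your playground first. A hedged sketch (rkApiGetUrl is a made-up helper doing an authenticated GET on a full URL) :

<?php
// Hedged sketch : poll a restore job until it leaves the QUEUED/RUNNING states.
// Assumptions to verify on your cluster :
//   - the rkMSSQLRestore() answer contains a "links" array with a "self" href
//   - rkApiGetUrl() is a made-up helper doing a plain authenticated GET on that href
//   - the status values may differ between Atlas versions
function rkWaitForJob($clusterConnect, $restore)
{
    if(!isset($restore->links))
        return $restore;        // nothing to poll, give back what we already have

    $selfHref="";
    foreach($restore->links as $link)
        if($link->rel=="self") $selfHref=$link->href;

    if($selfHref=="")
        return $restore;

    do
    {
        sleep(10);              // do not hammer the brik
        $job=json_decode(rkApiGetUrl($clusterConnect, $selfHref));
        print("Job ".$restore->id." : ".$job->status."\n");
    } while(in_array($job->status, array("QUEUED","RUNNING")));

    return $job;
}
?>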

Current limitations

- If the database already exists on the target system, the job stops with an error message. This will be solved in the next Atlas release with an overwrite parameter. In the meantime, you can issue a DROP DATABASE [db_name]; before starting the recovery job (see the sketch after this list).
- There is no way to check in the Rubrik log whether the job is successful or not; you need to either schedule a custom report or check by yourself. There are some API calls that can be triggered, but they require a support token (and only the Rubrik support team can generate them).
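For the first limitation, dropping the target database from PHP right before firing the restore is easy if the pdo_sqlsrv extension is available on the machine running the script. A minimal sketch (host, login, password and database name below are placeholders to adapt) :

<?php
// Minimal sketch : drop the target database before the restore job.
// Requires the pdo_sqlsrv extension; host, login, password and DB name are placeholders.
$db="db1";
$pdo=new PDO("sqlsrv:Server=192.168.1.130;Database=master", "sa", "sql_password");
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$pdo->exec("IF DB_ID('".$db."') IS NOT NULL
            BEGIN
                ALTER DATABASE [".$db."] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
                DROP DATABASE [".$db."];
            END");
?>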



