
API-First Backup Architecture

“API-first backup” sounds like a solved problem—until you try to automate across more than one platform.

In a single-hypervisor environment, almost any backup solution with a REST API looks sufficient. But the moment your infrastructure spans VMware, OpenStack, Kubernetes, Proxmox, or OpenShift Virtualization, things start to break down. APIs behave differently. Features are inconsistently exposed. Automation workflows that worked in one environment fail in another. This is where most “API-first” claims fall apart.

The real question is not whether a backup platform has an API—it’s whether that API is:

  • complete
  • consistent
  • and usable across every platform you operate

In heterogeneous environments, anything less becomes an architectural constraint. This article breaks down what a genuinely API-driven backup architecture looks like in practice—and what it enables for teams building automation across multi-platform infrastructure.

The Two Kinds of Backup API: Bolt-On vs. First-Class

There is a meaningful technical difference between a backup platform that was built API-first and one that had an API layered on top after the core product shipped. The distinction matters in practice.

A bolt-on API is essentially a translation layer: it maps HTTP calls into internal function calls that were designed to be triggered by a GUI. The API tends to lag behind the UI in feature coverage. Operations exposed in the interface may not have API equivalents for months or years after release. Testing and quality assurance are applied to the GUI first; the API is validated downstream. Vendor support for bolt-on APIs is often limited, and documentation is frequently incomplete.

A first-class API, by contrast, is the actual mechanism through which the software controls itself. The web interface consumes the same endpoints exposed to external callers. Any capability available through the UI is available through the API at the same time it ships. The API is subject to the same test coverage and release validation as the core product. There is no feature latency, no undocumented internal surface, and no additional licensing required to access it.

Storware Backup and Recovery is built on this model. Every operation the management console performs — scheduling a backup job, assigning a policy, triggering a restore, querying inventory, monitoring task status — is executed via the same RESTful API published to customers and integration partners. The UI is a consumer of the API, not a replacement for it.

What the Storware REST API Actually Covers

The base URL for all API calls in Storware Backup and Recovery is:

https://STORWARE_SERVER:PORT/api

The API uses standard RESTful conventions — HTTP verbs (GET, POST, PUT, DELETE), JSON request and response bodies, session-based authentication via POST /session/login, and HTTPS on port 8181 (optionally exposed on 443). Every object in the platform is addressable by a GUID, which serves as the stable identifier across all operations.

Key operational domains exposed via the API include:

  • Inventory management: List and query all protected workloads across every connected hypervisor and cloud platform — VMs, containers, storage volumes, application instances. For multi-tenant environments, workloads can be filtered by tenantID, enabling per-project scoping in OpenStack environments.
  • Backup job orchestration: Create export and store tasks programmatically, with full control over backup window (windowStart/windowEnd as UNIX timestamps), job priority (0–100), backup type (FULL or INCREMENTAL), and target destination. The two-phase export-then-store model is fully exposed; store tasks are chained automatically from a successful export.
  • Policy and schedule management: Create, assign, and modify SLA protection policies through the API. Policies can be assigned at the infrastructure level — hypervisor, cluster, host — or per individual workload. Regex-based and tag-based auto-assignment rules are also configurable programmatically.
  • Restore operations: Trigger full VM restores, file-level recoveries, and Instant Restore mount operations entirely via API. Disk layout, network configuration, and target environment can all be specified at call time — enabling fully automated DR orchestration without manual intervention.
  • Task monitoring: Query the status and progress of any running or completed task by task ID. Suitable for polling-based integrations or webhook-style notifications from an orchestration layer.
  • Recovery Plan execution: Recovery Plans — multi-VM DR sequences with defined failover order, network remapping, and recovery validation — can be triggered and monitored via API. This enables full DR automation without a human at the console.
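As a concrete illustration of the job-orchestration parameters above, the following sketch builds the body of an on-demand export task. The field names `windowStart`, `windowEnd`, `priority`, and the FULL/INCREMENTAL backup types come from the list above; the surrounding payload structure (such as the `protectedEntities` key) is an assumption for the example, not the documented schema.

```python
# Hedged sketch: constructing an export-task payload using the parameters
# named in the article. The envelope keys around them are illustrative.
import time


def export_task_payload(vm_guid: str,
                        backup_type: str = "FULL",
                        priority: int = 50,
                        window_hours: int = 8) -> dict:
    """Build a backup (export) task body for one workload GUID."""
    if backup_type not in ("FULL", "INCREMENTAL"):
        raise ValueError("backupType must be FULL or INCREMENTAL")
    if not 0 <= priority <= 100:
        raise ValueError("priority must be in the range 0-100")
    now = int(time.time())
    return {
        "backupType": backup_type,
        "priority": priority,
        "windowStart": now,                        # UNIX timestamps
        "windowEnd": now + window_hours * 3600,
        "protectedEntities": [{"guid": vm_guid}],  # objects addressed by GUID
    }
```

On success, the store phase is chained automatically by the server, so the caller only submits the export and then monitors the task.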

The Heterogeneous Environment Problem That Single-Platform APIs Cannot Solve

Here is where the architectural conversation gets substantive. A backup platform that was designed primarily around one hypervisor — even if it has technically added others over time — tends to have a center-of-gravity problem. The API coverage for the primary platform is deep and mature. Coverage for everything else is thinner, added later, and often inconsistent in how it models equivalent concepts.

Consider what a backup automation engineer in a heterogeneous shop actually needs to build. They need to:

  • Enumerate protected workloads: across VMware vSphere, OpenStack Nova instances, OpenShift Virtualization VMs, Proxmox VE VMs, Nutanix AHV, and Kubernetes namespaces — using a single, consistent inventory model
  • Apply protection policies: using the same policy constructs regardless of the underlying hypervisor
  • Trigger and monitor backup jobs: with the same API shape whether the target is a vSphere VM or a Ceph RBD volume
  • Initiate restores: including cross-hypervisor restores and V2V migrations, through a unified restore API
  • Integrate with their ITSM, service catalog, or orchestration layer: via webhook or polling — without needing separate integration codebases for each platform

Storware Backup and Recovery was architected around this reality from the beginning. The platform currently supports VMware vSphere, Microsoft Hyper-V, Nutanix AHV, OpenStack (Vanilla, Red Hat, Canonical, and OpenStack-compatible), Red Hat OpenShift, OpenShift Virtualization, Proxmox VE, XCP-ng, XenServer, oVirt, Oracle Linux Virtualization Manager, VergeOS, SC//Platform, Kubernetes, Azure Stack HCI — plus Ceph RBD, file-level backup for Windows and Linux via OS Agent, and application-level backup via a customizable script framework. All of it is reachable through the same API surface.

This is not a feature list. It is an architectural commitment: the API is not a VMware-first interface with adapters bolted on for everything else. Every supported source goes through the same inventory model, the same policy engine, and the same task framework.

Practical Automation Patterns

Self-Service Backup Portals

The most common integration pattern is embedding backup operations into an existing self-service portal or IT service management platform — ServiceNow, Jira Service Management, a custom cloud portal, or an OpenStack Horizon extension. The Storware API provides all the primitives needed to build this: inventory queries, on-demand backup triggers, restore initiation, and task status monitoring. Users never need to interact with the Storware management console directly.

A typical self-service portal integration looks like this:

POST /session/login  →  authenticate
GET  /virtual-machines?tenantID={PROJECT_ID}  →  list user's VMs
POST /tasks/export  →  trigger on-demand backup
GET  /tasks/{taskId}  →  poll for completion
POST /tasks/restore  →  initiate restore from backup history
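The same sequence can be expressed as a small portal-side client. The endpoint paths mirror the listing above; the response shapes (a `guid` on created tasks, a `state` field on task status) are assumptions for this sketch. The HTTP transport is injected as a callable so the flow can be exercised without a live server.

```python
# Sketch of the self-service portal flow. Paths follow the article's
# listing; response field names ("guid", "state") are assumed.
import time


class PortalClient:
    def __init__(self, transport):
        # transport: callable (method, path, body) -> parsed JSON dict
        self.transport = transport

    def login(self, user: str, password: str) -> dict:
        return self.transport("POST", "/session/login",
                              {"login": user, "password": password})

    def list_vms(self, project_id: str) -> dict:
        return self.transport(
            "GET", f"/virtual-machines?tenantID={project_id}", None)

    def backup(self, vm_guid: str) -> str:
        """Trigger an on-demand backup; return the task GUID."""
        task = self.transport("POST", "/tasks/export", {"guid": vm_guid})
        return task["guid"]

    def wait(self, task_id: str, interval=2.0, timeout=600) -> dict:
        """Poll a task until it reaches a terminal state."""
        deadline = time.time() + timeout
        while time.time() < deadline:
            task = self.transport("GET", f"/tasks/{task_id}", None)
            if task.get("state") in ("SUCCESS", "FAILED"):
                return task
            time.sleep(interval)
        raise TimeoutError(f"task {task_id} did not finish in time")
```

A restore request follows the same pattern against `/tasks/restore`, with the backup to restore from selected out of the workload's history.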

Infrastructure as Code Integration

Teams managing infrastructure through Ansible, Terraform, or Pulumi can wire backup policy assignment directly into provisioning workflows. When a new VM is provisioned, an API call to Storware assigns the appropriate SLA policy before the workload is handed off. When a VM is decommissioned, the API updates retention and archival settings. Backup lifecycle follows the infrastructure lifecycle — without any manual step.
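A provisioning hook of this kind reduces to one extra step in the pipeline. In the sketch below, the policy-assignment endpoint path and payload keys are assumptions chosen for illustration; the point is the shape of the hook, not the exact schema, which the API reference defines.

```python
# Illustrative provisioning hook: after Terraform/Ansible creates a VM,
# assign a Storware SLA policy before handing the workload over.
# Endpoint path and payload keys below are assumed for the sketch.


def assign_policy_call(vm_guid: str, policy_guid: str) -> tuple:
    """Return (method, path, body) for a policy-assignment API call."""
    return ("POST",
            f"/policies/{policy_guid}/assign",
            {"entities": [{"guid": vm_guid}]})


def on_vm_provisioned(vm_guid: str, policy_guid: str, send):
    """Pipeline hook; `send(method, path, body)` performs the HTTP call."""
    method, path, body = assign_policy_call(vm_guid, policy_guid)
    return send(method, path, body)
```

The decommissioning path is symmetric: the same hook pattern calls the retention and archival endpoints when the VM is destroyed, so protection state never drifts from infrastructure state.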

OpenStack and OpenShift Native Integration

For organizations running OpenStack or OpenShift, the Storware API includes native project-scoped inventory filtering. OpenStack project IDs map directly to the tenantID parameter across inventory, backup, and restore calls. This makes it straightforward to build per-tenant backup self-service into existing OpenStack portals or Kubernetes operator frameworks without custom mapping logic.

Automated DR Testing

Recovery Plan execution via API enables scheduled DR testing that runs entirely without human intervention. A test orchestrator calls the Recovery Plan execution endpoint on a schedule, monitors task completion, captures the test report, and publishes the result to a monitoring dashboard. The organization gets documented, timestamped evidence that its recovery procedures actually work — not just theoretical coverage.
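The orchestrator loop described above is short enough to sketch end to end. The Recovery Plan trigger endpoint path and the `guid`/`state` response fields are assumptions here; the trigger-poll-publish structure is what the section describes.

```python
# Sketch of a scheduled, unattended DR test: trigger a Recovery Plan,
# poll to completion, publish the result. Paths and field names assumed.
import time


def run_dr_test(plan_guid: str, call, publish,
                poll_every=5.0, timeout=3600) -> dict:
    """Run one DR test cycle. `call(method, path, body)` does HTTP;
    `publish(report)` pushes the result to a monitoring dashboard."""
    task = call("POST", f"/recovery-plans/{plan_guid}/run", None)
    task_id = task["guid"]
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = call("GET", f"/tasks/{task_id}", None)
        if status.get("state") in ("SUCCESS", "FAILED"):
            publish({"plan": plan_guid,
                     "state": status["state"],
                     "finished": int(time.time())})  # timestamped evidence
            return status
        time.sleep(poll_every)
    raise TimeoutError(f"DR test for plan {plan_guid} did not finish")
```

Run from a scheduler, each cycle leaves a timestamped record, which is exactly the documented evidence auditors and DR reviews ask for.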

Licensing: No Additional Cost to Use the API

A detail that is frequently buried in fine print but matters enormously at scale: in Storware Backup and Recovery, there is no additional licensing cost to access or use the REST API. Full API access is included with every deployment under the standard universal license.

Storware’s universal licensing model means a single license covers all supported sources — whether you are protecting VMware VMs, OpenStack instances, Kubernetes namespaces, or bare-metal Linux systems via OS Agent. Adding a new hypervisor platform to your environment does not require a new license SKU, a new contract amendment, or a conversation with a sales team. The API covers everything the license covers, which is everything the platform supports.

This matters for automation specifically because API-driven workflows tend to scale. An integration that starts by automating backup requests for fifty VMs may eventually cover ten thousand. The licensing model should not become the ceiling that limits automation scope.

Architecture: What Sits Behind the API

For architects evaluating backup platforms, it is worth understanding what the Storware API is actually talking to. Requests sent to the Storware Server are executed by one or more Nodes — the data movers responsible for backup and restore operations. Node-level task scheduling, load balancing, and failover are managed by the server layer; the API caller does not need to know which Node is executing a given task.

Key architectural properties relevant to API-driven deployments:

  • Multi-node architecture: Tasks are distributed across available Nodes with automatic load balancing. Geographically distributed environments can have Nodes co-located with the workloads they protect, reducing data movement and WAN traffic.
  • Agentless by default: For all supported hypervisors and cloud platforms, backup operations are performed agentlessly — no software is deployed inside the VM being protected. This keeps the inventory model clean and avoids agent lifecycle management complexity in automation workflows.
  • Policy-based task assignment: Regex and tag-based policy auto-assignment means large inventories can be fully covered by policy without per-VM API calls. The API is used for exception handling and on-demand operations, not for managing every individual workload.
  • RBAC and Keycloak MFA: API authentication supports full role-based access control and integrates with Keycloak for MFA. Service accounts used by automation systems can be scoped to the minimum permissions required for their function.

Frequently Asked Questions

Does Storware Backup and Recovery have a publicly documented REST API?

Yes. The API is documented and accessible to all customers as part of the standard product. The base URL is https://STORWARE_SERVER:PORT/api. Storware also provides a generated Java client; contact support for the version matching your deployment.

Is the Storware API available without additional licensing?

Yes. Full REST API access is included in the standard Storware Backup and Recovery license. There are no additional API access tiers or surcharges.

Does the API work consistently across all supported hypervisors?

Yes. The same inventory model, policy assignment, task creation, and restore endpoints work across all supported platforms — VMware, OpenStack, OpenShift, Proxmox, Nutanix, VergeOS, Hyper-V, XCP-ng, and others. Platform-specific behavior is handled internally by the platform adapters; the API surface is uniform.

Can we use the API to build a multi-tenant self-service backup portal?

Yes. The API exposes a tenantID field on inventory and task objects, enabling per-tenant scoping for multi-tenant deployments. This is particularly relevant for OpenStack environments where project IDs map directly to tenantID values.

Can Recovery Plans be triggered and monitored via API?

Yes. Recovery Plan execution, status monitoring, and result retrieval are all available through the API, enabling fully automated DR test orchestration.

Conclusion: API-First Means Nothing If It Only Works for One Ecosystem

The argument for API-driven backup architecture is sound. Automation reduces human error. Programmatic policy enforcement scales in a way that manual configuration cannot. Integrating backup into provisioning and decommissioning workflows closes the protection gaps that appear when infrastructure moves faster than the backup team.

But the argument only holds if the API actually covers the full scope of the infrastructure being protected. An API that delivers deep automation for VMware and shallow, inconsistent coverage for everything else is not an API-first architecture — it is a VMware-first architecture with API endpoints for the other platforms. In environments where OpenStack, OpenShift, Proxmox, and Kubernetes run alongside vSphere, that distinction is not theoretical. It determines whether the automation project succeeds or gets abandoned halfway through.

Storware Backup and Recovery was built for the world as it actually exists: heterogeneous, multi-hypervisor, and increasingly open-source-dominated. The REST API reflects that architecture. The same endpoints, the same policy model, the same task framework — whether you are automating backup for a VMware cluster or a bare-metal OpenStack deployment.

That is what API-first actually means.

Written by:

Łukasz Błocki, Professional Services Architect