If you’re using StorSimple, one option you have is to migrate to Azure File Sync (AFS). However, it’s an arduous migration process, and AFS has limitations that make it more suitable for smaller organizations.
Another option you have is to replace StorSimple with HubStor.
StorSimple migration without egress
If your StorSimple implementation has been around for any length of time, the majority of your data is in Azure. Therefore, a chief concern you probably have is whether a replacement solution will incur egress costs and put a substantial strain on your network bandwidth.
The good news is that you can avoid egress when moving to HubStor.
Table of Contents
- How StorSimple migration to HubStor works
- Capturing deltas from StorSimple appliance on-premises
- Syncing original metadata to HubStor
- Making the switch from StorSimple
- The go-forward plan using HubStor
How StorSimple migration to HubStor works
HubStor is a data platform with a SaaS backbone running on Azure components such as blob storage. Installable services provide backup and archiving of data sources both on-premises and in the cloud.
For the cloud portion of your StorSimple footprint, we deploy a StorSimple virtual appliance and an instance of the HubStor Connector Service (HCS) on a Windows virtual machine in Azure. HubStor then leverages this infrastructure to capture and migrate the bulk of your StorSimple data.
Since the StorSimple virtual appliance and HubStor Connector Service instances run in the same Azure region as your StorSimple data, all the data transfer occurs within the cloud, so you avoid egressing the data back to your datacenter.
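To make the egress reasoning concrete, here is a minimal sketch of why keeping the transfer in-region avoids the charge. This is not a HubStor or Azure API; the region names and the per-GB rate are illustrative assumptions only, since actual Azure bandwidth pricing varies by region and tier.

```python
# Illustrative only: intra-region data transfer incurs no egress charge,
# while moving data out of the region is billed per GB. The rate below
# is an assumption for the example, not a quoted Azure price.

def egress_cost_gb(source_region: str, dest_region: str,
                   gigabytes: float, rate_per_gb: float = 0.08) -> float:
    """Return the data-transfer cost; zero when the move stays in-region."""
    if source_region == dest_region:
        return 0.0  # same-region transfer: no egress
    return gigabytes * rate_per_gb

# Migrating 50 TB of StorSimple blobs within one region costs no egress:
print(egress_cost_gb("eastus", "eastus", 50_000))        # 0.0
# Pulling the same data back to the datacenter would:
print(egress_cost_gb("eastus", "on-premises", 50_000))   # 4000.0
```

This is the economic case for running the virtual appliance and the HCS instance alongside the data rather than recalling everything over the wire.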
Capturing deltas from StorSimple appliance on-premises
The StorSimple appliance(s) in your datacenter likely hold data that is still local. A local instance of the HubStor Connector Service (HCS) can run on a Windows virtual machine in your datacenter to capture any deltas via the front-end Windows Server, pushing them securely to HubStor over your network connection.
Syncing original metadata to HubStor
Because the cloud-based capture reads blobs directly from storage, the directory structure, original timestamps, and permissions are missing from that portion of the migration.
During the delta capture process, HubStor first runs in what we call a “blobless capture” mode. HubStor has a unique blobless method specifically for StorSimple migration, which will match up the original metadata with the blobs captured via the cloud-based virtual appliance interface.
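The core idea behind blobless capture can be sketched as a join between the metadata gathered through the file share and the blobs captured in the cloud. The field names and the matching key (relative path) below are assumptions for illustration; HubStor's actual implementation is not public.

```python
# Hypothetical sketch: attach original timestamps and permissions
# (captured on-premises) to blobs captured via the cloud appliance,
# joining on relative file path. Record layout is assumed.

def merge_metadata(blobs: dict, metadata: list) -> list:
    """Pair each metadata record with its captured blob by path."""
    merged = []
    for record in metadata:
        blob_id = blobs.get(record["path"])  # join on relative path
        if blob_id is not None:
            merged.append({**record, "blob_id": blob_id})
    return merged

blobs = {"finance/q1.xlsx": "blob-001", "hr/policy.docx": "blob-002"}
metadata = [
    {"path": "finance/q1.xlsx", "mtime": "2016-03-01", "acl": "DOMAIN\\finance:R"},
    {"path": "hr/policy.docx", "mtime": "2015-11-20", "acl": "DOMAIN\\hr:RW"},
]
print(merge_metadata(blobs, metadata))
```

The result is a record per file that carries both the blob reference and the original filesystem metadata.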
Making the switch from StorSimple
With the full dataset from StorSimple captured into HubStor, we can now make the switch to a new, cloud-enabled storage solution with HubStor serving as its underlying backup, archive, and cloud tiering engine.
One of the cool features of HubStor is that it eliminates the need to migrate the entire dataset to a new storage array physically.
Instead, your data can present virtually in a new file directory, avoiding the need to migrate the entire data set while still allowing users and apps to view and recall files.
Presenting your data set virtually in a new path involves using the HubStor Export Utility to run two operations, as follows:
- Job #1: Run a restore job as seamless pointers – We first run a job that recreates the file directory, permissions, and files as a pure metadata operation. Directories and permissions are metadata by nature; for the files themselves, HubStor uses offline objects so the data presents in a metadata-only manner.
- Job #2: Optionally, rehydrate the most recent data physically (e.g., data last modified within 30 days). This step is entirely configurable and is only necessary if you wish to minimize initial recalls from the cloud once you point your users and apps to the new storage location.
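The optional rehydration policy in Job #2 can be sketched as a simple age filter over the data set. The 30-day window comes from the example above; the record layout and function name are assumptions for illustration.

```python
# Sketch of the configurable Job #2 policy: physically rehydrate only
# items modified within the chosen window, leaving the rest as pointers.
from datetime import datetime, timedelta

def select_for_rehydration(items, now, max_age_days=30):
    """Return the items whose last-modified date falls within the window."""
    cutoff = now - timedelta(days=max_age_days)
    return [i for i in items if i["modified"] >= cutoff]

now = datetime(2019, 6, 1)
items = [
    {"path": "reports/may.pdf", "modified": datetime(2019, 5, 20)},
    {"path": "archive/2014.zip", "modified": datetime(2014, 1, 5)},
]
print(select_for_rehydration(items, now))  # only reports/may.pdf qualifies
```

Widening or narrowing `max_age_days` trades local disk consumption against the volume of initial recalls from the cloud.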
The go-forward plan using HubStor
With the dataset now presenting in the new storage location as mostly pointer files, you can direct users and refactor apps to use the new file directory.
HubStor supports Linux-based storage arrays, Windows Server clusters, or a storage architecture/device with a Windows front end.
HubStor works behind the scenes to cloud-enable the new storage location, providing an incremental backup of any changes and new files, tiering data as it ages, and delivering recall of tiered items with local caching.