To run these deployment options, you first need:
- an existing Azure ML workspace (see cookbook)
- permissions to create resources, set permissions, and create identities in this subscription (or at least in one resource group),
- Note that to set permissions, you typically need the Owner role on the subscription or resource group; the Contributor role is not enough. This is key to being able to secure the setup. A quick way to check your role with the Azure CLI is sketched after this list.
- Optional: install the Azure CLI.
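If you have the Azure CLI installed, a minimal sketch like the following can help verify that you hold the Owner role on the target resource group (the resource group name is a placeholder to replace with your own):

```bash
# Look up the object id of the signed-in user
# (older Azure CLI versions may expose this field as "objectId" instead of "id").
USER_ID=$(az ad signed-in-user show --query id -o tsv)

# List the role assignments for that user scoped to the resource group;
# the output should include "Owner" for the setup described above.
az role assignment list \
  --assignee "$USER_ID" \
  --resource-group <resource group name> \
  --query "[].roleDefinitionName" -o tsv
```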
Note: both orchestrator and silo can be deployed using the same ARM/Bicep script, changing the Pair Base Name accordingly; see the example at the end of this section.
Adjust the parameters, in particular:
- Region: this will be set by Azure to the region of your resource group.
- Machine Learning Name: needs to match the name of the AzureML workspace in the resource group.
- Machine Learning Region: the region in which the AzureML workspace was deployed (default: same as resource group).
- Pair Region: the region where the compute and storage will be deployed (default: same as resource group).
- Pair Base Name: a unique name for the orchestrator, for example `orch`. This will be used to name all the other resources (storage name, compute name, etc.).
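If you prefer to keep these values in a file rather than pass them on the command line, the same parameters can be written as a standard ARM parameter file. Below is a minimal sketch under that assumption; the file name orchestrator.parameters.json and the values shown are examples to adapt to your setup:

```bash
# Example ARM parameter file equivalent to the command-line parameters below
# (file name and values are placeholders, adapt them to your setup).
cat > orchestrator.parameters.json <<'EOF'
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "pairBaseName": { "value": "orch" },
    "pairRegion": { "value": "eastus" },
    "machineLearningName": { "value": "aml-fldemo" },
    "machineLearningRegion": { "value": "eastus" }
  }
}
EOF

# The file can then be passed to the deployment instead of inline parameters.
az deployment group create \
  --template-file ./mlops/bicep/modules/fl_pairs/open_compute_storage_pair.bicep \
  --resource-group <resource group name> \
  --parameters @orchestrator.parameters.json
```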
In the resource group of your AzureML workspace, use the following command with parameters corresponding to your setup:
```bash
az deployment group create \
  --template-file ./mlops/bicep/modules/fl_pairs/open_compute_storage_pair.bicep \
  --resource-group <resource group name> \
  --parameters pairBaseName="orch" pairRegion="eastus" machineLearningName="aml-fldemo" machineLearningRegion="eastus"
```
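As noted above, a silo can be provisioned with the same script by changing the Pair Base Name (and, if desired, the Pair Region). A sketch of a second invocation, where the name silo1 and the region westus are only example choices:

```bash
# Same Bicep script, different pair base name (and optionally a different region)
# to provision a silo instead of the orchestrator; values below are examples.
az deployment group create \
  --template-file ./mlops/bicep/modules/fl_pairs/open_compute_storage_pair.bicep \
  --resource-group <resource group name> \
  --parameters pairBaseName="silo1" pairRegion="westus" machineLearningName="aml-fldemo" machineLearningRegion="eastus"
```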