Verb | Description |
---|---|
`Spark.impex.exp(data)` | Export Spark entities (versions, services, or folders). |
`Spark.impex.imp(data)` | Import exported Spark entities into your workspace. |
This method relies on the Export API to export Spark entities from a tenant workspace. You may choose to export specific versions, services, or folders, or a combination of them; they are packaged into a ZIP file that is downloaded to your local machine.
The expected keyword arguments are as follows:
Property | Type | Description |
---|---|---|
`folders` | `None \| list[str]` | 1+ folder name(s). |
`services` | `None \| list[str]` | 1+ service URI(s). |
`version_ids` | `None \| list[str]` | 1+ version UUID(s) of the desired service. |
`file_filter` | `'migrate' \| 'onpremises'` | For data migration or hybrid deployments (defaults to `migrate`). |
`version_filter` | `'latest' \| 'all'` | Which version of each file to export (defaults to `latest`). |
`source_system` | `None \| str` | Source system name to export from (e.g., `Spark Python SDK`). |
`correlation_id` | `None \| str` | Correlation ID for the export (useful for tagging). |
`max_retries` | `None \| int` | Maximum number of retries when checking the export status. |
`retry_interval` | `None \| float` | Interval between status check retries in seconds. |
Note
Remember that a service URI can be one of the following:

- `{folder}/{service}[?{version}]`
- `service/{service_id}`
- `version/{version_id}`

Check out the API reference for more information.
```python
spark.impex.exp(
    services=['my-folder/my-service[0.4.2]', 'my-other-folder/my-service-2'],
    file_filter='onpremises',
    max_retries=5,
    retry_interval=3,
)
```
When successful, this method returns an array of exported entities, where each entity is an `HttpResponse` object whose buffer contains the exported entity.
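For instance, you might persist each buffer to disk. A minimal sketch, assuming the exported bytes are exposed as a `buffer` attribute on each response (the attribute name is illustrative):

```python
# Minimal sketch: write each exported entity's buffer to a local ZIP file.
# The `buffer` attribute name is an assumption for illustration.
downloadables = spark.impex.exp(services=['my-folder/my-service'])

for index, response in enumerate(downloadables):
    with open(f'exported-{index}.zip', 'wb') as file:
        file.write(response.buffer)
```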
Tip
This method is transactional. It will initiate an export job, poll its status until it completes, and download the exported files. If you need more control over these steps, consider using the `exports` resource directly. You may use the following methods:

- `Spark.impex.exports.initiate(data)` creates an export job.
- `Spark.impex.exports.get_status(job_id)` gets an export job's status.
- `Spark.impex.exports.download(urls)` downloads the exported files as a ZIP.
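A hedged sketch of that manual flow is shown below. The method names come from the list above; the payload shape and the `id`, `status`, and `urls` fields on the job and status responses are assumptions for illustration.

```python
import time

# Sketch only: the method names are documented above, but the payload
# shape and the `id`/`status`/`urls` fields are assumed for illustration.
job = spark.impex.exports.initiate({'services': ['my-folder/my-service']})

status = spark.impex.exports.get_status(job['id'])
while status['status'] not in ('completed', 'closed'):
    time.sleep(3)  # wait between status checks
    status = spark.impex.exports.get_status(job['id'])

archive = spark.impex.exports.download(status['urls'])  # exported files as a ZIP
```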
This method lets you import exported Spark entities into your workspace. Note that only entities that were exported for data migration (i.e., with `file_filter='migrate'`) can be imported back into Spark.
The expected keyword arguments are as follows:
Property | Type | Description |
---|---|---|
`file` | `BinaryIO` | The ZIP file containing the exported entities. |
`destination` | `str \| List[str] \| Mapping[str, str] \| List[Mapping[str, str]]` | The destination service URI(s). |
`if_present` | `'abort' \| 'replace' \| 'add_version'` | What to do if the entity already exists in the destination (defaults to `add_version`). |
`source_system` | `None \| str` | Source system name to import from (e.g., `Spark Python SDK`). |
`correlation_id` | `None \| str` | Correlation ID for the import (useful for tagging). |
`max_retries` | `None \| int` | Maximum number of retries when checking the import status. |
`retry_interval` | `None \| float` | Interval between status check retries in seconds. |
The `destination` folder should exist in the target workspace before you import the entities. You may define how to map the exported entities to the destination tenant by providing any of the formats indicated below:

- when `str` or `List[str]`, the SDK assumes that the destination service URIs are the same as the source service URIs;
- when `Mapping[str, str]` or `List[Mapping[str, str]]`, the SDK expects a mapping of source service URIs to destination service URIs as shown in the table below.
Property | Type | Description |
---|---|---|
`source` | `str` | The service URI of the source tenant. |
`target` | `str \| None` | The service URI of the destination tenant (defaults to `source`). |
`upgrade` | `'major' \| 'minor' \| 'patch'` | The version upgrade strategy (defaults to `minor`). |
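To illustrate the accepted shapes, each of the following is a valid `destination` value (a sketch based on the types above):

```python
# str: the destination URI mirrors the source URI.
destination = 'my-folder/my-service'

# List[str]: same rule, applied to several services.
destination = ['my-folder/my-service', 'my-other-folder/my-service-2']

# Mapping[str, str]: remap a source URI onto the destination tenant.
destination = {'source': 'my-folder/my-service', 'target': 'this-folder/my-service'}

# List[Mapping[str, str]]: remap several services, with an optional upgrade strategy.
destination = [
    {'source': 'my-folder/my-service', 'target': 'this-folder/my-service', 'upgrade': 'patch'},
    {'source': 'my-other-folder/my-service-2'},  # `target` defaults to `source`
]
```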
Check out the API reference for more information.
```python
spark.impex.imp(
    destination={'source': 'my-folder/my-service', 'target': 'this-folder/my-service', 'upgrade': 'patch'},
    file=open('exported.zip', 'rb'),
    max_retries=7,
    retry_interval=3,
)
```
When successful, this method returns a JSON payload containing the import summary and the imported entities that have been created/mapped in the destination tenant. See the sample response below.
```json
{
  "object": "import",
  "id": "uuid",
  "response_timestamp": "1970-12-03T04:56:56.186Z",
  "status": "closed",
  "status_url": "https://excel.my-env.coherent.global/my-tenant/api/v4/import/job-uuid/status",
  "process_time": 123,
  "outputs": {
    "services": [
      {
        "service_uri_source": "my-folder/my-service",
        "folder_source": "my-folder",
        "service_source": "my-service",
        "folder_destination": "my-folder",
        "service_destination": "my-service",
        "service_uri_destination": "my-folder/my-service",
        "service_id_destination": "uuid",
        "status": "added"
      }
    ],
    "service_versions": [
      {
        "service_uri_source": "my-folder/my-service[0.1.0]",
        "folder_source": "my-folder",
        "service_source": "my-service",
        "version_source": "0.1.0",
        "version_id_source": "uuid",
        "folder_destination": "my-folder",
        "service_destination": "my-service",
        "version_destination": "0.1.0",
        "service_uri_destination": "my-folder/my-service[0.1.0]",
        "service_id_destination": "uuid",
        "version_id_destination": "uuid",
        "status": "added"
      },
      {
        "service_uri_source": "my-folder/my-service[0.2.0]",
        "folder_source": "my-folder",
        "service_source": "my-service",
        "version_source": "0.2.0",
        "version_id_source": "uuid",
        "folder_destination": "my-folder",
        "service_destination": "my-service",
        "version_destination": "0.2.0",
        "service_uri_destination": "my-folder/my-service[0.2.0]",
        "service_id_destination": "uuid",
        "version_id_destination": "uuid",
        "status": "added"
      }
    ]
  },
  "errors": null,
  "warnings": [],
  "source_system": "Spark Python SDK",
  "correlation_id": null
}
```
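Because the summary is plain JSON, you can inspect it programmatically. A short sketch, assuming the SDK hands back the payload as a Python dict shaped like the sample above:

```python
# Sketch: report each imported version's status and fail fast on errors.
# Assumes `imp` returns the summary as a dict shaped like the sample above.
summary = spark.impex.imp(file=open('exported.zip', 'rb'), destination='my-folder/my-service')

for version in summary['outputs']['service_versions']:
    print(version['service_uri_destination'], '->', version['status'])

if summary['errors']:
    raise RuntimeError(f"import failed: {summary['errors']}")
```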
Being transactional, this method will create an import job and poll its status continuously until the import process completes. You may consider using the `imports` resource directly and control the import process manually:

- `Spark.impex.imports.initiate(data)` creates an import job.
- `Spark.impex.imports.get_status(job_id)` gets an import job's status.
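A hedged sketch of that manual flow, mirroring the export example above (the payload shape and the `id`/`status` fields on the job and status responses are again assumptions):

```python
import time

# Sketch only: method names are documented above; the payload shape and
# the `id`/`status` fields are assumed for illustration.
with open('exported.zip', 'rb') as file:
    job = spark.impex.imports.initiate({'file': file, 'destination': 'my-folder/my-service'})

status = spark.impex.imports.get_status(job['id'])
while status['status'] not in ('completed', 'closed'):
    time.sleep(3)  # wait between status checks
    status = spark.impex.imports.get_status(job['id'])
```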
Remember that exporting and importing entities is a time-consuming process. Be sure to allow enough retries, with long enough intervals between them, to avoid timeouts.