
Commit

feat(fabric_spark_workspace_settings): add more properties to rs/ds (#201)

# 📥 Pull Request

## ❓ What are you trying to address

This pull request introduces new properties to the
`fabric_spark_workspace_settings` data source and resource, enhancing
their capabilities. The changes add new attributes for managing
job settings and notebook pipeline runs, along with updates to the
documentation and schema definitions.

## ✨ Description of new changes

Enhancements to `fabric_spark_workspace_settings`:

* Added new properties to the `fabric_spark_workspace_settings` data
source and resource:
  - `high_concurrency.notebook_pipeline_run_enabled` (Boolean)
  - `job.conservative_job_admission_enabled` (Boolean)
  - `job.session_timeout_in_minutes` (Number)
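
Taken together, the new attributes can be exercised with a minimal configuration along these lines (the workspace ID is a placeholder, not taken from this PR):

```terraform
resource "fabric_spark_workspace_settings" "example" {
  workspace_id = "00000000-0000-0000-0000-000000000000" # placeholder

  high_concurrency = {
    notebook_interactive_run_enabled = true
    notebook_pipeline_run_enabled    = true # new in this PR
  }

  job = {
    conservative_job_admission_enabled = true # new in this PR
    session_timeout_in_minutes         = 60   # new in this PR; max 20160 (14 days)
  }
}
```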
DariuszPorowski authored Jan 23, 2025
1 parent 991f704 commit 857d238
Showing 9 changed files with 194 additions and 17 deletions.
9 changes: 9 additions & 0 deletions .changes/unreleased/added-20250116-152025.yaml
@@ -0,0 +1,9 @@
kind: added
body: |
Added additional properties for `fabric_spark_workspace_settings` Data-Source and Resource:
- `high_concurrency.notebook_pipeline_run_enabled` (Boolean)
- `job.conservative_job_admission_enabled` (Boolean)
- `job.session_timeout_in_minutes` (Number)
time: 2025-01-16T15:20:25.9324812-08:00
custom:
Issue: "201"
11 changes: 11 additions & 0 deletions docs/data-sources/spark_workspace_settings.md
@@ -41,6 +41,7 @@ data "fabric_spark_workspace_settings" "example" {
- `environment` (Attributes) Environment properties. (see [below for nested schema](#nestedatt--environment))
- `high_concurrency` (Attributes) High Concurrency properties. (see [below for nested schema](#nestedatt--high_concurrency))
- `id` (String) The ID of this resource.
- `job` (Attributes) (see [below for nested schema](#nestedatt--job))
- `pool` (Attributes) Pool properties. (see [below for nested schema](#nestedatt--pool))

<a id="nestedatt--timeouts"></a>
@@ -75,6 +76,16 @@ Read-Only:
Read-Only:

- `notebook_interactive_run_enabled` (Boolean) The status of the high concurrency for notebook interactive run. `false` - Disabled, `true` - Enabled.
- `notebook_pipeline_run_enabled` (Boolean) The status of the high concurrency for notebook pipeline run. `false` - Disabled, `true` - Enabled.

<a id="nestedatt--job"></a>

### Nested Schema for `job`

Read-Only:

- `conservative_job_admission_enabled` (Boolean) Reserve maximum cores for active Spark jobs. When this setting is enabled, your Fabric capacity reserves the maximum number of cores needed for active Spark jobs, ensuring job reliability by making sure that cores are available if a job scales up. When this setting is disabled, jobs are started based on the minimum number of cores needed, letting more jobs run at the same time. `false` - Disabled, `true` - Enabled.
- `session_timeout_in_minutes` (Number) Time to terminate inactive Spark sessions. The maximum is 14 days (20160 minutes).
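
As a sketch, the new read-only values can be surfaced from the data source like so (the workspace ID is a placeholder):

```terraform
data "fabric_spark_workspace_settings" "example" {
  workspace_id = "00000000-0000-0000-0000-000000000000" # placeholder
}

# Expose the new job settings as outputs
output "conservative_job_admission_enabled" {
  value = data.fabric_spark_workspace_settings.example.job.conservative_job_admission_enabled
}

output "session_timeout_in_minutes" {
  value = data.fabric_spark_workspace_settings.example.job.session_timeout_in_minutes
}
```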

<a id="nestedatt--pool"></a>

17 changes: 17 additions & 0 deletions docs/resources/spark_workspace_settings.md
@@ -41,6 +41,12 @@ resource "fabric_spark_workspace_settings" "example" {
*/
}
job = {
/*
your settings here
*/
}
pool = {
/*
your settings here
@@ -91,6 +97,7 @@ resource "fabric_spark_workspace_settings" "example2" {
- `automatic_log` (Attributes) Automatic Log properties. (see [below for nested schema](#nestedatt--automatic_log))
- `environment` (Attributes) Environment properties. (see [below for nested schema](#nestedatt--environment))
- `high_concurrency` (Attributes) High Concurrency properties. (see [below for nested schema](#nestedatt--high_concurrency))
- `job` (Attributes) Jobs properties. (see [below for nested schema](#nestedatt--job))
- `pool` (Attributes) Pool properties. (see [below for nested schema](#nestedatt--pool))
- `timeouts` (Attributes) (see [below for nested schema](#nestedatt--timeouts))

@@ -122,6 +129,16 @@ Optional:
Optional:

- `notebook_interactive_run_enabled` (Boolean) The status of the high concurrency for notebook interactive run. `false` - Disabled, `true` - Enabled.
- `notebook_pipeline_run_enabled` (Boolean) The status of the high concurrency for notebook pipeline run. `false` - Disabled, `true` - Enabled.

<a id="nestedatt--job"></a>

### Nested Schema for `job`

Optional:

- `conservative_job_admission_enabled` (Boolean) Reserve maximum cores for active Spark jobs. When this setting is enabled, your Fabric capacity reserves the maximum number of cores needed for active Spark jobs, ensuring job reliability by making sure that cores are available if a job scales up. When this setting is disabled, jobs are started based on the minimum number of cores needed, letting more jobs run at the same time. `false` - Disabled, `true` - Enabled.
- `session_timeout_in_minutes` (Number) Time to terminate inactive Spark sessions. The maximum is 14 days (20160 minutes).
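
Since both attributes are optional, a `job` block can be set on its own; values above 20160 minutes fail schema validation. A minimal sketch (the workspace ID is a placeholder):

```terraform
resource "fabric_spark_workspace_settings" "example" {
  workspace_id = "00000000-0000-0000-0000-000000000000" # placeholder

  job = {
    # terminate inactive sessions after 2 hours (limit: 20160 minutes / 14 days)
    session_timeout_in_minutes = 120
  }
}
```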

<a id="nestedatt--pool"></a>

@@ -20,6 +20,12 @@ resource "fabric_spark_workspace_settings" "example" {
*/
}

job = {
/*
your settings here
*/
}

pool = {
/*
your settings here
18 changes: 18 additions & 0 deletions internal/services/spark/data_spark_workspace_settings.go
@@ -87,6 +87,24 @@ func (d *dataSourceSparkWorkspaceSettings) Schema(ctx context.Context, _ datasou
MarkdownDescription: "The status of the high concurrency for notebook interactive run. `false` - Disabled, `true` - Enabled.",
Computed: true,
},
"notebook_pipeline_run_enabled": schema.BoolAttribute{
MarkdownDescription: "The status of the high concurrency for notebook pipeline run. `false` - Disabled, `true` - Enabled.",
Computed: true,
},
},
},
"job": schema.SingleNestedAttribute{
Computed: true,
CustomType: supertypes.NewSingleNestedObjectTypeOf[jobPropertiesModel](ctx),
Attributes: map[string]schema.Attribute{
"conservative_job_admission_enabled": schema.BoolAttribute{
MarkdownDescription: "Reserve maximum cores for active Spark jobs. When this setting is enabled, your Fabric capacity reserves the maximum number of cores needed for active Spark jobs, ensuring job reliability by making sure that cores are available if a job scales up. When this setting is disabled, jobs are started based on the minimum number of cores needed, letting more jobs run at the same time. `false` - Disabled, `true` - Enabled.",
Computed: true,
},
"session_timeout_in_minutes": schema.Int32Attribute{
MarkdownDescription: "Time to terminate inactive Spark sessions. The maximum is 14 days (20160 minutes).",
Computed: true,
},
},
},
"pool": schema.SingleNestedAttribute{
30 changes: 17 additions & 13 deletions internal/services/spark/data_spark_workspace_settings_test.go
@@ -18,27 +18,31 @@
)

func TestAcc_SparkWorkspaceSettingsDataSource(t *testing.T) {
capacity := testhelp.WellKnown()["Capacity"].(map[string]any)
capacityID := capacity["id"].(string)

workspaceResourceHCL, workspaceResourceFQN := testhelp.TestAccWorkspaceResource(t, capacityID)
workspace := testhelp.WellKnown()["WorkspaceDS"].(map[string]any)
workspaceID := workspace["id"].(string)

resource.ParallelTest(t, testhelp.NewTestAccCase(t, &testDataSourceSparkWorkspaceSettingsFQN, nil, []resource.TestStep{
// read
{
ResourceName: testDataSourceSparkWorkspaceSettingsFQN,
Config: at.JoinConfigs(
workspaceResourceHCL,
at.CompileConfig(
testDataSourceSparkWorkspaceSettingsHeader,
map[string]any{
"workspace_id": testhelp.RefByFQN(workspaceResourceFQN, "id"),
},
)),
Config: at.CompileConfig(
testDataSourceSparkWorkspaceSettingsHeader,
map[string]any{
"workspace_id": workspaceID,
},
),
Check: resource.ComposeAggregateTestCheckFunc(
resource.TestCheckResourceAttrSet(testDataSourceSparkWorkspaceSettingsFQN, "workspace_id"),
resource.TestCheckResourceAttr(testDataSourceSparkWorkspaceSettingsFQN, "workspace_id", workspaceID),
resource.TestCheckResourceAttrSet(testDataSourceSparkWorkspaceSettingsFQN, "id"),
resource.TestCheckResourceAttr(testDataSourceSparkWorkspaceSettingsFQN, "automatic_log.enabled", "true"),
resource.TestCheckResourceAttr(testDataSourceSparkWorkspaceSettingsFQN, "high_concurrency.notebook_interactive_run_enabled", "true"),
resource.TestCheckResourceAttr(testDataSourceSparkWorkspaceSettingsFQN, "high_concurrency.notebook_pipeline_run_enabled", "false"),
resource.TestCheckResourceAttr(testDataSourceSparkWorkspaceSettingsFQN, "pool.customize_compute_enabled", "true"),
resource.TestCheckResourceAttr(testDataSourceSparkWorkspaceSettingsFQN, "pool.default_pool.name", "Starter Pool"),
resource.TestCheckResourceAttr(testDataSourceSparkWorkspaceSettingsFQN, "pool.default_pool.type", "Workspace"),
resource.TestCheckResourceAttr(testDataSourceSparkWorkspaceSettingsFQN, "environment.runtime_version", "1.3"),
resource.TestCheckResourceAttr(testDataSourceSparkWorkspaceSettingsFQN, "job.conservative_job_admission_enabled", "false"),
resource.TestCheckResourceAttr(testDataSourceSparkWorkspaceSettingsFQN, "job.session_timeout_in_minutes", "20"),
),
},
},
61 changes: 58 additions & 3 deletions internal/services/spark/models_spark_workspace_settings.go
@@ -32,6 +32,7 @@ type baseSparkWorkspaceSettingsModel struct {
AutomaticLog supertypes.SingleNestedObjectValueOf[automaticLogPropertiesModel] `tfsdk:"automatic_log"`
Environment supertypes.SingleNestedObjectValueOf[environmentPropertiesModel] `tfsdk:"environment"`
HighConcurrency supertypes.SingleNestedObjectValueOf[highConcurrencyPropertiesModel] `tfsdk:"high_concurrency"`
Job supertypes.SingleNestedObjectValueOf[jobPropertiesModel] `tfsdk:"job"`
Pool supertypes.SingleNestedObjectValueOf[poolPropertiesModel] `tfsdk:"pool"`
}

@@ -76,6 +77,19 @@ func (to *baseSparkWorkspaceSettingsModel) set(ctx context.Context, from fabspar

to.HighConcurrency = highConcurrency

job := supertypes.NewSingleNestedObjectValueOfNull[jobPropertiesModel](ctx)

if from.Job != nil {
jobModel := &jobPropertiesModel{}
jobModel.set(from.Job)

if diags := job.Set(ctx, jobModel); diags.HasError() {
return diags
}
}

to.Job = job

pool := supertypes.NewSingleNestedObjectValueOfNull[poolPropertiesModel](ctx)

if from.Pool != nil {
@@ -115,10 +129,22 @@ func (to *environmentPropertiesModel) set(from *fabspark.EnvironmentProperties)

type highConcurrencyPropertiesModel struct {
NotebookInteractiveRunEnabled types.Bool `tfsdk:"notebook_interactive_run_enabled"`
NotebookPipelineRunEnabled types.Bool `tfsdk:"notebook_pipeline_run_enabled"`
}

func (to *highConcurrencyPropertiesModel) set(from *fabspark.HighConcurrencyProperties) {
to.NotebookInteractiveRunEnabled = types.BoolPointerValue(from.NotebookInteractiveRunEnabled)
to.NotebookPipelineRunEnabled = types.BoolPointerValue(from.NotebookPipelineRunEnabled)
}

type jobPropertiesModel struct {
ConservativeJobAdmissionEnabled types.Bool `tfsdk:"conservative_job_admission_enabled"`
SessionTimeoutInMinutes types.Int32 `tfsdk:"session_timeout_in_minutes"`
}

func (to *jobPropertiesModel) set(from *fabspark.JobsProperties) {
to.ConservativeJobAdmissionEnabled = types.BoolPointerValue(from.ConservativeJobAdmissionEnabled)
to.SessionTimeoutInMinutes = types.Int32PointerValue(from.SessionTimeoutInMinutes)
}

type poolPropertiesModel struct {
@@ -226,10 +252,39 @@ func (to *requestUpdateSparkWorkspaceSettings) set(ctx context.Context, from res
return diags
}

var reqHighConcurrency fabspark.HighConcurrencyProperties

if !highConcurrency.NotebookInteractiveRunEnabled.IsNull() && !highConcurrency.NotebookInteractiveRunEnabled.IsUnknown() {
to.HighConcurrency = &fabspark.HighConcurrencyProperties{
NotebookInteractiveRunEnabled: highConcurrency.NotebookInteractiveRunEnabled.ValueBoolPointer(),
}
reqHighConcurrency.NotebookInteractiveRunEnabled = highConcurrency.NotebookInteractiveRunEnabled.ValueBoolPointer()
}

if !highConcurrency.NotebookPipelineRunEnabled.IsNull() && !highConcurrency.NotebookPipelineRunEnabled.IsUnknown() {
reqHighConcurrency.NotebookPipelineRunEnabled = highConcurrency.NotebookPipelineRunEnabled.ValueBoolPointer()
}

if reqHighConcurrency != (fabspark.HighConcurrencyProperties{}) {
to.HighConcurrency = &reqHighConcurrency
}
}

if !from.Job.IsNull() && !from.Job.IsUnknown() {
job, diags := from.Job.Get(ctx)
if diags.HasError() {
return diags
}

var reqJob fabspark.JobsProperties

if !job.ConservativeJobAdmissionEnabled.IsNull() && !job.ConservativeJobAdmissionEnabled.IsUnknown() {
reqJob.ConservativeJobAdmissionEnabled = job.ConservativeJobAdmissionEnabled.ValueBoolPointer()
}

if !job.SessionTimeoutInMinutes.IsNull() && !job.SessionTimeoutInMinutes.IsUnknown() {
reqJob.SessionTimeoutInMinutes = job.SessionTimeoutInMinutes.ValueInt32Pointer()
}

if reqJob != (fabspark.JobsProperties{}) {
to.Job = &reqJob
}
}

40 changes: 40 additions & 0 deletions internal/services/spark/resource_spark_workspace_settings.go
@@ -8,6 +8,7 @@ import (
"fmt"

"github.com/hashicorp/terraform-plugin-framework-timeouts/resource/timeouts"
"github.com/hashicorp/terraform-plugin-framework-validators/int32validator"
"github.com/hashicorp/terraform-plugin-framework-validators/resourcevalidator"
"github.com/hashicorp/terraform-plugin-framework-validators/stringvalidator"
"github.com/hashicorp/terraform-plugin-framework/diag"
@@ -138,6 +139,44 @@ func (r *resourceSparkWorkspaceSettings) Schema(ctx context.Context, _ resource.
boolplanmodifier.UseStateForUnknown(),
},
},
"notebook_pipeline_run_enabled": schema.BoolAttribute{
MarkdownDescription: "The status of the high concurrency for notebook pipeline run. `false` - Disabled, `true` - Enabled.",
Optional: true,
Computed: true,
PlanModifiers: []planmodifier.Bool{
boolplanmodifier.UseStateForUnknown(),
},
},
},
},
"job": schema.SingleNestedAttribute{
MarkdownDescription: "Jobs properties.",
Optional: true,
Computed: true,
CustomType: supertypes.NewSingleNestedObjectTypeOf[jobPropertiesModel](ctx),
PlanModifiers: []planmodifier.Object{
objectplanmodifier.UseStateForUnknown(),
},
Attributes: map[string]schema.Attribute{
"conservative_job_admission_enabled": schema.BoolAttribute{
MarkdownDescription: "Reserve maximum cores for active Spark jobs. When this setting is enabled, your Fabric capacity reserves the maximum number of cores needed for active Spark jobs, ensuring job reliability by making sure that cores are available if a job scales up. When this setting is disabled, jobs are started based on the minimum number of cores needed, letting more jobs run at the same time. `false` - Disabled, `true` - Enabled.",
Optional: true,
Computed: true,
PlanModifiers: []planmodifier.Bool{
boolplanmodifier.UseStateForUnknown(),
},
},
"session_timeout_in_minutes": schema.Int32Attribute{
MarkdownDescription: "Time to terminate inactive Spark sessions. The maximum is 14 days (20160 minutes).",
Optional: true,
Computed: true,
Validators: []validator.Int32{
int32validator.AtMost(20160),
},
PlanModifiers: []planmodifier.Int32{
int32planmodifier.UseStateForUnknown(),
},
},
},
},
"pool": schema.SingleNestedAttribute{
@@ -259,6 +298,7 @@ func (r *resourceSparkWorkspaceSettings) ConfigValidators(_ context.Context) []r
path.MatchRoot("automatic_log"),
path.MatchRoot("environment"),
path.MatchRoot("high_concurrency"),
path.MatchRoot("job"),
path.MatchRoot("pool"),
),
}
@@ -36,11 +36,28 @@ func TestAcc_SparkWorkspaceSettingsResource_CRUD(t *testing.T) {
"automatic_log": map[string]any{
"enabled": false,
},
"high_concurrency": map[string]any{
"notebook_interactive_run_enabled": false,
"notebook_pipeline_run_enabled": true,
},
"job": map[string]any{
"conservative_job_admission_enabled": true,
"session_timeout_in_minutes": 60,
},
},
)),
Check: resource.ComposeAggregateTestCheckFunc(
resource.TestCheckResourceAttr(testResourceSparkWorkspaceSettingsFQN, "pool.default_pool.name", "Starter Pool"),
resource.TestCheckResourceAttrSet(testResourceSparkWorkspaceSettingsFQN, "workspace_id"),
resource.TestCheckResourceAttrSet(testResourceSparkWorkspaceSettingsFQN, "id"),
resource.TestCheckResourceAttr(testResourceSparkWorkspaceSettingsFQN, "automatic_log.enabled", "false"),
resource.TestCheckResourceAttr(testResourceSparkWorkspaceSettingsFQN, "high_concurrency.notebook_interactive_run_enabled", "false"),
resource.TestCheckResourceAttr(testResourceSparkWorkspaceSettingsFQN, "high_concurrency.notebook_pipeline_run_enabled", "true"),
resource.TestCheckResourceAttr(testResourceSparkWorkspaceSettingsFQN, "pool.customize_compute_enabled", "true"),
resource.TestCheckResourceAttr(testResourceSparkWorkspaceSettingsFQN, "pool.default_pool.name", "Starter Pool"),
resource.TestCheckResourceAttr(testResourceSparkWorkspaceSettingsFQN, "pool.default_pool.type", "Workspace"),
resource.TestCheckResourceAttr(testResourceSparkWorkspaceSettingsFQN, "environment.runtime_version", "1.3"),
resource.TestCheckResourceAttr(testResourceSparkWorkspaceSettingsFQN, "job.conservative_job_admission_enabled", "true"),
resource.TestCheckResourceAttr(testResourceSparkWorkspaceSettingsFQN, "job.session_timeout_in_minutes", "60"),
),
},
// Update and Read