[Doc] Add doc for the incoming 1.2.10 #390

Merged 4 commits on Oct 25, 2024

3 changes: 3 additions & 0 deletions docs/content/connector-sink.md
@@ -12,6 +12,7 @@ The Flink connector supports DataStream API, Table API & SQL, and Python API. It

| Connector | Flink | StarRocks | Java | Scala |
|-----------|--------------------------|---------------| ---- |-----------|
| 1.2.10 | 1.15,1.16,1.17,1.18,1.19 | 2.1 and later | 8 | 2.11,2.12 |
| 1.2.9 | 1.15,1.16,1.17,1.18 | 2.1 and later | 8 | 2.11,2.12 |
| 1.2.8 | 1.13,1.14,1.15,1.16,1.17 | 2.1 and later | 8 | 2.11,2.12 |
| 1.2.7 | 1.11,1.12,1.13,1.14,1.15 | 2.1 and later | 8 | 2.11,2.12 |
@@ -102,6 +103,7 @@ In your Maven project's `pom.xml` file, add the Flink connector as a dependency
| sink.buffer-flush.interval-ms | No | 300000 | The interval at which data is flushed. This parameter is available only when `sink.semantic` is `at-least-once`. Valid values: 1000 to 3600000. Unit: ms. |
| sink.max-retries | No | 3 | The number of times that the system retries the Stream Load job. This parameter is available only when you set `sink.version` to `V1`. Valid values: 0 to 10. |
| sink.connect.timeout-ms | No | 30000 | The timeout for establishing an HTTP connection. Valid values: 100 to 60000. Unit: ms. Before version 1.2.9, the default value was 1000. |
| sink.socket.timeout-ms | No | -1 | Supported since version 1.2.10. The time duration for which the HTTP client waits for data. Unit: ms. The default value `-1` means no timeout is applied. |

Review comment: Supported since version 1.2.10. The time duration for which the HTTP client waits for data. Unit: ms. The default value -1 means no timeout is applied.

Collaborator Author: fixed

| sink.wait-for-continue.timeout-ms | No | 10000 | Supported since 1.2.7. The timeout for waiting for an HTTP 100-continue response from the FE. Valid values: `3000` to `600000`. Unit: ms. |
| sink.ignore.update-before | No | true | Supported since version 1.2.8. Whether to ignore `UPDATE_BEFORE` records from Flink when loading data to Primary Key tables. If this parameter is set to `false`, the record is treated as a delete operation on the StarRocks table. |
| sink.parallelism | No | NONE | The parallelism of loading. Only available for Flink SQL. If this parameter is not specified, Flink planner decides the parallelism. **In the scenario of multi-parallelism, users need to guarantee data is written in the correct order.** |
@@ -111,6 +113,7 @@ In your Maven project's `pom.xml` file, add the Flink connector as a dependency
| sink.properties.row_delimiter | No | \n | The row delimiter for CSV-formatted data. |
| sink.properties.max_filter_ratio | No | 0 | The maximum error tolerance of the Stream Load. It's the maximum percentage of data records that can be filtered out due to inadequate data quality. Valid values: `0` to `1`. Default value: `0`. See [Stream Load](https://docs.starrocks.io/en-us/latest/sql-reference/sql-statements/data-manipulation/STREAM%20LOAD) for details. |
| sink.properties.strict_mode | No | false | Specifies whether to enable the strict mode for Stream Load. It affects the loading behavior when there are unqualified rows, such as inconsistent column values. Valid values: `true` and `false`. Default value: `false`. See [Stream Load](https://docs.starrocks.io/en-us/latest/sql-reference/sql-statements/data-manipulation/STREAM%20LOAD) for details. |
| sink.properties.compression | No | NONE | The compression algorithm used for Stream Load. Compression is currently supported only for the JSON format. Valid values: `lz4_frame`. |

Review comment: The compression algorithm used for Stream Load. Currently, compression is only supported for the JSON format. Valid values: NONE (compression will not be used) and lz4_frame.

Collaborator Author: Refined the description. NONE means the option is not set; it is not a valid value.
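
To put the new 1.2.10 sink options in context, below is a minimal sketch of a Flink SQL sink defined through the Java Table API. The schema, database, table, endpoints, and credentials (`fe-host`, `demo_db`, `demo_table`, and so on) are hypothetical placeholders, and the surrounding connector options (`jdbc-url`, `load-url`, `sink.properties.format`) follow option names documented for earlier connector releases; only `sink.socket.timeout-ms` and `sink.properties.compression` illustrate the additions described in this PR.

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class StarRocksSinkOptionsSketch {
    public static void main(String[] args) throws Exception {
        TableEnvironment tEnv = TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // Placeholder schema and endpoints; adjust to your own StarRocks cluster.
        tEnv.executeSql(
            "CREATE TABLE starrocks_sink (" +
            "  id BIGINT," +
            "  name STRING" +
            ") WITH (" +
            "  'connector' = 'starrocks'," +
            "  'jdbc-url' = 'jdbc:mysql://fe-host:9030'," +
            "  'load-url' = 'fe-host:8030'," +
            "  'database-name' = 'demo_db'," +
            "  'table-name' = 'demo_table'," +
            "  'username' = 'root'," +
            "  'password' = ''," +
            // New in 1.2.10: how long the Stream Load HTTP client waits for data; -1 means no timeout.
            "  'sink.socket.timeout-ms' = '60000'," +
            // New in 1.2.10: LZ4 frame compression, paired with the JSON Stream Load format.
            "  'sink.properties.format' = 'json'," +
            "  'sink.properties.compression' = 'lz4_frame'" +
            ")");

        tEnv.executeSql("INSERT INTO starrocks_sink VALUES (1, 'starrocks')").await();
    }
}
```

Setting `sink.properties.format` to `json` alongside `sink.properties.compression` reflects the note above that compression is currently supported only for the JSON format.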


## Data type mapping between Flink and StarRocks

5 changes: 5 additions & 0 deletions docs/content/connector-source.md
@@ -22,6 +22,7 @@ Unlike the JDBC connector provided by Flink, the Flink connector of StarRocks su

| Connector | Flink | StarRocks | Java | Scala |
|-----------|--------------------------|---------------| ---- |-----------|
| 1.2.10 | 1.15,1.16,1.17,1.18,1.19 | 2.1 and later | 8 | 2.11,2.12 |
| 1.2.9 | 1.15,1.16,1.17,1.18 | 2.1 and later | 8 | 2.11,2.12 |
| 1.2.8 | 1.13,1.14,1.15,1.16,1.17 | 2.1 and later | 8 | 2.11,2.12 |
| 1.2.7 | 1.11,1.12,1.13,1.14,1.15 | 2.1 and later | 8 | 2.11,2.12 |
@@ -141,6 +142,10 @@ The following data type mapping is valid only for Flink reading data from StarRo
| DECIMAL128 | DECIMAL |
| CHAR | CHAR |
| VARCHAR | STRING |
| JSON | STRING <br> **NOTE:** <br> **Supported since version 1.2.10** |
| ARRAY | ARRAY <br> **NOTE:** <br> **Supported since version 1.2.10** |
| STRUCT | ROW <br> **NOTE:** <br> **Supported since version 1.2.10** |
| MAP | MAP <br> **NOTE:** <br> **Supported since version 1.2.10** |
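
As a hedged illustration of the four mappings added in 1.2.10, the sketch below declares a Flink source table whose columns correspond to StarRocks JSON, ARRAY, STRUCT, and MAP columns, again via the Java Table API. The column names, schema, and connection settings (`scan-url`, `jdbc-url`, `demo_db`, `demo_table`) are placeholders and not part of this PR; only the type mappings themselves come from the table above.

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class StarRocksSourceComplexTypesSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(EnvironmentSettings.inBatchMode());

        // Placeholder schema showing the 1.2.10 type mappings:
        // StarRocks JSON -> STRING, ARRAY -> ARRAY, STRUCT -> ROW, MAP -> MAP.
        tEnv.executeSql(
            "CREATE TABLE starrocks_source (" +
            "  id BIGINT," +
            "  payload STRING," +                    // JSON column read as STRING
            "  tags ARRAY<STRING>," +                // ARRAY column
            "  address ROW<city STRING, zip INT>," + // STRUCT column read as ROW
            "  attrs MAP<STRING, STRING>" +          // MAP column
            ") WITH (" +
            "  'connector' = 'starrocks'," +
            "  'scan-url' = 'fe-host:8030'," +
            "  'jdbc-url' = 'jdbc:mysql://fe-host:9030'," +
            "  'database-name' = 'demo_db'," +
            "  'table-name' = 'demo_table'," +
            "  'username' = 'root'," +
            "  'password' = ''" +
            ")");

        tEnv.executeSql("SELECT id, tags, address.city FROM starrocks_source").print();
    }
}
```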

## Examples
