diff --git a/docs/deployment/helm.md b/docs/deployment/helm.md
index 5d8bf846fcc04..df739f3570779 100644
--- a/docs/deployment/helm.md
+++ b/docs/deployment/helm.md
@@ -4,7 +4,7 @@
## Before you begin
-- [Create a Kubernetes cluster](./sr_operator#create-kubernetes-cluster).
+- [Create a Kubernetes cluster](./sr_operator.md#create-kubernetes-cluster).
- [Install Helm](https://helm.sh/docs/intro/quickstart/).
## Procedure
diff --git a/docs/loading/Flink-connector-starrocks.md b/docs/loading/Flink-connector-starrocks.md
index 73abafcabdb9d..c18738b9ebf1d 100644
--- a/docs/loading/Flink-connector-starrocks.md
+++ b/docs/loading/Flink-connector-starrocks.md
@@ -179,7 +179,7 @@ In your Maven project's `pom.xml` file, add the Flink connector as a dependency
checkpoint, instead of due to timeout (which may cause data loss).
- `label_keep_max_second` and `label_keep_max_num`: StarRocks FE configurations, default values are `259200` and `1000`
- respectively. For details, see [FE configurations](https://docs.starrocks.io/en-us/latest/loading/Loading_intro#fe-configurations). The value of `label_keep_max_second` needs to be larger than the downtime of the Flink job. Otherwise, the Flink connector can not check the state of transactions in StarRocks by using the transaction labels saved in the Flink's savepoint or checkpoint and figure out whether these transactions are committed or not, which may eventually lead to data loss.
+ respectively. For details, see [FE configurations](../loading/Loading_intro.md#fe-configurations). The value of `label_keep_max_second` needs to be larger than the downtime of the Flink job. Otherwise, the Flink connector cannot use the transaction labels saved in Flink's savepoint or checkpoint to check the state of these transactions in StarRocks and determine whether they are committed, which may eventually lead to data loss.
These configurations are mutable and can be modified by using `ADMIN SET FRONTEND CONFIG`:
diff --git a/docs/loading/Spark-connector-starrocks.md b/docs/loading/Spark-connector-starrocks.md
index 95c67827992ba..70405d123727b 100644
--- a/docs/loading/Spark-connector-starrocks.md
+++ b/docs/loading/Spark-connector-starrocks.md
@@ -616,7 +616,7 @@ Here we take the counting of UV as an example to show how to load data into colu
2. Create a Spark table.
- The schema of the Spark table is inferred from the StarRocks table, and the Spark does not support the `HLL` type. So you need to customize the corresponding column data type in Spark, for example as `BIGINT`, by configuring the option `"starrocks.column.types"="visit_users BIGINT"`. When using Stream Load to ingest data, the connector uses the [`hll_hash`](../sql-reference/sql-functions/aggregate-functions/hll_hash) function to convert the data of `BIGINT` type into `HLL` type.
+ The schema of the Spark table is inferred from the StarRocks table, and Spark does not support the `HLL` type. So you need to customize the corresponding column data type in Spark, for example, as `BIGINT`, by configuring the option `"starrocks.column.types"="visit_users BIGINT"`. When using Stream Load to ingest data, the connector uses the [`hll_hash`](../sql-reference/sql-functions/aggregate-functions/hll_hash.md) function to convert the data of `BIGINT` type into `HLL` type.
Run the following DDL in `spark-sql`:
diff --git a/docs/table_design/Temporary_partition.md b/docs/table_design/Temporary_partition.md
index 347e7c5a87645..6258592d0216d 100644
--- a/docs/table_design/Temporary_partition.md
+++ b/docs/table_design/Temporary_partition.md
@@ -90,7 +90,7 @@ ADD TEMPORARY PARTITIONS START ("2020-04-01") END ("2021-01-01") EVERY (INTERVAL
## Show temporary partitions
-You can view the temporary partitions by using the [SHOW TEMPORARY PARTITIONS](../sql-reference/sql-statements/data-manipulation/SHOW_PARTITIONS) command.
+You can view the temporary partitions by using the [SHOW TEMPORARY PARTITIONS](../sql-reference/sql-statements/data-manipulation/SHOW_PARTITIONS.md) command.
```SQL
SHOW TEMPORARY PARTITIONS FROM [db_name.]table_name [WHERE] [ORDER BY] [LIMIT]
diff --git a/docs/table_design/expression_partitioning.md b/docs/table_design/expression_partitioning.md
index 82d001edbb85e..4fab66913714b 100644
--- a/docs/table_design/expression_partitioning.md
+++ b/docs/table_design/expression_partitioning.md
@@ -26,7 +26,7 @@ expression ::=
| Parameters | Required | Description |
| ----------------------- | -------- | ------------------------------------------------------------ |
-| `expression` | YES | Currently, only the [date_trunc](../sql-reference/sql-functions/date-time-functions/date_trunc) and [time_slice](../sql-reference/sql-functions/date-time-functions/time_slice) functions are supported. If you use the function `time_slice`, you do not need to pass the `boundary` parameter. It is because in this scenario, the default and valid value for this parameter is `floor`, and the value cannot be `ceil`. |
+| `expression` | YES | Currently, only the [date_trunc](../sql-reference/sql-functions/date-time-functions/date_trunc.md) and [time_slice](../sql-reference/sql-functions/date-time-functions/time_slice.md) functions are supported. If you use the function `time_slice`, you do not need to pass the `boundary` parameter, because in this scenario the only valid (and default) value for this parameter is `floor`; the value cannot be `ceil`. |
| `time_unit` | YES | The partition granularity, which can be `hour`, `day`, `month` or `year`. The `week` partition granularity is not supported. If the partition granularity is `hour`, the partition column must be of the DATETIME data type and cannot be of the DATE data type. |
| `partition_column` | YES | The name of the partition column.