[Bug] [S3File] [zeta-local] Error writing to S3File in version 2.3.4: java.lang.IllegalStateException: Connection pool shut down #6678
Comments
In addition, there is nothing wrong when debugging with IDEA; the problem only happens on the server.
@ruanwenjun This may have something to do with your issue #5903, which concerns whether checkpoint uses HDFS or the cache. I saw that you submitted #6039 and noticed the cache problem, but the checkpoint path did not change: it is in this class, org.apache.seatunnel.engine.checkpoint.storage.hdfs.common.HdfsConfiguration
The s3n value appears because the S3Conf obtained in AggregatedCommit does not go through the buildWithConfig method but falls back to DEFAULT_SCHEMA. Debugging suggests it is related to the DAG (screenshots: the buildWithConfig method; the S3Conf at DAG initialization; the hadoopConf and schema obtained by AggregatedCommit). So far I have found two workarounds (a hedged sketch of the two code paths follows below):
1. Change DEFAULT_SCHEMA to s3a.
2. Set the schema explicitly in the configuration file.
But I am not familiar with the code of the DAG and AggregatedCommit, so I need to ask for help finding the root cause.
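For readers following along, here is a hedged sketch of the two paths described above; the field and method names are assumed from the discussion, not copied from the real S3Conf class:

```java
// Hypothetical sketch (names taken from the discussion, not the actual S3Conf
// source) of the two code paths being compared: one goes through
// buildWithConfig() and picks up "s3a", the other never calls it and therefore
// only ever sees DEFAULT_SCHEMA ("s3n").
public class S3ConfSketch {
    private static final String DEFAULT_SCHEMA = "s3n"; // workaround 1: change this to "s3a"
    private String schema = DEFAULT_SCHEMA;

    // Called during DAG initialization when the sink is built from the job config.
    public void buildWithConfig(String bucket) {
        if (bucket != null && bucket.startsWith("s3a://")) {
            this.schema = "s3a";
        }
    }

    // The AggregatedCommit path described above obtains an S3Conf without the
    // buildWithConfig step, so this simply returns the "s3n" default.
    public String getSchema() {
        return schema;
    }
}
```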
@EricJoy2048 I see that you have been working on the multi-table feature #6698 of the S3File connector recently. Have you ever encountered the schema in the Hadoop conf obtained by the multi-table path being the default s3n instead of the s3a specified in the configuration file?
I'll look at that as soon as I can.
|
|
Yes, I can now reproduce it 100%. I downloaded the official SeaTunnel 2.3.4 package and plugins, deployed them on the server, and executed the job in local mode; the problem appears there. But when debugging locally in IDEA it cannot be reproduced.
Can you try using remote debug?
I seem to have spotted the problem. Note that this code is actually fine: executing the buildWithConfig method does assign "s3a" to SCHEMA. The key is that when the whole sink is passed downstream to MultiTableSink, it is deserialized, and the deserialization process re-initializes the static variables. The member variables of the S3Conf class are all static, including SCHEMA, so MultiTableSink ends up with the default value of SCHEMA, which is "s3n". (A minimal sketch of this serialization pitfall follows below.)
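To illustrate the pitfall described above (not the actual SeaTunnel classes, just a minimal self-contained Java demonstration): static fields are not part of an object's serialized state, so a value assigned to a static field before serialization does not travel with the object; the deserializing side sees whatever value the class holds there.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Minimal sketch (hypothetical class, analogous to S3Conf) showing that a value
// set on a static field is NOT carried along when the object is serialized.
class Conf implements Serializable {
    private static final long serialVersionUID = 1L;
    static String SCHEMA = "s3n";            // analogous to the "s3n" default

    static void buildWithConfig() {
        SCHEMA = "s3a";                      // analogous to buildWithConfig() switching the schema
    }
}

public class StaticNotSerializedDemo {
    public static void main(String[] args) throws Exception {
        Conf.buildWithConfig();              // SCHEMA is now "s3a"
        byte[] bytes = serialize(new Conf()); // the static SCHEMA field is not written

        // On the receiving side (e.g. where MultiTableSink is rebuilt), the class
        // is loaded fresh, so SCHEMA is back at its default. Simulated here by
        // resetting the static field before deserializing:
        Conf.SCHEMA = "s3n";
        deserialize(bytes);
        System.out.println(Conf.SCHEMA);     // prints "s3n", not "s3a"
    }

    static byte[] serialize(Object o) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(o);
        }
        return bos.toByteArray();
    }

    static Object deserialize(byte[] bytes) throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return ois.readObject();
        }
    }
}
```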
Related PR: #6698 |
You can add |
|
Search before asking
What happened
When a jdbc -> S3File job is executed in local mode, the error occurs sporadically (sometimes the job succeeds, sometimes it fails), but there is no problem when using cluster mode:
I searched around for this aws-sdk error (Connection pool shut down); the related issue is awslabs/amazon-sqs-java-messaging-lib#96.
It mentions a problem with multithreaded use of the connection pool, so my initial guess is that after S3File finishes the sink operation there is an aggregate commit operation, but the sink has already executed its close method, which causes the rename in commit to report that the connection pool is closed (see the sketch below for where that exception comes from). What I don't understand is why local mode is hit-or-miss while cluster mode has no problem.
I've also tried 2.3.3 in local mode, and the problem does not occur there.
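For reference, the "Connection pool shut down" IllegalStateException is what Apache HttpClient's pooling connection manager throws when a request is attempted after the pool has been shut down, and the AWS SDK v1 that the s3a filesystem client builds on uses that connection manager. A minimal, self-contained reproduction of the exception shape (plain httpclient 4.x, not SeaTunnel code):

```java
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;

// Demonstrates the exception seen in the log: once the shared connection pool
// is shut down, any later request fails with
// java.lang.IllegalStateException: Connection pool shut down
public class PoolShutDownDemo {
    public static void main(String[] args) throws Exception {
        PoolingHttpClientConnectionManager pool = new PoolingHttpClientConnectionManager();
        CloseableHttpClient client = HttpClients.custom().setConnectionManager(pool).build();

        pool.shutdown(); // simulates the sink's close() tearing down the shared pool

        client.execute(new HttpGet("http://localhost")); // throws IllegalStateException
    }
}
```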
error.txt
SeaTunnel Version
2.3.4
SeaTunnel Config
Running Command
./bin/seatunnel.sh -e local --config job/s3_sink.conf
Error Exception
Zeta or Flink or Spark Version
Zeta (local mode)
Java or Scala Version
1.8
Screenshots
No response
Are you willing to submit PR?
Code of Conduct