Replies: 1 comment
FWIW, the hadoop-aws S3A client ended up with its own ContentStreamProvider implementation, as the interior of the SDK was doing things we didn't want (copying stuff) while not doing things we did want (restarting when attempting a second block upload). It's not that hard to do and you may want to do the same.
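A minimal sketch of that idea, assuming the upload body has been spooled to a file that can be reopened for every attempt (the class name and file-backed source are hypothetical, not what hadoop-aws does internally):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;

import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.http.ContentStreamProvider;

class RestartableBody {
    // Build a RequestBody whose stream can be reopened on every attempt, so a
    // retried block upload re-reads from the source instead of relying on
    // mark/reset over a partially consumed stream.
    static RequestBody of(Path source) throws IOException {
        ContentStreamProvider provider = () -> {
            try {
                return Files.newInputStream(source); // fresh stream per attempt
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        };
        return RequestBody.fromContentProvider(
                provider, Files.size(source), "application/octet-stream");
    }
}
```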
Disclaimer: I asked the original question on SO, but decided to duplicate it here.
I am using aws-sdk to work with a corporate S3-compatible storage. My S3 client configuration looks roughly like this (aws sdk 2.31.12):
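A sketch of the builder; the endpoint, region, and credentials are placeholder values, and the relevant part is chunkedEncodingEnabled(false):

```java
import java.net.URI;

import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.S3Configuration;

S3Client s3 = S3Client.builder()
        .endpointOverride(URI.create("https://s3.corp.example.com"))   // placeholder endpoint
        .region(Region.US_EAST_1)                                      // placeholder region
        .credentialsProvider(StaticCredentialsProvider.create(
                AwsBasicCredentials.create("accessKey", "secretKey"))) // placeholders
        .serviceConfiguration(S3Configuration.builder()
                .chunkedEncodingEnabled(false)  // chunked upload disabled on purpose
                .pathStyleAccessEnabled(true)   // common for S3-compatible storage
                .build())
        .build();
```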
The chunked-encoding setting is intentional: my storage provider does not fully support chunked upload, so I have it disabled.
I am trying to upload a file to S3 that is passed to the application API as a multipart request. But when uploading, I get an error like:
The request content has fewer bytes than the specified content-length: N bytes.
I tried wrapping the original InputStream in a BufferedInputStream as suggested here (even though the javadoc of RequestBody.fromInputStream says that automatic wrapping occurs if the original stream does not support mark/reset); with the BufferedInputStream in place, I then get an error:

Caused by: java.io.IOException: Resetting to invalid mark
My upload code looks roughly like this, where resource is a MultipartFileResource and resource.getInputStream() returns a ChannelInputStream:
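Approximately the following, typed against Spring's Resource; the bucket name, key choice, and helper shape are placeholders, not my exact code:

```java
import java.io.IOException;

import org.springframework.core.io.Resource;

import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;

class S3Uploader {
    private final S3Client s3;

    S3Uploader(S3Client s3) {
        this.s3 = s3;
    }

    // Stream the multipart body straight to S3 without buffering it to disk.
    void upload(Resource resource, long contentLength) throws IOException {
        PutObjectRequest putRequest = PutObjectRequest.builder()
                .bucket("my-bucket")             // placeholder bucket
                .key(resource.getFilename())     // placeholder key choice
                .contentLength(contentLength)    // size of the multipart part
                .build();

        // This is also where I tried new BufferedInputStream(resource.getInputStream())
        s3.putObject(putRequest,
                RequestBody.fromInputStream(resource.getInputStream(), contentLength));
    }
}
```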
Please tell me: is it possible to somehow upload a file to S3 using the InputStream from the original multipart request?