DataLakeFileClient Class
- java.lang.Object
  - com.azure.storage.file.datalake.DataLakePathClient
    - com.azure.storage.file.datalake.DataLakeFileClient
public class DataLakeFileClient
extends DataLakePathClient
This class provides a client that contains file operations for Azure Storage Data Lake. Operations provided by this client include creating a file, deleting a file, renaming a file, setting metadata and HTTP headers, setting and retrieving access control, getting properties, reading a file, and appending and flushing data to write to a file.
This client is instantiated through DataLakePathClientBuilder or retrieved via getFileClient(String fileName).
For more information, see the Azure Docs.
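For illustration, a minimal sketch of building a DataLakeFileClient with DataLakePathClientBuilder; the endpoint, account name, key, file system, and path below are placeholder assumptions:
StorageSharedKeyCredential credential = new StorageSharedKeyCredential("<account-name>", "<account-key>"); // placeholder credential
DataLakeFileClient client = new DataLakePathClientBuilder()
    .endpoint("https://<account-name>.dfs.core.windows.net") // placeholder endpoint
    .credential(credential)
    .fileSystemName("my-file-system") // placeholder file system name
    .pathName("my-file.txt") // placeholder file path
    .buildFileClient();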
Method Summary
Methods inherited from DataLakePathClient
Methods inherited from java.lang.Object
Method Details
append
public void append(BinaryData data, long fileOffset)
Appends data to the specified resource to later be flushed (written) by a call to flush.
Code Sample
client.append(binaryData, offset);
System.out.println("Append data completed");
For more information, see the Azure Docs.
Parameters:
append
public void append(InputStream data, long fileOffset, long length)
Appends data to the specified resource to later be flushed (written) by a call to flush.
Code Sample
client.append(data, offset, length);
System.out.println("Append data completed");
For more information, see the Azure Docs.
Parameters:
appendWithResponse
public Response<Void> appendWithResponse(BinaryData data, long fileOffset, byte[] contentMd5, String leaseId, Duration timeout, Context context)
Appends data to the specified resource to later be flushed (written) by a call to flush.
Code Sample
byte[] contentMd5 = new byte[0]; // Replace with valid md5
Response<Void> response = client.appendWithResponse(binaryData, offset, contentMd5, leaseId, timeout,
new Context(key1, value1));
System.out.printf("Append data completed with status %d%n", response.getStatusCode());
For more information, see the Azure Docs.
Parameters:
Returns:
appendWithResponse
public Response<Void> appendWithResponse(BinaryData data, long fileOffset, DataLakeFileAppendOptions appendOptions, Duration timeout, Context context)
Appends data to the specified resource to later be flushed (written) by a call to flush.
Code Sample
BinaryData binaryData = BinaryData.fromStream(data, length);
byte[] contentMd5 = new byte[0]; // Replace with valid md5
DataLakeFileAppendOptions appendOptions = new DataLakeFileAppendOptions()
.setLeaseId(leaseId)
.setContentHash(contentMd5)
.setFlush(true);
Response<Void> response = client.appendWithResponse(binaryData, offset, appendOptions, timeout,
new Context(key1, value1));
System.out.printf("Append data completed with status %d%n", response.getStatusCode());
For more information, see the Azure Docs.
Parameters:
Returns:
appendWithResponse
public Response<Void> appendWithResponse(InputStream data, long fileOffset, long length, byte[] contentMd5, String leaseId, Duration timeout, Context context)
Appends data to the specified resource to later be flushed (written) by a call to flush.
Code Sample
byte[] contentMd5 = new byte[0]; // Replace with valid md5
Response<Void> response = client.appendWithResponse(data, offset, length, contentMd5, leaseId, timeout,
new Context(key1, value1));
System.out.printf("Append data completed with status %d%n", response.getStatusCode());
For more information, see the Azure Docs.
Parameters:
Returns:
appendWithResponse
public Response<Void> appendWithResponse(InputStream data, long fileOffset, long length, DataLakeFileAppendOptions appendOptions, Duration timeout, Context context)
Appends data to the specified resource to later be flushed (written) by a call to flush.
Code Sample
byte[] contentMd5 = new byte[0]; // Replace with valid md5
DataLakeFileAppendOptions appendOptions = new DataLakeFileAppendOptions()
.setLeaseId(leaseId)
.setContentHash(contentMd5)
.setFlush(true);
Response<Void> response = client.appendWithResponse(data, offset, length, appendOptions, timeout,
new Context(key1, value1));
System.out.printf("Append data completed with status %d%n", response.getStatusCode());
For more information, see the Azure Docs.
Parameters:
Returns:
delete
public void delete()
Deletes a file.
Code Sample
client.delete();
System.out.println("Delete request completed");
For more information, see the Azure Docs.
deleteIfExists
public boolean deleteIfExists()
Deletes a file if it exists.
Code Sample
boolean result = client.deleteIfExists();
System.out.println("Delete request completed: " + result);
For more information, see the Azure Docs.
Overrides:
DataLakePathClient.deleteIfExists()
Returns:
true if the file is successfully deleted, false if the file does not exist.
deleteIfExistsWithResponse
public Response<Boolean> deleteIfExistsWithResponse(DataLakePathDeleteOptions options, Duration timeout, Context context)
Deletes a file if it exists.
Code Sample
DataLakeRequestConditions requestConditions = new DataLakeRequestConditions()
.setLeaseId(leaseId);
DataLakePathDeleteOptions options = new DataLakePathDeleteOptions().setIsRecursive(false)
.setRequestConditions(requestConditions);
Response<Boolean> response = client.deleteIfExistsWithResponse(options, timeout, new Context(key1, value1));
if (response.getStatusCode() == 404) {
System.out.println("Does not exist.");
} else {
System.out.printf("Delete completed with status %d%n", response.getStatusCode());
}
For more information, see the Azure Docs.
Overrides:
DataLakePathClient.deleteIfExistsWithResponse(DataLakePathDeleteOptions options, Duration timeout, Context context)
Parameters:
Returns:
deleteWithResponse
public Response<Void> deleteWithResponse(DataLakeRequestConditions requestConditions, Duration timeout, Context context)
Deletes a file.
Code Sample
DataLakeRequestConditions requestConditions = new DataLakeRequestConditions()
.setLeaseId(leaseId);
client.deleteWithResponse(requestConditions, timeout, new Context(key1, value1));
System.out.println("Delete request completed");
For more information, see the Azure Docs.
Parameters:
Returns:
flush
@Deprecated
public PathInfo flush(long position)
Deprecated
Flushes (writes) data previously appended to the file through a call to append. The previously uploaded data must be contiguous.
By default, this method will not overwrite existing data.
Code Sample
client.flush(position);
System.out.println("Flush data completed");
For more information, see the Azure Docs.
Parameters:
Returns:
flush
public PathInfo flush(long position, boolean overwrite)
Flushes (writes) data previously appended to the file through a call to append. The previously uploaded data must be contiguous.
Code Sample
boolean overwrite = true;
client.flush(position, overwrite);
System.out.println("Flush data completed");
For more information, see the Azure Docs.
Parameters:
Returns:
flushWithResponse
public Response<PathInfo> flushWithResponse(long position, boolean retainUncommittedData, boolean close, PathHttpHeaders httpHeaders, DataLakeRequestConditions requestConditions, Duration timeout, Context context)
Flushes (writes) data previously appended to the file through a call to append. The previously uploaded data must be contiguous.
Code Sample
boolean retainUncommittedData = false;
boolean close = false;
PathHttpHeaders httpHeaders = new PathHttpHeaders()
.setContentLanguage("en-US")
.setContentType("binary");
DataLakeRequestConditions requestConditions = new DataLakeRequestConditions()
.setLeaseId(leaseId);
Response<PathInfo> response = client.flushWithResponse(position, retainUncommittedData, close, httpHeaders,
requestConditions, timeout, new Context(key1, value1));
System.out.printf("Flush data completed with status %d%n", response.getStatusCode());
For more information, see the Azure Docs.
Parameters:
Returns:
flushWithResponse
public Response<PathInfo> flushWithResponse(long position, DataLakeFileFlushOptions flushOptions, Duration timeout, Context context)
Flushes (writes) data previously appended to the file through a call to append. The previously uploaded data must be contiguous.
Code Sample
boolean retainUncommittedData = false;
boolean close = false;
PathHttpHeaders httpHeaders = new PathHttpHeaders()
.setContentLanguage("en-US")
.setContentType("binary");
DataLakeRequestConditions requestConditions = new DataLakeRequestConditions()
.setLeaseId(leaseId);
Integer leaseDuration = 15;
DataLakeFileFlushOptions flushOptions = new DataLakeFileFlushOptions()
.setUncommittedDataRetained(retainUncommittedData)
.setClose(close)
.setPathHttpHeaders(httpHeaders)
.setRequestConditions(requestConditions)
.setLeaseAction(LeaseAction.ACQUIRE)
.setLeaseDuration(leaseDuration)
.setProposedLeaseId(leaseId);
Response<PathInfo> response = client.flushWithResponse(position, flushOptions, timeout,
new Context(key1, value1));
System.out.printf("Flush data completed with status %d%n", response.getStatusCode());
For more information, see the Azure Docs.
Parameters:
Returns:
getCustomerProvidedKeyClient
public DataLakeFileClient getCustomerProvidedKeyClient(CustomerProvidedKey customerProvidedKey)
Creates a new DataLakeFileClient with the specified customerProvidedKey.
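Code Sample
A brief illustrative sketch; the Base64-encoded AES-256 key value below is a placeholder assumption:
CustomerProvidedKey customerProvidedKey = new CustomerProvidedKey("<base64-encoded-aes256-key>"); // placeholder key
DataLakeFileClient clientWithCpk = client.getCustomerProvidedKeyClient(customerProvidedKey);
System.out.println("Created client with customer-provided key");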
Overrides:
DataLakePathClient.getCustomerProvidedKeyClient(CustomerProvidedKey customerProvidedKey)
Parameters:
Returns:
A DataLakeFileClient with the specified customerProvidedKey.
getFileName
public String getFileName()
Gets the name of this file, not including its full path.
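Code Sample
A trivial usage sketch:
String fileName = client.getFileName();
System.out.println("The name of the file is " + fileName);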
Returns:
getFilePath
public String getFilePath()
Gets the path of this file, not including the name of the resource itself.
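Code Sample
A trivial usage sketch:
String filePath = client.getFilePath();
System.out.println("The path of the file is " + filePath);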
Returns:
getFileUrl
public String getFileUrl()
Gets the URL of the file represented by this client on the Data Lake service.
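Code Sample
A trivial usage sketch:
String fileUrl = client.getFileUrl();
System.out.println("The URL of the file is " + fileUrl);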
Returns:
getOutputStream
public OutputStream getOutputStream()
Creates and opens an output stream to write data to the file. If the file already exists on the service, it will be overwritten.
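Code Sample
A minimal sketch of writing through the stream, assuming java.io and java.nio.charset imports; the payload bytes are illustrative:
try (OutputStream outputStream = client.getOutputStream()) {
    outputStream.write("Hello, Data Lake".getBytes(StandardCharsets.UTF_8)); // sample payload
    System.out.println("Write to output stream completed");
} catch (IOException ex) {
    System.err.printf("Failed to write to output stream %s%n", ex.getMessage());
}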
Returns:
getOutputStream
public OutputStream getOutputStream(DataLakeFileOutputStreamOptions options)
Creates and opens an output stream to write data to the file. If the file already exists on the service, it will be overwritten.
To avoid overwriting, pass "*" to setIfNoneMatch(String ifNoneMatch).
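Code Sample
A sketch combining the options with an if-none-match condition so the write fails if the file already exists; the payload bytes are illustrative:
DataLakeRequestConditions requestConditions = new DataLakeRequestConditions().setIfNoneMatch("*");
DataLakeFileOutputStreamOptions options = new DataLakeFileOutputStreamOptions()
    .setRequestConditions(requestConditions);
try (OutputStream outputStream = client.getOutputStream(options)) {
    outputStream.write("Hello, Data Lake".getBytes(StandardCharsets.UTF_8)); // sample payload
} catch (IOException ex) {
    System.err.printf("Failed to write to output stream %s%n", ex.getMessage());
}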
Parameters:
Returns:
getOutputStream
public OutputStream getOutputStream(DataLakeFileOutputStreamOptions options, Context context)
Creates and opens an output stream to write data to the file. If the file already exists on the service, it will be overwritten.
To avoid overwriting, pass "*" to setIfNoneMatch(String ifNoneMatch).
Parameters:
Returns:
openInputStream
public DataLakeFileOpenInputStreamResult openInputStream()
Opens a file input stream to download the file. Locks on ETags.
DataLakeFileOpenInputStreamResult inputStream = client.openInputStream();
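A follow-up sketch of consuming the result, assuming java.io imports; the buffer handling is illustrative:
try (InputStream stream = inputStream.getInputStream()) {
    byte[] buffer = new byte[4096];
    int bytesRead;
    while ((bytesRead = stream.read(buffer)) != -1) {
        // process the downloaded bytes
    }
} catch (IOException ex) {
    System.err.printf("Failed to read from input stream %s%n", ex.getMessage());
}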
Returns:
openInputStream
public DataLakeFileOpenInputStreamResult openInputStream(DataLakeFileInputStreamOptions options)
Opens a file input stream to download the specified range of the file. Defaults to ETag locking if options are not specified.
DataLakeFileInputStreamOptions options = new DataLakeFileInputStreamOptions().setBlockSize(1024)
.setRequestConditions(new DataLakeRequestConditions());
DataLakeFileOpenInputStreamResult streamResult = client.openInputStream(options);
Parameters:
Returns:
openInputStream
public DataLakeFileOpenInputStreamResult openInputStream(DataLakeFileInputStreamOptions options, Context context)
Opens a file input stream to download the specified range of the file. Defaults to ETag locking if options are not specified.
DataLakeFileInputStreamOptions options = new DataLakeFileInputStreamOptions().setBlockSize(1024)
.setRequestConditions(new DataLakeRequestConditions());
DataLakeFileOpenInputStreamResult stream = client.openInputStream(options, new Context(key1, value1));
Parameters:
Returns:
openQueryInputStream
public InputStream openQueryInputStream(String expression)
Opens an input stream to query the file.
For more information, see the Azure Docs.
Code Sample
String expression = "SELECT * from BlobStorage";
InputStream inputStream = client.openQueryInputStream(expression);
// Now you can read from the input stream like you would normally.
Parameters:
Returns:
An InputStream object that represents the stream to use for reading the query response.
openQueryInputStreamWithResponse
public Response<InputStream> openQueryInputStreamWithResponse(FileQueryOptions queryOptions)
Opens an input stream to query the file.
For more information, see the Azure Docs.
Code Sample
String expression = "SELECT * from BlobStorage";
FileQuerySerialization input = new FileQueryDelimitedSerialization()
.setColumnSeparator(',')
.setEscapeChar('\n')
.setRecordSeparator('\n')
.setHeadersPresent(true)
.setFieldQuote('"');
FileQuerySerialization output = new FileQueryJsonSerialization()
.setRecordSeparator('\n');
DataLakeRequestConditions requestConditions = new DataLakeRequestConditions()
.setLeaseId("leaseId");
Consumer<FileQueryError> errorConsumer = System.out::println;
Consumer<FileQueryProgress> progressConsumer = progress -> System.out.println("total file bytes read: "
+ progress.getBytesScanned());
FileQueryOptions queryOptions = new FileQueryOptions(expression)
.setInputSerialization(input)
.setOutputSerialization(output)
.setRequestConditions(requestConditions)
.setErrorConsumer(errorConsumer)
.setProgressConsumer(progressConsumer);
InputStream inputStream = client.openQueryInputStreamWithResponse(queryOptions).getValue();
// Now you can read from the input stream like you would normally.
Parameters:
Returns:
An InputStream object that represents the stream to use for reading the query response.
query
public void query(OutputStream stream, String expression)
Queries the entire file into an output stream.
For more information, see the Azure Docs.
Code Sample
ByteArrayOutputStream queryData = new ByteArrayOutputStream();
String expression = "SELECT * from BlobStorage";
client.query(queryData, expression);
System.out.println("Query completed.");
Parameters:
queryWithResponse
public FileQueryResponse queryWithResponse(FileQueryOptions queryOptions, Duration timeout, Context context)
Queries the entire file into an output stream.
For more information, see the Azure Docs.
Code Sample
ByteArrayOutputStream queryData = new ByteArrayOutputStream();
String expression = "SELECT * from BlobStorage";
FileQueryJsonSerialization input = new FileQueryJsonSerialization()
.setRecordSeparator('\n');
FileQueryDelimitedSerialization output = new FileQueryDelimitedSerialization()
.setEscapeChar('\0')
.setColumnSeparator(',')
.setRecordSeparator('\n')
.setFieldQuote('\'')
.setHeadersPresent(true);
DataLakeRequestConditions requestConditions = new DataLakeRequestConditions().setLeaseId(leaseId);
Consumer<FileQueryError> errorConsumer = System.out::println;
Consumer<FileQueryProgress> progressConsumer = progress -> System.out.println("total file bytes read: "
+ progress.getBytesScanned());
FileQueryOptions queryOptions = new FileQueryOptions(expression, queryData)
.setInputSerialization(input)
.setOutputSerialization(output)
.setRequestConditions(requestConditions)
.setErrorConsumer(errorConsumer)
.setProgressConsumer(progressConsumer);
System.out.printf("Query completed with status %d%n",
client.queryWithResponse(queryOptions, timeout, new Context(key1, value1))
.getStatusCode());
Parameters:
Returns:
read
public void read(OutputStream stream)
Reads the entire file into an output stream.
Code Sample
client.read(new ByteArrayOutputStream());
System.out.println("Download completed.");
For more information, see the Azure Docs.
Parameters:
readToFile
public PathProperties readToFile(String filePath)
Reads the entire file into a file specified by the path.
The file will be created and must not already exist; if the file already exists, a FileAlreadyExistsException will be thrown.
Code Sample
client.readToFile(file);
System.out.println("Completed download to file");
For more information, see the Azure Docs.
Parameters:
Returns:
readToFile
public PathProperties readToFile(String filePath, boolean overwrite)
Reads the entire file into a file specified by the path.
If overwrite is set to false, the file will be created and must not already exist; if the file already exists, a FileAlreadyExistsException will be thrown.
Code Sample
boolean overwrite = false; // Default value
client.readToFile(file, overwrite);
System.out.println("Completed download to file");
For more information, see the Azure Docs.
Parameters:
Returns:
readToFileWithResponse
public Response<PathProperties> readToFileWithResponse(String filePath, FileRange range, ParallelTransferOptions parallelTransferOptions, DownloadRetryOptions downloadRetryOptions, DataLakeRequestConditions requestConditions, boolean rangeGetContentMd5, Set<OpenOption> openOptions, Duration timeout, Context context)
Reads the entire file into a file specified by the path.
By default, the file will be created and must not already exist; if the file already exists, a FileAlreadyExistsException will be thrown. To override this behavior, provide appropriate OpenOptions.
Code Sample
FileRange fileRange = new FileRange(1024, 2048L);
DownloadRetryOptions downloadRetryOptions = new DownloadRetryOptions().setMaxRetryRequests(5);
Set<OpenOption> openOptions = new HashSet<>(Arrays.asList(StandardOpenOption.CREATE_NEW,
StandardOpenOption.WRITE, StandardOpenOption.READ)); // Default options
client.readToFileWithResponse(file, fileRange, new ParallelTransferOptions().setBlockSizeLong(4L * Constants.MB),
downloadRetryOptions, null, false, openOptions, timeout, new Context(key2, value2));
System.out.println("Completed download to file");
For more information, see the Azure Docs.
Parameters:
Returns:
readWithResponse
public FileReadResponse readWithResponse(OutputStream stream, FileRange range, DownloadRetryOptions options, DataLakeRequestConditions requestConditions, boolean getRangeContentMd5, Duration timeout, Context context)
Reads a range of bytes from the file into an output stream.
Code Sample
FileRange range = new FileRange(1024, 2048L);
DownloadRetryOptions options = new DownloadRetryOptions().setMaxRetryRequests(5);
System.out.printf("Download completed with status %d%n",
client.readWithResponse(new ByteArrayOutputStream(), range, options, null, false,
timeout, new Context(key2, value2)).getStatusCode());
For more information, see the Azure Docs.
Parameters:
Returns:
rename
public DataLakeFileClient rename(String destinationFileSystem, String destinationPath)
Moves the file to another location within the file system. For more information, see the Azure Docs.
Code Sample
DataLakeFileClient renamedClient = client.rename(fileSystemName, destinationPath);
System.out.println("File Client has been renamed");
Parameters:
destinationFileSystem - the file system of the destination within the account; null to use the current file system.
Returns:
renameWithResponse
public Response<DataLakeFileClient> renameWithResponse(String destinationFileSystem, String destinationPath, DataLakeRequestConditions sourceRequestConditions, DataLakeRequestConditions destinationRequestConditions, Duration timeout, Context context)
Moves the file to another location within the file system. For more information, see the Azure Docs.
Code Sample
DataLakeRequestConditions sourceRequestConditions = new DataLakeRequestConditions()
.setLeaseId(leaseId);
DataLakeRequestConditions destinationRequestConditions = new DataLakeRequestConditions();
DataLakeFileClient newRenamedClient = client.renameWithResponse(fileSystemName, destinationPath,
sourceRequestConditions, destinationRequestConditions, timeout, new Context(key1, value1)).getValue();
System.out.println("Directory Client has been renamed");
Parameters:
destinationFileSystem - the file system of the destination within the account; null to use the current file system.
Returns:
scheduleDeletion
public void scheduleDeletion(FileScheduleDeletionOptions options)
Schedules the file for deletion.
Code Sample
FileScheduleDeletionOptions options = new FileScheduleDeletionOptions(OffsetDateTime.now().plusDays(1));
client.scheduleDeletion(options);
System.out.println("File deletion has been scheduled");
Parameters:
scheduleDeletionWithResponse
public Response<Void> scheduleDeletionWithResponse(FileScheduleDeletionOptions options, Duration timeout, Context context)
Schedules the file for deletion.
Code Sample
FileScheduleDeletionOptions options = new FileScheduleDeletionOptions(OffsetDateTime.now().plusDays(1));
Context context = new Context("key", "value");
client.scheduleDeletionWithResponse(options, timeout, context);
System.out.println("File deletion has been scheduled");
Parameters:
Returns:
upload
public PathInfo upload(BinaryData data)
Creates a new file. By default, this method will not overwrite an existing file.
Code Sample
try {
client.upload(binaryData);
System.out.println("Upload from file succeeded");
} catch (UncheckedIOException ex) {
System.err.printf("Failed to upload from file %s%n", ex.getMessage());
}
Parameters:
Returns:
upload
public PathInfo upload(BinaryData data, boolean overwrite)
Creates a new file, or updates the content of an existing file.
Code Sample
try {
boolean overwrite = false;
client.upload(binaryData, overwrite);
System.out.println("Upload from file succeeded");
} catch (UncheckedIOException ex) {
System.err.printf("Failed to upload from file %s%n", ex.getMessage());
}
Parameters:
Returns:
upload
public PathInfo upload(InputStream data, long length)
Creates a new file. By default, this method will not overwrite an existing file.
Code Sample
try {
client.upload(data, length);
System.out.println("Upload from file succeeded");
} catch (UncheckedIOException ex) {
System.err.printf("Failed to upload from file %s%n", ex.getMessage());
}
Parameters:
Returns:
upload
public PathInfo upload(InputStream data, long length, boolean overwrite)
Creates a new file, or updates the content of an existing file.
Code Sample
try {
boolean overwrite = false;
client.upload(data, length, overwrite);
System.out.println("Upload from file succeeded");
} catch (UncheckedIOException ex) {
System.err.printf("Failed to upload from file %s%n", ex.getMessage());
}
Parameters:
Returns:
uploadFromFile
public void uploadFromFile(String filePath)
Creates a file with the content of the specified file. By default, this method will not overwrite an existing file.
Code Sample
try {
client.uploadFromFile(filePath);
System.out.println("Upload from file succeeded");
} catch (UncheckedIOException ex) {
System.err.printf("Failed to upload from file %s%n", ex.getMessage());
}
Parameters:
uploadFromFile
public void uploadFromFile(String filePath, boolean overwrite)
Creates a file with the content of the specified file.
Code Sample
try {
boolean overwrite = false;
client.uploadFromFile(filePath, overwrite);
System.out.println("Upload from file succeeded");
} catch (UncheckedIOException ex) {
System.err.printf("Failed to upload from file %s%n", ex.getMessage());
}
Parameters:
uploadFromFile
public void uploadFromFile(String filePath, ParallelTransferOptions parallelTransferOptions, PathHttpHeaders headers, Map<String,String> metadata, DataLakeRequestConditions requestConditions, Duration timeout)
Creates a file with the content of the specified file.
To avoid overwriting, pass "*" to setIfNoneMatch(String ifNoneMatch).
Code Sample
PathHttpHeaders headers = new PathHttpHeaders()
.setContentMd5("data".getBytes(StandardCharsets.UTF_8))
.setContentLanguage("en-US")
.setContentType("binary");
Map<String, String> metadata = Collections.singletonMap("metadata", "value");
DataLakeRequestConditions requestConditions = new DataLakeRequestConditions()
.setLeaseId(leaseId)
.setIfUnmodifiedSince(OffsetDateTime.now().minusDays(3));
Long blockSize = 100L * 1024L * 1024L; // 100 MB;
ParallelTransferOptions parallelTransferOptions = new ParallelTransferOptions().setBlockSizeLong(blockSize);
try {
client.uploadFromFile(filePath, parallelTransferOptions, headers, metadata, requestConditions, timeout);
System.out.println("Upload from file succeeded");
} catch (UncheckedIOException ex) {
System.err.printf("Failed to upload from file %s%n", ex.getMessage());
}
Parameters:
uploadFromFileWithResponse
public Response<PathInfo> uploadFromFileWithResponse(String filePath, ParallelTransferOptions parallelTransferOptions, PathHttpHeaders headers, Map<String,String> metadata, DataLakeRequestConditions requestConditions, Duration timeout, Context context)
Creates a file with the content of the specified file.
To avoid overwriting, pass "*" to setIfNoneMatch(String ifNoneMatch).
Code Sample
PathHttpHeaders headers = new PathHttpHeaders()
.setContentMd5("data".getBytes(StandardCharsets.UTF_8))
.setContentLanguage("en-US")
.setContentType("binary");
Map<String, String> metadata = Collections.singletonMap("metadata", "value");
DataLakeRequestConditions requestConditions = new DataLakeRequestConditions()
.setLeaseId(leaseId)
.setIfUnmodifiedSince(OffsetDateTime.now().minusDays(3));
Long blockSize = 100L * 1024L * 1024L; // 100 MB;
ParallelTransferOptions parallelTransferOptions = new ParallelTransferOptions().setBlockSizeLong(blockSize);
try {
Response<PathInfo> response = client.uploadFromFileWithResponse(filePath, parallelTransferOptions, headers,
metadata, requestConditions, timeout, new Context("key", "value"));
System.out.printf("Upload from file succeeded with status %d%n", response.getStatusCode());
} catch (UncheckedIOException ex) {
System.err.printf("Failed to upload from file %s%n", ex.getMessage());
}
Parameters:
Returns:
uploadWithResponse
public Response<PathInfo> uploadWithResponse(FileParallelUploadOptions options, Duration timeout, Context context)
Creates a new file. To avoid overwriting, pass "*" to setIfNoneMatch(String ifNoneMatch).
Code Sample
PathHttpHeaders headers = new PathHttpHeaders()
.setContentMd5("data".getBytes(StandardCharsets.UTF_8))
.setContentLanguage("en-US")
.setContentType("binary");
Map<String, String> metadata = Collections.singletonMap("metadata", "value");
DataLakeRequestConditions requestConditions = new DataLakeRequestConditions()
.setLeaseId(leaseId)
.setIfUnmodifiedSince(OffsetDateTime.now().minusDays(3));
Long blockSize = 100L * 1024L * 1024L; // 100 MB;
ParallelTransferOptions parallelTransferOptions = new ParallelTransferOptions().setBlockSizeLong(blockSize);
try {
client.uploadWithResponse(new FileParallelUploadOptions(data, length)
.setParallelTransferOptions(parallelTransferOptions).setHeaders(headers)
.setMetadata(metadata).setRequestConditions(requestConditions)
.setPermissions("permissions").setUmask("umask"), timeout, new Context("key", "value"));
System.out.println("Upload from file succeeded");
} catch (UncheckedIOException ex) {
System.err.printf("Failed to upload from file %s%n", ex.getMessage());
}
Parameters:
Returns: