A DataWorks sync task writing to OSS keeps failing with the error below. How can this be resolved?
Caused by: com.aliyun.oss.ClientException: The target server failed to respond
at com.aliyun.oss.common.utils.ExceptionFactory.createNetworkException(ExceptionFactory.java:71)
at com.aliyun.oss.common.comm.DefaultServiceClient.sendRequestCore(DefaultServiceClient.java:127)
at com.aliyun.oss.common.comm.ServiceClient.sendRequestImpl(ServiceClient.java:133)
at com.aliyun.oss.common.comm.ServiceClient.sendRequest(ServiceClient.java:70)
at com.aliyun.oss.internal.OSSOperation.send(OSSOperation.java:83)
at com.aliyun.oss.internal.OSSOperation.doOperation(OSSOperation.java:145)
at com.aliyun.oss.internal.OSSOperation.doOperation(OSSOperation.java:102)
at com.aliyun.oss.internal.OSSMultipartOperation.initiateMultipartUpload(OSSMultipartOperation.java:226)
at com.aliyun.oss.OSSClient.initiateMultipartUpload(OSSClient.java:727)
at org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystemStore.getUploadId(AliyunOSSFileSystemStore.java:641)
at org.apache.hadoop.fs.aliyun.oss.AliyunOSSBlockOutputStream.uploadCurrentPart(AliyunOSSBlockOutputStream.java:177)
at org.apache.hadoop.fs.aliyun.oss.AliyunOSSBlockOutputStream.write(AliyunOSSBlockOutputStream.java:151)
at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
at java.io.DataOutputStream.write(DataOutputStream.java:107)
at java.io.FilterOutputStream.write(FilterOutputStream.java:97)
at parquet.bytes.ConcatenatingByteArrayCollector.writeAllTo(ConcatenatingByteArrayCollector.java:46)
at parquet.hadoop.ParquetFileWriter.writeDataPages(ParquetFileWriter.java:347)
at parquet.hadoop.ColumnChunkPageWriteStore$ColumnChunkPageWriter.writeToFileWriter(ColumnChunkPageWriteStore.java:182)
at parquet.hadoop.ColumnChunkPageWriteStore.flushToFileWriter(ColumnChunkPageWriteStore.java:238)
at parquet.hadoop.InternalParquetRecordWriter.flushRowGroupToStore(InternalParquetRecordWriter.java:155)
at parquet.hadoop.InternalParquetRecordWriter.checkBlockSizeReached(InternalParquetRecordWriter.java:131)
at parquet.hadoop.InternalParquetRecordWriter.write(InternalParquetRecordWriter.java:123)
at parquet.hadoop.ParquetWriter.write(ParquetWriter.java:258)
at com.alibaba.datax.plugin.writer.hdfswriter.HdfsHelper.parquetFileStartWrite(HdfsHelper.java:1068)
... 4 more
Reference answer from a community member:
This error means the target server failed to respond. You can try the following:
- Check that the network connection is healthy and that your program can actually reach the target server.
- Check the target server's firewall settings and make sure your program is allowed to access the OSS service.
- Increase the retry count and retry interval, so that the connection has more chances to succeed when the network is unstable.
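The retry suggestion above can be sketched as a small helper. This is a hypothetical utility (not part of the OSS SDK or DataX): it retries a call with exponential backoff on any exception, which is the usual way to ride out transient "target server failed to respond" errors.

```java
import java.util.concurrent.Callable;

public class RetryDemo {
    // Hypothetical helper: retry `call` up to maxRetries times after the
    // first attempt, sleeping baseDelayMs, 2*baseDelayMs, 4*baseDelayMs, ...
    // between attempts. Rethrows the last exception if all attempts fail.
    static <T> T retryWithBackoff(Callable<T> call, int maxRetries, long baseDelayMs)
            throws Exception {
        Exception last = null;
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            try {
                return call.call();
            } catch (Exception e) {
                last = e;
                if (attempt < maxRetries) {
                    Thread.sleep(baseDelayMs << attempt); // exponential backoff
                }
            }
        }
        throw last;
    }

    public static void main(String[] args) throws Exception {
        int[] calls = {0};
        // Simulate a transient network error: fails twice, then succeeds.
        String result = retryWithBackoff(() -> {
            if (++calls[0] < 3) {
                throw new RuntimeException("The target server failed to respond");
            }
            return "ok";
        }, 5, 100);
        System.out.println(result + " after " + calls[0] + " attempts");
    }
}
```

If you construct the `OSSClient` yourself rather than going through DataX, the SDK exposes a similar knob directly via `ClientBuilderConfiguration#setMaxErrorRetry`.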
Reference answer from a community member:
Is the connectivity test for the OSS data source currently passing in Data Integration? Looking at the historical instance runs, only this one failed, so the suspicion is something transient such as a concurrent-operation error. After further investigation, the likely cause is network jitter in the early morning: the write failed, leaving the file in an abnormal state. The task already has automatic rerun configured, which mitigates the impact of network jitter. In addition, try to avoid scheduling the task during the early-morning peak period. (This answer was compiled from the DingTalk group "DataWorks交流群(答疑@机器人)".)
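Since the stack trace shows the write going through the hadoop-aliyun connector (`AliyunOSSFileSystemStore`), retry and timeout behavior can also be tuned at that layer. Below is a sketch of `core-site.xml` settings; the property names come from the hadoop-aliyun module, but defaults and availability vary by Hadoop version, so verify them against your cluster's documentation.

```xml
<!-- core-site.xml: tune hadoop-aliyun OSS connector retries and timeouts -->
<property>
  <name>fs.oss.attempts.maximum</name>
  <value>20</value> <!-- max retries on recoverable request errors -->
</property>
<property>
  <name>fs.oss.connection.timeout</name>
  <value>200000</value> <!-- socket timeout, in milliseconds -->
</property>
```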
This article was contributed by a reader and does not represent the views of 新手站长_郑州云淘科技有限公司. If you repost it, please credit the source: https://www.cnzhanzhang.com/13572.html