Spark write to Hudi fails on a garbled partition column: java.net.URISyntaxException: Illegal character in path at index 46
While testing Hudi I found that a write fails with the error below whenever the partition column contains Chinese characters:

hdfs://linux01:9000/hudi/insertHDFS/ggggggg/ä¸å½/eb4ddae6-9841-469b-9fed-c2375f13d616-0_2-21-28_20210122113859.parquet

Under the ggggggg directory the partition segment is garbled: my partition value was the Chinese string 三国, but in the error it shows up as ä¸å½. The takeaway: use English partition values, or at worst pinyin.
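The failure can be reproduced outside Hudi with plain `java.net.URI`, and the reproduction is revealing: the genuine Chinese path actually parses fine, because Java's URI class permits non-ASCII "other" characters in a path. What `URI` rejects are control characters, and decoding the UTF-8 bytes of 三国 (E4 B8 89 E5 9B BD) as ISO-8859-1 yields exactly the garbled "ä¸å½" from the log plus invisible C1 control characters such as U+0089. This suggests the path bytes were decoded with the wrong charset somewhere in the timeline-service round trip. A minimal sketch (host and file names are placeholders, not taken from any Hudi API):

```java
import java.net.URI;
import java.net.URISyntaxException;
import java.nio.charset.StandardCharsets;

public class MojibakeUriDemo {
    public static void main(String[] args) throws URISyntaxException {
        // The genuine Chinese partition value parses fine: java.net.URI
        // accepts non-ASCII "other" characters in the path component.
        URI ok = new URI("hdfs://linux01:9000/hudi/insertHDFS/ggggggg/三国/f.parquet");
        System.out.println("ok: " + ok.getPath());

        // Re-decode the UTF-8 bytes as ISO-8859-1. This reproduces the garbled
        // "ä¸å½" from the log; it also contains invisible C1 control characters.
        String mojibake = new String("三国".getBytes(StandardCharsets.UTF_8),
                                     StandardCharsets.ISO_8859_1);
        try {
            new URI("hdfs://linux01:9000/hudi/insertHDFS/ggggggg/" + mojibake + "/f.parquet");
        } catch (URISyntaxException e) {
            // Control characters such as U+0089 are illegal in a URI path,
            // which is what produces "Illegal character in path at index 46".
            System.out.println("failed: " + e.getReason());
        }
    }
}
```

So the exception is triggered not by Chinese characters per se, but by the mojibake the wrong-charset decode produces.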
16241 [qtp1096084691-76] INFO org.apache.hudi.common.table.view.AbstractTableFileSystemView - #files found in partition (春秋) =2, Time taken =2
16241 [qtp1096084691-74] INFO org.apache.hudi.common.table.view.HoodieTableFileSystemView - Adding file-groups for partition :default, #FileGroups=1
16241 [qtp1096084691-74] INFO org.apache.hudi.common.table.view.AbstractTableFileSystemView - addFilesToView: NumFiles=3, FileGroupsCreationTime=1, StoreTimeTaken=0
16241 [qtp1096084691-74] INFO org.apache.hudi.common.table.view.AbstractTableFileSystemView - Time to load partition (default) =4
16241 [qtp1096084691-76] INFO org.apache.hudi.common.table.view.HoodieTableFileSystemView - Adding file-groups for partition :春秋, #FileGroups=1
16241 [qtp1096084691-76] INFO org.apache.hudi.common.table.view.AbstractTableFileSystemView - addFilesToView: NumFiles=2, FileGroupsCreationTime=0, StoreTimeTaken=0
16241 [qtp1096084691-76] INFO org.apache.hudi.common.table.view.AbstractTableFileSystemView - Time to load partition (春秋) =4
16242 [qtp1096084691-70] INFO org.apache.hudi.common.table.view.AbstractTableFileSystemView - Pending Compaction instant for (FileSlice {fileGroupId=HoodieFileGroupId{partitionPath='三国', fileId='eb4ddae6-9841-469b-9fed-c2375f13d616-0'}, baseCommitTime=20210122113859, baseFile='HoodieDataFile{fullPath=hdfs://linux01:9000/hudi/insertHDFS/ggggggg/三国/eb4ddae6-9841-469b-9fed-c2375f13d616-0_2-21-28_20210122113859.parquet, fileLen=435686}', logFiles='[]'}) is :Option{val=null}
16242 [qtp1096084691-74] INFO org.apache.hudi.common.table.view.AbstractTableFileSystemView - Pending Compaction instant for (FileSlice {fileGroupId=HoodieFileGroupId{partitionPath='default', fileId='7570a823-0c9c-4c9c-9ac1-4b7049bda06a-0'}, baseCommitTime=20210122113859, baseFile='HoodieDataFile{fullPath=hdfs://linux01:9000/hudi/insertHDFS/ggggggg/default/7570a823-0c9c-4c9c-9ac1-4b7049bda06a-0_0-21-26_20210122113859.parquet, fileLen=432566}', logFiles='[]'}) is :Option{val=null}
16242 [qtp1096084691-76] INFO org.apache.hudi.common.table.view.AbstractTableFileSystemView - Pending Compaction instant for (FileSlice {fileGroupId=HoodieFileGroupId{partitionPath='春秋', fileId='38955f0a-4891-412d-aa3a-ff83cb85f81e-0'}, baseCommitTime=20210122113859, baseFile='HoodieDataFile{fullPath=hdfs://linux01:9000/hudi/insertHDFS/ggggggg/春秋/38955f0a-4891-412d-aa3a-ff83cb85f81e-0_1-21-27_20210122113859.parquet, fileLen=435503}', logFiles='[]'}) is :Option{val=null}
16243 [qtp1096084691-76] INFO org.apache.hudi.timeline.service.FileSystemViewHandler - TimeTakenMillis[Total=6, Refresh=0, handle=6, Check=0], Success=true, Query=partition=%E6%98%A5%E7%A7%8B&basepath=%2Fhudi%2FinsertHDFS%2Fggggggg&lastinstantts=20210122113859&timelinehash=1dd7dcd13f88921e2bf8ec650caa2f6f9c7c9b224d72d86a2e0a453368b72d9e, Host=windows:51997, synced=false
16243 [qtp1096084691-74] INFO org.apache.hudi.timeline.service.FileSystemViewHandler - TimeTakenMillis[Total=7, Refresh=1, handle=6, Check=0], Success=true, Query=partition=default&basepath=%2Fhudi%2FinsertHDFS%2Fggggggg&lastinstantts=20210122113859&timelinehash=1dd7dcd13f88921e2bf8ec650caa2f6f9c7c9b224d72d86a2e0a453368b72d9e, Host=windows:51997, synced=false
16243 [qtp1096084691-70] INFO org.apache.hudi.timeline.service.FileSystemViewHandler - TimeTakenMillis[Total=7, Refresh=1, handle=6, Check=0], Success=true, Query=partition=%E4%B8%89%E5%9B%BD&basepath=%2Fhudi%2FinsertHDFS%2Fggggggg&lastinstantts=20210122113859&timelinehash=1dd7dcd13f88921e2bf8ec650caa2f6f9c7c9b224d72d86a2e0a453368b72d9e, Host=windows:51997, synced=false
16274 [Executor task launch worker for task 32] ERROR org.apache.spark.executor.Executor - Exception in task 0.0 in stage 28.0 (TID 32)
java.lang.RuntimeException: java.net.URISyntaxException: Illegal character in path at index 46: hdfs://linux01:9000/hudi/insertHDFS/ggggggg/ä¸å½/eb4ddae6-9841-469b-9fed-c2375f13d616-0_2-21-28_20210122113859.parquet
at org.apache.hudi.common.table.timeline.dto.FilePathDTO.toPath(FilePathDTO.java:54)
at org.apache.hudi.common.table.timeline.dto.FileStatusDTO.toFileStatus(FileStatusDTO.java:102)
at org.apache.hudi.common.table.timeline.dto.BaseFileDTO.toHoodieBaseFile(BaseFileDTO.java:46)
at org.apache.hudi.common.table.timeline.dto.FileSliceDTO.toFileSlice(FileSliceDTO.java:58)
at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1374)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
at org.apache.hudi.table.compact.HoodieMergeOnReadTableCompactor.lambda$generateCompactionPlan$85ff16a$1(HoodieMergeOnReadTableCompactor.java:207)
at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$1$1.apply(JavaRDDLike.scala:125)
at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$1$1.apply(JavaRDDLike.scala:125)
at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:435)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:441)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:59)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:104)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:48)
at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:310)
at scala.collection.AbstractIterator.to(Iterator.scala:1334)
at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:302)
at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1334)
at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:289)
at scala.collection.AbstractIterator.toArray(Iterator.scala:1334)
at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$15.apply(RDD.scala:990)
at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$15.apply(RDD.scala:990)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2101)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2101)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:123)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.URISyntaxException: Illegal character in path at index 46: hdfs://linux01:9000/hudi/insertHDFS/ggggggg/ä¸å½/eb4ddae6-9841-469b-9fed-c2375f13d616-0_2-21-28_20210122113859.parquet
at java.net.URI$Parser.fail(URI.java:2848)
at java.net.URI$Parser.checkChars(URI.java:3021)
at java.net.URI$Parser.parseHierarchical(URI.java:3105)
at java.net.URI$Parser.parse(URI.java:3053)
at java.net.URI.<init>(URI.java:588)
at org.apache.hudi.common.table.timeline.dto.FilePathDTO.toPath(FilePathDTO.java:52)
... 38 more
16274 [Executor task launch worker for task 33] ERROR org.apache.spark.executor.Executor - Exception in task 1.0 in stage 28.0 (TID 33)
java.lang.RuntimeException: java.net.URISyntaxException: Illegal character in path at index 45: hdfs://linux01:9000/hudi/insertHDFS/ggggggg/æ¥ç§/38955f0a-4891-412d-aa3a-ff83cb85f81e-0_1-21-27_20210122113859.parquet
	... (stack trace identical to the previous task's)
The root cause was that my partition field contained Chinese characters.
So avoid Chinese characters in Hudi partition columns; if nothing else, pinyin always works!
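If non-ASCII partition values are unavoidable, one defensive option is to percent-encode them into pure ASCII before they become path segments. Notice that the timeline-service query in the log above already does this on the wire (partition=%E6%98%A5%E7%A7%8B is 春秋 percent-encoded); the problem appears on the decode side. Newer Hudi releases expose a write option for this (hoodie.datasource.write.partitionpath.urlencode), though availability depends on your version, so check before relying on it. A JDK-only sketch (requires Java 10+ for the Charset overload):

```java
import java.net.URLDecoder;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class SafePartitionValue {
    // Percent-encode a partition value so the resulting HDFS path segment
    // is pure ASCII and survives any charset round trip unchanged.
    static String encode(String value) {
        return URLEncoder.encode(value, StandardCharsets.UTF_8);
    }

    // Recover the original value when reading the path back.
    static String decode(String segment) {
        return URLDecoder.decode(segment, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        String encoded = encode("三国");
        System.out.println(encoded);          // %E4%B8%89%E5%9B%BD
        System.out.println(decode(encoded));  // 三国
    }
}
```

With this mapping applied to the partition column before the write, every directory name under the table base path stays ASCII-safe, at the cost of less readable paths when browsing HDFS.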