I. Error Message
Exception in thread "main" org.apache.spark.sql.AnalysisException: cannot resolve 'concat_ws(,,(hiveudaffunction(HiveFunctionWrapper(org.apache.hadoop.hive.ql.udf.generic.GenericUDAFCollectSet,org.apache.hadoop.hive.ql.udf.generic.GenericUDAFCollectSet@1208707b),SF_ID,false,0,0),mode=Complete,isDistinct=false))' due to data type mismatch: argument 2 requires (array<string> or string) type, however, 'hiveudaffunction(HiveFunctionWrapper(org.apache.hadoop.hive.ql.udf.generic.GenericUDAFCollectSet,org.apache.hadoop.hive.ql.udf.generic.GenericUDAFCollectSet@1208707b),SF_ID,false,0,0)' is of array<decimal(38,18)> type.; line 1 pos 72
at org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:42)
II. Analyzing the Error
1. Examining the SQL statement
select t1.ACTIVITY_ID,concat_ws(',',collect_set(t1.SF_ID)) as SF_IDS
from str_template_rel t1
group by t1.ACTIVITY_ID;
Connected to Spark with the DbVisualizer client (setup notes:
https://blog.csdn.net/u011817217/article/details/81673101
) and took the SQL statement apart piece by piece to narrow down the problem.
1) Is it a problem with the table data?
The activity_id values in the source table looked fine.
Some values in the sf_id column, however, were null, so the first suspicion was that the nulls were breaking concat_ws.
The SQL statement was amended accordingly, but it still failed with exactly the same error.
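The amended statement is a reconstruction here (the original post showed it only as a screenshot); per the text, all that changed was a null filter:

```sql
-- Sketch of the first attempted fix: filter out null sf_id values.
-- This cannot help, because the AnalysisException is about the element
-- type of the collected array (decimal(38,18)), not about null contents.
select t1.ACTIVITY_ID, concat_ws(',', collect_set(t1.SF_ID)) as SF_IDS
from str_template_rel t1
where t1.SF_ID is not null
group by t1.ACTIVITY_ID;
```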
2) Is concat_ws being used incorrectly?
Next, replace the table column inside collect_set with a fixed literal value instead:
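A minimal probe along those lines, reconstructed from the description (the exact query in the original post was an image), might be:

```sql
-- The unquoted literal 1 is numeric, so collect_set(1) yields a numeric
-- array, which reproduces the same data type mismatch in concat_ws.
select concat_ws(',', collect_set(1)) as test_ids
from str_template_rel;
```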
This time the error message yielded the key clue: due to data type mismatch ... argument 2 requires (array&lt;string&gt; or string) type.
That was the lightbulb moment: whatever collect_set collects must be of string type. Time to test that theory:
Sure enough, after wrapping the literal 1 in single quotes, the SQL ran without error:
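The follow-up test, again reconstructed from the description:

```sql
-- '1' is a string literal, so collect_set('1') yields array<string>,
-- which concat_ws accepts without complaint.
select concat_ws(',', collect_set('1')) as test_ids
from str_template_rel;
```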
So the problem lay with the type of the t1.SF_ID column.
3) The type of t1.SF_ID
Describing the str_template_rel table confirmed it:
sf_id is a numeric column, consistent with the array&lt;decimal(38,18)&gt; reported in the error message.
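The check itself looked roughly like this (the sf_id type is taken from the error message; the other column is an assumption, since the original output was a screenshot):

```sql
describe str_template_rel;
-- activity_id    string            (assumed; other columns elided)
-- sf_id          decimal(38,18)    -- numeric, not string
```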
III. Solution
Since the problem is the type of the sf_id column, the numeric value needs to be converted to a string.
The SQL statement was rewritten as:
select t1.ACTIVITY_ID,concat_ws(',',collect_set(cast(t1.SF_ID as string))) as SF_IDS
from str_template_rel t1
where t1.SF_ID is not null
group by t1.ACTIVITY_ID;
Perfectly solved!!!
Takeaway: concat_ws only accepts string or array&lt;string&gt; arguments, so the column passed to collect_set must be a string, or be cast to one.