前言Apache Hudi bootstrap源码简要走读,不了解Hudi bootstrap的可以参考:利用Hudi Bootstrap转化现有Hive表的parquet/orc文件为Hudi表版本Hudi 0.12.0Spark 2.4.4入口val bootstrapDF = spark.emptyDataFrame bootstrapDF.write. format("hudi"). options(extraOpts). option(DataSourceWriteOptions.OPERATION.key, DataSourceWriteOptions.BOOTSTRAP_OPERATION_OPT_VAL). ...... save(basePath)根据文章:Hudi Spark源码学习总结-df.write.format(“hudi”).save可知,save方法会走到DefaultSource.createRelationoverride def createRelation(sqlContext: SQLContext, mode: SaveMode, optParams: Map[String, String], df: DataFrame): BaseRelation = { val dfWithoutMetaCols = df.drop(HoodieRecord.HOODIE_META_COLUMNS.asScala:_*) if (optParams.get(OPERATION.key).contains(BOOTSTRAP_OPERATION_OPT_VAL)) { HoodieSparkSqlWriter.bootstrap(sqlContext, mode, optParams, dfWithoutMetaCols) } else { HoodieSparkSqlWriter.write(sqlContext, mode, optParams, dfWithoutMetaCols) new HoodieEmptyRelation(sqlContext, dfWithoutMetaCols.schema) }它会判断OPERATION是否为BOOTSTRAP_OPERATION_OPT_VAL,这里为true,所以会调用HoodieSparkSqlWriter.bootstrapHoodieSparkSqlWriter.bootstrap这里首先获取一些参数,然后判断表是否存在,如果不存在证明是第一次写,需要设置写一些配置参数,然后进行初始化:HoodieTableMetaClient.initTable,接着调用writeClient.bootstrapdef bootstrap(sqlContext: SQLContext, mode: SaveMode, optParams: Map[String, String], df: DataFrame, hoodieTableConfigOpt: Option[HoodieTableConfig] = Option.empty, hoodieWriteClient: Option[SparkRDDWriteClient[HoodieRecordPayload[Nothing]]] = Option.empty): Boolean = { assert(optParams.get("path").exists(!StringUtils.isNullOrEmpty(_)), "'path' must be set") val path = optParams("path") val basePath = new Path(path) val sparkContext = sqlContext.sparkContext val fs = basePath.getFileSystem(sparkContext.hadoopConfiguration) tableExists = fs.exists(new Path(basePath, HoodieTableMetaClient.METAFOLDER_NAME)) val tableConfig = getHoodieTableConfig(sparkContext, path, hoodieTableConfigOpt) validateTableConfig(sqlContext.sparkSession, optParams, tableConfig, mode == SaveMode.Overwrite) val (parameters, hoodieConfig) = mergeParamsAndGetHoodieConfig(optParams, tableConfig, mode) val tableName = hoodieConfig.getStringOrThrow(HoodieWriteConfig.TBL_NAME, s"'${HoodieWriteConfig.TBL_NAME.key}' must be set.") val tableType = hoodieConfig.getStringOrDefault(TABLE_TYPE) val bootstrapBasePath = hoodieConfig.getStringOrThrow(BASE_PATH, s"'${BASE_PATH.key}' is required for '${BOOTSTRAP_OPERATION_OPT_VAL}'" + " operation'") val bootstrapIndexClass = hoodieConfig.getStringOrDefault(INDEX_CLASS_NAME) var schema: String = null if (df.schema.nonEmpty) { val (structName, namespace) = AvroConversionUtils.getAvroRecordNameAndNamespace(tableName) schema = AvroConversionUtils.convertStructTypeToAvroSchema(df.schema, structName, namespace).toString } else { schema = HoodieAvroUtils.getNullSchema.toString if (mode == SaveMode.Ignore && tableExists) { log.warn(s"hoodie table at $basePath already exists. 
Ignoring & not performing actual writes.") if (!hoodieWriteClient.isEmpty) { hoodieWriteClient.get.close() false } else { // Handle various save modes handleSaveModes(sqlContext.sparkSession, mode, basePath, tableConfig, tableName, WriteOperationType.BOOTSTRAP, fs) if (!tableExists) { // 表如果不存在 val archiveLogFolder = hoodieConfig.getStringOrDefault(HoodieTableConfig.ARCHIVELOG_FOLDER) val partitionColumns = HoodieWriterUtils.getPartitionColumns(parameters) val recordKeyFields = hoodieConfig.getString(DataSourceWriteOptions.RECORDKEY_FIELD) val keyGenProp = hoodieConfig.getString(HoodieTableConfig.KEY_GENERATOR_CLASS_NAME) val populateMetaFields = java.lang.Boolean.parseBoolean(parameters.getOrElse( HoodieTableConfig.POPULATE_META_FIELDS.key(), String.valueOf(HoodieTableConfig.POPULATE_META_FIELDS.defaultValue()) val baseFileFormat = hoodieConfig.getStringOrDefault(HoodieTableConfig.BASE_FILE_FORMAT) val useBaseFormatMetaFile = java.lang.Boolean.parseBoolean(parameters.getOrElse( HoodieTableConfig.PARTITION_METAFILE_USE_BASE_FORMAT.key(), String.valueOf(HoodieTableConfig.PARTITION_METAFILE_USE_BASE_FORMAT.defaultValue()) // 进行一些配置后,初始化Hudi表 HoodieTableMetaClient.withPropertyBuilder() .setTableType(HoodieTableType.valueOf(tableType)) .setTableName(tableName) .setRecordKeyFields(recordKeyFields) .setArchiveLogFolder(archiveLogFolder) .setPayloadClassName(hoodieConfig.getStringOrDefault(PAYLOAD_CLASS_NAME)) .setPreCombineField(hoodieConfig.getStringOrDefault(PRECOMBINE_FIELD, null)) .setBootstrapIndexClass(bootstrapIndexClass) .setBaseFileFormat(baseFileFormat) .setBootstrapBasePath(bootstrapBasePath) .setPartitionFields(partitionColumns) .setPopulateMetaFields(populateMetaFields) .setKeyGeneratorClassProp(keyGenProp) .setHiveStylePartitioningEnable(hoodieConfig.getBoolean(HIVE_STYLE_PARTITIONING)) .setUrlEncodePartitioning(hoodieConfig.getBoolean(URL_ENCODE_PARTITIONING)) .setCommitTimezone(HoodieTimelineTimeZone.valueOf(hoodieConfig.getStringOrDefault(HoodieTableConfig.TIMELINE_TIMEZONE))) .setPartitionMetafileUseBaseFormat(useBaseFormatMetaFile) .initTable(sparkContext.hadoopConfiguration, path) val jsc = new JavaSparkContext(sqlContext.sparkContext) val writeClient = hoodieWriteClient.getOrElse(DataSourceUtils.createHoodieClient(jsc, schema, path, tableName, mapAsJavaMap(parameters))) try { writeClient.bootstrap(org.apache.hudi.common.util.Option.empty()) } finally { writeClient.close() val metaSyncSuccess = metaSync(sqlContext.sparkSession, hoodieConfig, basePath, df.schema) metaSyncSuccess }writeClient.bootstrap这里的writeClient为SparkRDDWriteClient,然后调用HoodieTable的bootstrap,我们这里使用表类型为COW,所以为HoodieSparkCopyOnWriteTableinitTable(WriteOperationType.UPSERT, Option.ofNullable(HoodieTimeline.METADATA_BOOTSTRAP_INSTANT_TS)).bootstrap(context, extraMetadata); public static <T extends HoodieRecordPayload> HoodieSparkTable<T> create(HoodieWriteConfig config, HoodieSparkEngineContext context, HoodieTableMetaClient metaClient) { HoodieSparkTable<T> hoodieSparkTable; switch (metaClient.getTableType()) { case COPY_ON_WRITE: hoodieSparkTable = new HoodieSparkCopyOnWriteTable<>(config, context, metaClient); break; case MERGE_ON_READ: hoodieSparkTable = new HoodieSparkMergeOnReadTable<>(config, context, metaClient); break; default: throw new HoodieException("Unsupported table type :" + metaClient.getTableType()); return hoodieSparkTable; }HoodieSparkCopyOnWriteTable.bootstrappublic HoodieBootstrapWriteMetadata<HoodieData<WriteStatus>> bootstrap(HoodieEngineContext context, Option<Map<String, String>> 
extraMetadata) { return new SparkBootstrapCommitActionExecutor((HoodieSparkEngineContext) context, config, this, extraMetadata).execute(); }SparkBootstrapCommitActionExecutor.execute这里首先通过listAndProcessSourcePartitions返回mode和对应的分区,其中mode有两种METADATA_ONLY和FULL_RECORD,然后对于METADATA_ONLY对应的分区路径执行metadataBootstrap,FULL_RECORD对应的分区路径执行fullBootstrap,从这里可以看出两点:1、通过listAndProcessSourcePartitions返回的mode值判断是进行METADATA_ONLY还是FULL_RECORD 2、具体的逻辑分别在metadataBootstrap,fullBootstrap。那么我们分别来看一下,首先看一下listAndProcessSourcePartitions是如何分会mode的@Override public HoodieBootstrapWriteMetadata<HoodieData<WriteStatus>> execute() { validate(); try { HoodieTableMetaClient metaClient = table.getMetaClient(); Option<HoodieInstant> completedInstant = metaClient.getActiveTimeline().getCommitsTimeline().filterCompletedInstants().lastInstant(); ValidationUtils.checkArgument(!completedInstant.isPresent(), "Active Timeline is expected to be empty for bootstrap to be performed. " + "If you want to re-bootstrap, please rollback bootstrap first !!"); // 返回 mode和对应的分区,其中mode有两种METADATA_ONLY和FULL_RECORD Map<BootstrapMode, List<Pair<String, List<HoodieFileStatus>>>> partitionSelections = listAndProcessSourcePartitions(); // First run metadata bootstrap which will auto commit // 首先运行metadataBootstrap,如果partitionSelections中有METADATA_ONLY则继续执行metadataBootstrap的逻辑,没有的话,什么都不执行,直接返回 Option<HoodieWriteMetadata<HoodieData<WriteStatus>>> metadataResult = metadataBootstrap(partitionSelections.get(BootstrapMode.METADATA_ONLY)); // if there are full bootstrap to be performed, perform that too // 然后运行fullBootstrap,如果partitionSelections中有FULL_RECORD则继续执行fullBootstrap的逻辑,没有的话,什么都不执行,直接返回 Option<HoodieWriteMetadata<HoodieData<WriteStatus>>> fullBootstrapResult = fullBootstrap(partitionSelections.get(BootstrapMode.FULL_RECORD)); // Delete the marker directory for the instant WriteMarkersFactory.get(config.getMarkersType(), table, instantTime) .quietDeleteMarkerDir(context, config.getMarkersDeleteParallelism()); return new HoodieBootstrapWriteMetadata(metadataResult, fullBootstrapResult); } catch (IOException ioe) { throw new HoodieIOException(ioe.getMessage(), ioe); }listAndProcessSourcePartitions这里的主要实现是selector.select,这里的select是通过MODE_SELECTOR_CLASS_NAME(hoodie.bootstrap.mode.selector)配置的,默认值为MetadataOnlyBootstrapModeSelector,我们的例子中FULL_RECORD设置的为FullRecordBootstrapModeSelector,让我们分别看一下他们具体的实现private Map<BootstrapMode, List<Pair<String, List<HoodieFileStatus>>>> listAndProcessSourcePartitions() throws IOException { List<Pair<String, List<HoodieFileStatus>>> folders = BootstrapUtils.getAllLeafFoldersWithFiles( table.getMetaClient(), bootstrapSourceFileSystem, config.getBootstrapSourceBasePath(), context); LOG.info("Fetching Bootstrap Schema !!"); HoodieBootstrapSchemaProvider sourceSchemaProvider = new HoodieSparkBootstrapSchemaProvider(config); bootstrapSchema = sourceSchemaProvider.getBootstrapSchema(context, folders).toString(); LOG.info("Bootstrap Schema :" + bootstrapSchema); BootstrapModeSelector selector = (BootstrapModeSelector) ReflectionUtils.loadClass(config.getBootstrapModeSelectorClass(), config); Map<BootstrapMode, List<String>> result = selector.select(folders); Map<String, List<HoodieFileStatus>> partitionToFiles = folders.stream().collect( Collectors.toMap(Pair::getKey, Pair::getValue)); // Ensure all partitions are accounted for ValidationUtils.checkArgument(partitionToFiles.keySet().equals( result.values().stream().flatMap(Collection::stream).collect(Collectors.toSet()))); return result.entrySet().stream().map(e -> 
Pair.of(e.getKey(), e.getValue().stream() .map(p -> Pair.of(p, partitionToFiles.get(p))).collect(Collectors.toList()))) .collect(Collectors.toMap(Pair::getKey, Pair::getValue)); }selector.selectMetadataOnlyBootstrapModeSelector和FullRecordBootstrapModeSelector都是UniformBootstrapModeSelector的子类,区别是bootstrapMode不一样,它们的select方法是在父类UniformBootstrapModeSelector实现的public class MetadataOnlyBootstrapModeSelector extends UniformBootstrapModeSelector { public MetadataOnlyBootstrapModeSelector(HoodieWriteConfig bootstrapConfig) { super(bootstrapConfig, BootstrapMode.METADATA_ONLY); public class FullRecordBootstrapModeSelector extends UniformBootstrapModeSelector { public FullRecordBootstrapModeSelector(HoodieWriteConfig bootstrapConfig) { super(bootstrapConfig, BootstrapMode.FULL_RECORD); }UniformBootstrapModeSelector.select很显然上面的mode的返回值和bootstrapMode是对应的,所以当MODE_SELECTOR_CLASS_NAME为MetadataOnlyBootstrapModeSelector和FullRecordBootstrapModeSelector时,他们的mode值是唯一的,要么执行metdata的逻辑要么执行full的逻辑,那么有没有两种模式同时会运行的情况呢,答案是有的。public Map<BootstrapMode, List<String>> select(List<Pair<String, List<HoodieFileStatus>>> partitions) { return partitions.stream().map(p -> Pair.of(bootstrapMode, p)) .collect(Collectors.groupingBy(Pair::getKey, Collectors.mapping(x -> x.getValue().getKey(), Collectors.toList()))); }BootstrapRegexModeSelectorBootstrapRegexModeSelector我们在之前的文章中讲过:首先有配置:hoodie.bootstrap.mode.selector.regex.mode 默认值METADATA_ONLY、hoodie.bootstrap.mode.selector.regex默认值.*但是如果不是默认值的话,比如上面的2020/08/2[0-9],假设我们有分区”2020/08/10,2020/08/10/11,2020/08/20,2020/08/21”,那么匹配成功的2020/08/20和2020/08/21对应的类型为METADATA_ONLY,匹配不成功的2020/08/10和2020/08/10/11则为FULL_RECORD。而至于我的为啥都是FULL_RECORD,原因是regex设置错误,我设置的是2022/10/0[0-9],但实际的分区值为2022-10-08和2022-10-09(分隔符不一样),而如果用默认的.*的话,则全部能匹配上,也就都是METADATA_ONLY(默认情况)public BootstrapRegexModeSelector(HoodieWriteConfig writeConfig) { super(writeConfig); this.pattern = Pattern.compile(writeConfig.getBootstrapModeSelectorRegex()); this.bootstrapModeOnMatch = writeConfig.getBootstrapModeForRegexMatch(); // defaultMode和bootstrapModeOnMatch对立 this.defaultMode = BootstrapMode.FULL_RECORD.equals(bootstrapModeOnMatch) ? BootstrapMode.METADATA_ONLY : BootstrapMode.FULL_RECORD; LOG.info("Default Mode :" + defaultMode + ", on Match Mode :" + bootstrapModeOnMatch); @Override public Map<BootstrapMode, List<String>> select(List<Pair<String, List<HoodieFileStatus>>> partitions) { return partitions.stream() // 匹配上的话,值为bootstrapModeOnMatch,默认为METADATA_ONLY,否则为defaultMode,也就是另外一种类型`FULL_RECORD` // bootstrapModeOnMatch 和 defaultMode是对立的 .map(p -> Pair.of(pattern.matcher(p.getKey()).matches() ? 
bootstrapModeOnMatch : defaultMode, p.getKey())) .collect(Collectors.groupingBy(Pair::getKey, Collectors.mapping(Pair::getValue, Collectors.toList()))); }关于BootstrapModeSelector的实现一共只有上面讲的这三种,下面让我们来看一下metadataBootstrap,fullBootstrapmetadataBootstrap这里首先创建keyGenerator,然后获取bootstrapPaths,核心逻辑在于后面的getMetadataHandler(config, table, partitionFsPair.getRight().getRight()).runMetadataBootstrap,其中getMetadataHandler我们在之前的文章中讲过了,根据文件类型返回ParquetBootstrapMetadataHandler或者OrcBootstrapMetadataHandler,我们这里返回ParquetBootstrapMetadataHandlerprivate HoodieData<BootstrapWriteStatus> runMetadataBootstrap(List<Pair<String, List<HoodieFileStatus>>> partitions) { if (null == partitions || partitions.isEmpty()) { return context.emptyHoodieData(); TypedProperties properties = new TypedProperties(); properties.putAll(config.getProps()); KeyGeneratorInterface keyGenerator; try { keyGenerator = HoodieSparkKeyGeneratorFactory.createKeyGenerator(properties); } catch (IOException e) { throw new HoodieKeyGeneratorException("Init keyGenerator failed ", e); BootstrapPartitionPathTranslator translator = (BootstrapPartitionPathTranslator) ReflectionUtils.loadClass( config.getBootstrapPartitionPathTranslatorClass(), properties); List<Pair<String, Pair<String, HoodieFileStatus>>> bootstrapPaths = partitions.stream() .flatMap(p -> { String translatedPartitionPath = translator.getBootstrapTranslatedPath(p.getKey()); return p.getValue().stream().map(f -> Pair.of(p.getKey(), Pair.of(translatedPartitionPath, f))); .collect(Collectors.toList()); context.setJobStatus(this.getClass().getSimpleName(), "Bootstrap metadata table: " + config.getTableName()); return context.parallelize(bootstrapPaths, config.getBootstrapParallelism()) .map(partitionFsPair -> getMetadataHandler(config, table, partitionFsPair.getRight().getRight()).runMetadataBootstrap(partitionFsPair.getLeft(), partitionFsPair.getRight().getLeft(), keyGenerator)); }ParquetBootstrapMetadataHandler.runMetadataBootstrapParquetBootstrapMetadataHandler的runMetadataBootstrap是在其父类BaseBootstrapMetadataHandler中实现的,这里的核心逻辑在executeBootstrappublic BootstrapWriteStatus runMetadataBootstrap(String srcPartitionPath, String partitionPath, KeyGeneratorInterface keyGenerator) { Path sourceFilePath = FileStatusUtils.toPath(srcFileStatus.getPath()); HoodieBootstrapHandle<?, ?, ?, ?> bootstrapHandle = new HoodieBootstrapHandle(config, HoodieTimeline.METADATA_BOOTSTRAP_INSTANT_TS, table, partitionPath, FSUtils.createNewFileIdPfx(), table.getTaskContextSupplier()); try { Schema avroSchema = getAvroSchema(sourceFilePath); List<String> recordKeyColumns = keyGenerator.getRecordKeyFieldNames().stream() .map(HoodieAvroUtils::getRootLevelFieldName) .collect(Collectors.toList()); Schema recordKeySchema = HoodieAvroUtils.generateProjectionSchema(avroSchema, recordKeyColumns); LOG.info("Schema to be used for reading record Keys :" + recordKeySchema); AvroReadSupport.setAvroReadSchema(table.getHadoopConf(), recordKeySchema); AvroReadSupport.setRequestedProjection(table.getHadoopConf(), recordKeySchema); executeBootstrap(bootstrapHandle, sourceFilePath, keyGenerator, partitionPath, avroSchema); } catch (Exception e) { throw new HoodieException(e.getMessage(), e); BootstrapWriteStatus writeStatus = (BootstrapWriteStatus) bootstrapHandle.writeStatuses().get(0); BootstrapFileMapping bootstrapFileMapping = new BootstrapFileMapping( config.getBootstrapSourceBasePath(), srcPartitionPath, partitionPath, srcFileStatus, writeStatus.getFileId()); writeStatus.setBootstrapSourceFileMapping(bootstrapFileMapping); return 
writeStatus; }executeBootstrapexecuteBootstrap在ParquetBootstrapMetadataHandler,首先创建一个ParquetReader,然后将reader封装成ParquetReaderIterator,作为BoundedInMemoryExecutor的参数构造wrapper,然后执行wrapper.execute()void executeBootstrap(HoodieBootstrapHandle<?, ?, ?, ?> bootstrapHandle, Path sourceFilePath, KeyGeneratorInterface keyGenerator, String partitionPath, Schema avroSchema) throws Exception { BoundedInMemoryExecutor<GenericRecord, HoodieRecord, Void> wrapper = null; ParquetReader<IndexedRecord> reader = AvroParquetReader.<IndexedRecord>builder(sourceFilePath).withConf(table.getHadoopConf()).build(); try { wrapper = new BoundedInMemoryExecutor<GenericRecord, HoodieRecord, Void>(config.getWriteBufferLimitBytes(), new ParquetReaderIterator(reader), new BootstrapRecordConsumer(bootstrapHandle), inp -> { String recKey = keyGenerator.getKey(inp).getRecordKey(); GenericRecord gr = new GenericData.Record(HoodieAvroUtils.RECORD_KEY_SCHEMA); gr.put(HoodieRecord.RECORD_KEY_METADATA_FIELD, recKey); BootstrapRecordPayload payload = new BootstrapRecordPayload(gr); HoodieRecord rec = new HoodieAvroRecord(new HoodieKey(recKey, partitionPath), payload); return rec; }, table.getPreExecuteRunnable()); wrapper.execute(); } catch (Exception e) { throw new HoodieException(e); } finally { reader.close(); if (null != wrapper) { wrapper.shutdownNow(); wrapper.awaitTermination(); bootstrapHandle.close(); }wrapper.execute()这里是一个生产者-消费者模型,可以参考生产者-消费者模型在Hudi中的应用public E execute() { try { startProducers(); Future<E> future = startConsumer(); // Wait for consumer to be done return future.get(); } catch (InterruptedException ie) { shutdownNow(); Thread.currentThread().interrupt(); throw new HoodieException(ie); } catch (Exception e) { throw new HoodieException(e);
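The walkthrough above starts from a plain `write.format("hudi")` call with the BOOTSTRAP operation, and the bootstrap mode is chosen purely through configuration. As a minimal, hedged sketch of how those pieces fit together from the caller's side — the paths, table name and field names below are placeholders, and the config keys/selector class are the ones quoted above (Hudi 0.12.0) — a bootstrap write using BootstrapRegexModeSelector might look roughly like this:

```scala
import org.apache.hudi.DataSourceWriteOptions
import org.apache.spark.sql.{SaveMode, SparkSession}

val spark = SparkSession.builder().appName("hudi-bootstrap-sketch").getOrCreate()

// Placeholders: the existing parquet table to bootstrap from, and the target Hudi base path
val sourceBasePath = "/warehouse/hive/db/existing_parquet_table"
val hudiBasePath   = "/warehouse/hudi/bootstrap_table"

spark.emptyDataFrame.write.format("hudi").
  option(DataSourceWriteOptions.OPERATION.key, DataSourceWriteOptions.BOOTSTRAP_OPERATION_OPT_VAL).
  option("hoodie.table.name", "bootstrap_table").
  option(DataSourceWriteOptions.RECORDKEY_FIELD.key, "id").
  option(DataSourceWriteOptions.PARTITIONPATH_FIELD.key, "dt").
  // where the existing parquet/orc files live (BASE_PATH in the code above)
  option("hoodie.bootstrap.base.path", sourceBasePath).
  // MODE_SELECTOR_CLASS_NAME: partitions matching the regex get regex.mode (METADATA_ONLY here),
  // all other partitions get the opposite mode (FULL_RECORD), as in BootstrapRegexModeSelector above
  option("hoodie.bootstrap.mode.selector",
    "org.apache.hudi.client.bootstrap.selector.BootstrapRegexModeSelector").
  option("hoodie.bootstrap.mode.selector.regex", "2022-10-0[0-9]").  // must match the actual partition path format
  option("hoodie.bootstrap.mode.selector.regex.mode", "METADATA_ONLY").
  mode(SaveMode.Overwrite).
  save(hudiBasePath)
```

With MetadataOnlyBootstrapModeSelector or FullRecordBootstrapModeSelector configured instead, the regex options are irrelevant and every partition gets the single corresponding mode, as the UniformBootstrapModeSelector.select implementation above shows.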
前言首先安装好Ceph,可以参考我上篇文章Ceph分布式集群安装配置版本spark: 2.4.5hadoop: hdp版本 3.1.1.3.1.0.0-78spark-shell读写S3jar包配置hadoop-aws-3.1.1.3.1.0.0-78.jar 注意版本要和hadoop版本对应 aws-java-sdk-s3-1.12.22.jar aws-java-sdk-core-1.12.22.jar aws-java-sdk-dynamodb-1.12.22.jar 可能还需要: hadoop-client-api-3.1.1.3.1.0.0-78.jar hadoop-client-runtime-3.1.1.3.1.0.0-78.jar将上面的jar包拷贝到$SPARK_HOME/jarsspark-shell读S3CMD创建测试文件创建Buckets3cmd mb s3://txt Bucket 's3://txt/' created 本地生成测试txtvi test.txt 4将test.txt上传到s3://txts3cmd put test.txt s3://txt upload: 'test.txt' -> 's3://txt/test.txt' [1 of 1] 8 of 8 100% in 0s 45.82 B/s done代码spark.sparkContext.hadoopConfiguration.set("fs.s3a.access.key", "XLSMNHY9Z4ML094IBGOY") spark.sparkContext.hadoopConfiguration.set("fs.s3a.secret.key", "iGn9bmqKAArUqiMIohYDmF3WSPi0YAyVO3J9WnxZ") spark.sparkContext.hadoopConfiguration.set("fs.s3a.endpoint", "10.110.105.162:7480") spark.sparkContext.hadoopConfiguration.set("fs.s3a.connection.ssl.enabled", "false") val rdd = spark.read.text("s3a://txt/test.txt") rdd.count rdd.foreach(println(_))结果scala> rdd.count res4: Long = 4 scala> rdd.foreach(println(_)) [4]spark-shell写S3CMD创建用于写的BUCKETs3cmd mb s3://test-s3-write Bucket 's3://test-s3-write/' created 代码spark.sparkContext.hadoopConfiguration.set("fs.s3a.access.key", "XLSMNHY9Z4ML094IBGOY") spark.sparkContext.hadoopConfiguration.set("fs.s3a.secret.key", "iGn9bmqKAArUqiMIohYDmF3WSPi0YAyVO3J9WnxZ") spark.sparkContext.hadoopConfiguration.set("fs.s3a.endpoint", "10.110.105.162:7480") spark.sparkContext.hadoopConfiguration.set("fs.s3a.connection.ssl.enabled", "false") import spark.implicits._ val df = Seq((1, "a1", 10, 1000, "2022-09-27")).toDF("id", "name", "value", "ts", "dt") df.write.mode("overwrite").parquet("s3a://test-s3-write/test_df") spark.read.parquet("s3a://test-s3-write/test_df").show验证我们在上面的代码里已经通过读来验证了一次spark.read.parquet("s3a://test-s3-write/test_df").show +---+----+-----+----+----------+ | id|name|value| ts| dt| +---+----+-----+----+----------+ | 1| a1| 10|1000|2022-09-27| +---+----+-----+----+----------+接下来再用s3cmd命令验证对应的s3路径下是否有对应的我们用代码写的parquet文件s3cmd ls s3://test-s3-write/test_df/ 2022-09-28 07:35 0 s3://test-s3-write/test_df/_SUCCESS 2022-09-28 07:35 1222 s3://test-s3-write/test_df/part-00000-f1f23a4e-bb07-424c-853e-59281af2920c-c000.snappy.parquetIDEA Spark代码读写S3pom依赖<dependency> <groupId>org.apache.hadoop</groupId> <artifactId>hadoop-aws</artifactId> <version>3.1.1.3.1.0.0-78</version> </dependency> <dependency> <groupId>com.amazonaws</groupId> <artifactId>aws-java-sdk-core</artifactId> <version>1.12.22</version> </dependency> <dependency> <groupId>com.amazonaws</groupId> <artifactId>aws-java-sdk-s3</artifactId> <version>1.12.22</version> </dependency> <dependency> <groupId>com.amazonaws</groupId> <artifactId>aws-java-sdk-dynamodb</artifactId> <version>1.12.22</version> </dependency>代码import org.apache.spark.sql.SparkSession object SparkS3Demo { def main(args: Array[String]): Unit = { val spark = SparkSession.builder(). master("local[*]"). appName("SparkS3Demo"). 
getOrCreate() spark.sparkContext.hadoopConfiguration.set("fs.s3a.access.key", "XLSMNHY9Z4ML094IBGOY") spark.sparkContext.hadoopConfiguration.set("fs.s3a.secret.key", "iGn9bmqKAArUqiMIohYDmF3WSPi0YAyVO3J9WnxZ") spark.sparkContext.hadoopConfiguration.set("fs.s3a.endpoint", "10.110.105.162:7480") spark.sparkContext.hadoopConfiguration.set("fs.s3a.connection.ssl.enabled", "false") testReadTxt(spark) testWriteAndReadParquet(spark) spark.stop() * 测试Spark读S3txt * 需要先创建Bucket s3cmd mb s3://txt * 再上传test.txt s3cmd put test.txt s3://txt * 最后用Spark读 def testReadTxt(spark: SparkSession) = { val rdd = spark.read.text("s3a://txt/test.txt") println(rdd.count) rdd.foreach(println(_)) * s3cmd mb s3://test-s3-write * 需要先创建Bucket def testWriteAndReadParquet(spark: SparkSession) = { import spark.implicits._ val df = Seq((1, "a1", 10, 1000, "2022-09-27")).toDF("id", "name", "value", "ts", "dt") df.write.mode("overwrite").parquet("s3a://test-s3-write/test_df") spark.read.parquet("s3a://test-s3-write/test_df").show }完整代码完整代码已上传到GitHub,有需要的同学可自行下载:https://github.com/dongkelun/S3_Demos3,s3a,s3n 的区别引用自https://m.imooc.com/wenda/detail/606708S3本机文件系统(URI方案:s3n)用于在S3上读写常规文件的本机文件系统。该文件系统的优点是您可以访问S3上用其他工具编写的文件。相反,其他工具可以访问使用Hadoop编写的文件。缺点是S3施加的文件大小限制为5GB。S3A(URI方案:s3a)S3a:系统是S3本机s3n fs的后继产品,它使用Amazon的库与S3进行交互。这使S3a支持更大的文件(没有更多的5GB限制),更高性能的操作等等。该文件系统旨在替代S3本机/替代S3本机:从s3n:// URL访问的所有对象也应该仅通过替换URL架构就可以从s3a访问。S3块文件系统(URI方案:s3)由S3支持的基于块的文件系统。文件存储为块,就像它们在HDFS中一样。这样可以有效地执行重命名。此文件系统要求您为文件系统专用存储桶-您不应使用包含文件的现有存储桶,也不应将其他文件写入同一存储桶。该文件系统存储的文件可以大于5GB,但不能与其他S3工具互操作。异常解决缺jar包异常:java.lang.RuntimeException: java.lang.ClassNotFoundException: Class org.apache.hadoop.fs.s3a.S3AFileSystem not found缺包:hadoop-aws-3.1.1.3.1.0.0-78.jar异常:java.lang.NoClassDefFoundError: com/amazonaws/services/s3/model/MultiObjectDeleteException缺包:aws-java-sdk-s3-1.12.22.jar异常:java.lang.NoClassDefFoundError: com/amazonaws/AmazonServiceExceptionjava.lang.NoClassDefFoundError: com/amazonaws/SdkBaseException缺包:aws-java-sdk-core-1.12.22.jar异常:java.lang.NoClassDefFoundError: com/amazonaws/services/dynamodbv2/model/AmazonDynamoDBException缺包:aws-java-sdk-dynamodb-1.12.22.jar异常:java.lang.NoClassDefFoundError: org/apache/hadoop/fs/statistics/IOStatisticsSource缺包:hadoop-client-api-3.1.1.3.1.0.0-78.jar异常:java.lang.NoClassDefFoundError: org/apache/hadoop/thirdparty/com/google/common/base/Preconditions缺包:hadoop-client-runtime-3.1.1.3.1.0.0-78Status Code: 403; Error Code: 403 Forbiddenjava.nio.file.AccessDeniedException: s3a://txt/test.txt: getFileStatus on s3a://txt/test.txt: com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden (Service: Amazon S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: 7WGTHV5104XV9QG1; S3 Extended Request ID: foP4XEGSFN258IhbdV8NolM8Rmn8pESxAIK8LCwxFWxjL3Bd5Cm+kJJSjOODxeQ2cnTnqbxaXjg=; Proxy: null), S3 Extended Request ID: foP4XEGSFN258IhbdV8NolM8Rmn8pESxAIK8LCwxFWxjL3Bd5Cm+kJJSjOODxeQ2cnTnqbxaXjg=:403 Forbidden ... 49 elided Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden (Service: Amazon S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: 7WGTHV5104XV9QG1; S3 Extended Request ID: foP4XEGSFN258IhbdV8NolM8Rmn8pESxAIK8LCwxFWxjL3Bd5Cm+kJJSjOODxeQ2cnTnqbxaXjg=; Proxy: null) ... 
66 more

The cause is an incorrect value for fs.s3a.endpoint. Many articles online configure it as s3.cn-north-1.amazonaws.com.cn (for example https://blog.csdn.net/zhouyan8603/article/details/77640643), which easily leads newcomers into a pit. The correct value here is ip:7480 (the RGW port); pointing it at the Nginx proxy's 8080 port did not work either, for reasons unknown. Also note that once a wrong endpoint has been set in a spark-shell session, re-setting the value in the same session still produces the same error; you have to exit spark-shell and set it again in a fresh session.

No AWS Credentials provided by BasicAWSCredentialsProvider

22/09/29 19:22:27 WARN InstanceMetadataServiceResourceFetcher: Fail to retrieve token com.amazonaws.SdkClientException: Failed to connect to service endpoint: org.apache.hadoop.fs.s3a.AWSClientIOException: doesBucketExist on txt: com.amazonaws.AmazonClientException: No AWS Credentials provided by BasicAWSCredentialsProvider EnvironmentVariableCredentialsProvider InstanceProfileCredentialsProvider : com.amazonaws.SdkClientException: Failed to connect to service endpoint: : No AWS Credentials provided by BasicAWSCredentialsProvider EnvironmentVariableCredentialsProvider InstanceProfileCredentialsProvider : com.amazonaws.SdkClientException: Failed to connect to service endpoint:

This happens when fs.s3a.access.key and fs.s3a.secret.key are not set correctly. Some articles spell the keys as spark.hadoop.fs.s3a.access.key and spark.hadoop.fs.s3a.secret.key, i.e. every key gets a spark.hadoop prefix. In my tests this does not take effect; none of the following work:

spark.sparkContext.hadoopConfiguration.set("spark.hadoop.fs.s3a.access.key", "XLSMNHY9Z4ML094IBGOY")
spark.conf.set("fs.s3a.access.key", "XLSMNHY9Z4ML094IBGOY")
spark.conf.set("spark.hadoop.fs.s3a.access.key", "XLSMNHY9Z4ML094IBGOY")

Those are the main problems I ran into and how I resolved them. If you follow the working configuration summarized above from the start, you should not hit them at all; I am recording them here as a memo and as troubleshooting hints for anyone who hits the same errors, since there is very little material about this online.

Summary

This article summarized the configuration and code examples for reading and writing Ceph S3 files with Spark, together with solutions to a few exceptions. I hope it helps.
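To keep the working settings in one place, here is a small consolidated sketch of the configuration that worked above — the access key, secret key and endpoint are placeholders for your own RGW user and gateway address (ip:port, plain HTTP):

```scala
import org.apache.spark.sql.SparkSession

// Sketch: apply the S3A settings shown above to an existing SparkSession.
// Credentials and endpoint are placeholders -- use your own RGW user and gateway (ip:port, no scheme).
def configureCephS3(spark: SparkSession, accessKey: String, secretKey: String, endpoint: String): Unit = {
  val hc = spark.sparkContext.hadoopConfiguration
  hc.set("fs.s3a.access.key", accessKey)
  hc.set("fs.s3a.secret.key", secretKey)
  hc.set("fs.s3a.endpoint", endpoint)               // e.g. "10.110.105.162:7480" -- the RGW port, not the Nginx proxy
  hc.set("fs.s3a.connection.ssl.enabled", "false")  // the RGW here is plain HTTP
}

val spark = SparkSession.builder().master("local[*]").appName("CephS3Sketch").getOrCreate()
configureCephS3(spark, "YOUR_ACCESS_KEY", "YOUR_SECRET_KEY", "10.110.105.162:7480")
spark.read.text("s3a://txt/test.txt").show()
```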
4. 创建RBD块存储4.1 创建pool在ceph1节点上执行如下命令创建pool:https://dongkelun.com/2022/09/29/cephInstallConf/第一个64代表设置的pg数量,第二个64代表设置的pgp数量使用如下命令查看当前已有的pool:[root@ceph1 ~]# ceph osd lspools 1 rbd查看指定pool中的pg和pgp数量:[root@ceph1 ~]# ceph osd pool get rbd pg_num pg_num: 64 [root@ceph1 ~]# ceph osd pool get rbd pgp_num pgp_num: 64查看指定 pool 中的副本数(副本数默认为3):[root@ceph2 ~]# ceph osd pool get rbd size size: 3查看指定 pool 的调度算法(默认为replicated_rule):[root@ceph1 ~]# ceph osd pool get rbd crush_rule crush_rule: replicated_rule调整指定pool的pg和pgp数量:ceph osd pool set rbd pg_num 128 ceph osd pool set rbd pgp_num 128 调整指定pool的副本数:ceph osd pool set rbd size 2 一般来说,创建pool后,需要对这个pool进行初始化,例如用于rbd块存储的pool使用rbd > > pool init命令就可以将指定pool初始化为rbd类型的application。如果不进行这个初始化的> 操作,不会影响存储的使用,但是会在集群信息中显示报警信息。5. 创建RGW对象存储5.1. 创建RGW在ceph1节点的/usr/local/ceph-cluster目录下执行如下命令创建RGW:ceph-deploy rgw create ceph1 ceph2 ceph3执行完成后查看集群信息,可以看到已经启用了三个RGW:[root@ceph1 ceph-cluster]# ceph -s cluster: id: 38ea5f1d-d0bf-447e-9ef4-34def8e8db78 health: HEALTH_OK services: mon: 3 daemons, quorum ceph1,ceph2,ceph3 mgr: ceph1(active), standbys: ceph2, ceph3 osd: 3 osds: 3 up, 3 in rgw: 3 daemons active data: pools: 5 pools, 160 pgs objects: 14 objects, 2.10KiB usage: 3.01GiB used, 1.46TiB / 1.46TiB avail pgs: 5.000% pgs unknown 152 active+clean 8 unknown client: 3.42KiB/s rd, 5op/s rd, 0op/s wr在ceph1、ceph2、ceph3节点上查看RGW监听的端口(默认为 7480):netstat -anplut | grep 7480netstat -anplut | grep 7480 tcp 0 0 0.0.0.0:7480 0.0.0.0:* LISTEN 9540/radosgw5.2. RGW多节点代理配置RGW多节点代理:yum -y install nginx vi /etc/nginx/nginx.conf http { 中添加如下配置: upstream rgw { server 192.168.44.128:7480; server 192.168.44.129:7480; server 192.168.44.130:7480; server { listen 8080; server_name localhost; client_max_body_size 0; proxy_buffering off; proxy_set_header Host $host; proxy_set_header X-Forwarded-For $remote_addr; location / { proxy_pass http://rgw; systemctl restart nginx # 重启nginx服务使配置生效 systemctl status nginx # 查看nginx服务状态 systemctl enable nginx # 设置nginx服务开机自启动5.3. 
创建访问S3的用户创建的用户有两种,一种是兼容S3风格,还有一种是Swift风格。5.3.1 创建访问S3的用户使用如下命令创建一个用于访问S3的用户:radosgw-admin user create --uid emr-s3-user --display-name "EMR S3 User Demo" 命令执行后会输出如下结果:{ "user_id": "emr-s3-user", "display_name": "EMR S3 User Demo", "email": "", "suspended": 0, "max_buckets": 1000, "auid": 0, "subusers": [], "keys": [ "user": "emr-s3-user", "access_key": "access_key", "secret_key": "secret_key" "swift_keys": [], "caps": [], "op_mask": "read, write, delete", "default_placement": "", "placement_tags": [], "bucket_quota": { "enabled": false, "check_on_raw": false, "max_size": -1, "max_size_kb": 0, "max_objects": -1 "user_quota": { "enabled": false, "check_on_raw": false, "max_size": -1, "max_size_kb": 0, "max_objects": -1 "temp_url_keys": [], "type": "rgw" }上面的内容中显示了用户的key信息以及一些用户的配额信息。以上的信息也可以通过如下命令再次输出:radosgw-admin user info --uid emr-s3-user5.3.2 测试S3接口访问使用python程序来测试s3接口的访问,首先安装名为名称为:python-boto的python包: yum -y install python-boto 创建名为s3test.py的文件,内容如下:import boto.s3.connection access_key = 'access_key' secret_key = 'secret_key' conn = boto.connect_s3( aws_access_key_id=access_key, aws_secret_access_key=secret_key, host='192.168.44.128', port=8080, is_secure=False, calling_format=boto.s3.connection.OrdinaryCallingFormat(), bucket = conn.create_bucket('emr-bucket') for bucket in conn.get_all_buckets(): print "{name} {created}".format( name=bucket.name, created=bucket.creation_date, )需要注意的是,要将程序中的access_key和secret_key修改为前面生成用户的相关信息。host 需要修改为Nginx服务的地址,port修改为相应代理端口。执行这个python程序,会输出如下信息:[root@ceph1 ~]# python s3test.py emr-bucket 2022-09-27T09:14:48.206Z这代表成功创建一个bucket。5.4 使用命令行工具访问s3接口5.4.1 使用命令行工具访问S3接口配置S3CMD在命令行中调用s3接口来管理对象存储,首先需要安装s3cmd软件包:yum -y install s3cmd 安装完成后需要对s3cmd进行配置,配置过程如下:s3cmd --configure Enter new values or accept defaults in brackets with Enter. Refer to user manual for detailed description of all options. Access key and Secret key are your identifiers for Amazon S3. Leave them empty for using the env variables. Access Key: access_key # 设置访问用户的Access Key Secret Key: secret_key # 设置访问用户的Secret Key Default Region [US]: CN Use "s3.amazonaws.com" for S3 Endpoint and not modify it to the target Amazon S3. S3 Endpoint [s3.amazonaws.com]: 192.168.44.128:8080 # 设置RWG的代理地址和端口 Use "%(bucket)s.s3.amazonaws.com" to the target Amazon S3. "%(bucket)s" and "%(location)s" vars can be used if the target S3 system supports dns based buckets. DNS-style bucket+hostname:port template for accessing a bucket [%(bucket)s.s3.amazonaws.com]: 192.168.44.128:8080/%(bucket)s # 设置bucket的名称(可以将IP地址更换为域名) Encryption password is used to protect your files from reading by unauthorized persons while in transfer to S3 Encryption password: # 不设置密码 Path to GPG program [/usr/bin/gpg]: # 使用gpg加密 When using secure HTTPS protocol all communication with Amazon S3 servers is protected from 3rd party eavesdropping. This method is slower than plain HTTP, and can only be proxied with Python 2.7 or newer Use HTTPS protocol [Yes]: no # 不使用 HTTPS On some networks all internet access must go through a HTTP proxy. Try setting it here if you can't connect to S3 directly HTTP Proxy server name: # 不设置代理访问 New settings: Access Key: 3347B8YK03UDM8OCUVYV Secret Key: jpWtK9Ra09cKqQudBVyGbgPPEfncy24IjjxBrFyM Default Region: CN S3 Endpoint: 192.168.44.128:8080 DNS-style bucket+hostname:port template for accessing a bucket: 192.168.44.128:8080/%(bucket)s Encryption password: Path to GPG program: /usr/bin/gpg Use HTTPS protocol: False HTTP Proxy server name: HTTP Proxy server port: 0 Test access with supplied credentials? 
[Y/n] y # 验证访问 Please wait, attempting to list all buckets... ERROR: Test failed: 403 (SignatureDoesNotMatch) Retry configuration? [Y/n] n Save settings? [y/N] y # 保存配置 Configuration saved to '/root/.s3cfg'配置完成后,配置文件存储在/root/.s3cfg文件中,需要将该文件中的signature_v2 配置项改为True,否则在后续执行命令的时候会触发 ERROR: S3 error: 403 (SignatureDoesNotMatch) 报错: signature_v2 = True 保存退出后,就可以使 s3cmd命令来管理对象存储,首先使用如下命令查看当前的bucket:[root@ceph1 ~]# s3cmd ls 2022-09-27 09:14 s3://emr-bucket 创建一个新的bucket:[root@ceph1 ~]# s3cmd mb s3://emr-s3-demo ERROR: S3 error: 400 (InvalidLocationConstraint): The specified location-constraint is not valid vi /root/.s3cfg bucket_location = CN bucket_location = US [root@ceph1 ~]# s3cmd mb s3://emr-s3-demo Bucket 's3://emr-s3-demo/' created5.4.2 上传文件将本地的/etc/fstab文件上传到对象存储中,并将存储的名称修改为fstab-demo:[root@ceph1 ~]# s3cmd put /etc/fstab s3://emr-s3-demo/fstab-demo upload: '/etc/fstab' -> 's3://emr-s3-demo/fstab-demo' [1 of 1] 743 of 743 100% in 1s 403.24 B/s done [root@ceph1 ~]# s3cmd ls s3://emr-s3-demo 2021-01-07 09:04 465 s3://emr-s3-demo/fstab-demo这样Ceph分布式集群安装配置就完成了,我们下篇文章再讲如何用Spark读写Ceph S36. 参考http://www.soolco.com/post/89854_1_1.htmlhttps://www.cnblogs.com/flytor/p/11380026.htmlhttps://blog.csdn.net/inrgihc/article/details/112005710https://www.cnblogs.com/freeitlzx/p/11281763.html问题解决主要是hostname导致的问题,因为我是在现有的HDP环境下安装的Ceph,而且已经配过了hosts和hostname,但是我不想用已有的hostname,我在/etc/hosts文件里新添加了ceph的host配置,但是没有修改hostname,这样就导致了问题的发生。问题出现在命令ceph-deploy mon create-initial 报错:ceph1][INFO ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph1.asok mon_status [ceph1][ERROR ] admin_socket: exception getting command descriptions: [Errno 2] No such file or directory报没有文件/var/run/ceph/ceph-mon.ceph1.asok,那么我们看一下路径下有什么ls /var/run/ceph/ ceph-mon.indata-192-168-44-128.asok 发现是有asok文件的,但是名字不一样,名字是和配的旧的host一致尝试解决思路:修改/etc/hosts配置顺序将ceph的host放在最前面,该思路无效,和顺序无关用旧的host重新ceph-deployceph-deploy new --public-network 192.168.44.0/24 --cluster-network 192.168.44.0/24 indata-192-168-44-128然后再执行ceph-deploy --overwrite-conf mon create-initial 但是会报错:[ceph_deploy.mon][WARNIN] mon.indata-192-168-44-128 monitor is not yet in quorum, tries left: 5 [ceph_deploy.mon][WARNIN] waiting 5 seconds before retrying [indata-192-168-44-128][INFO ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.indata-192-168-44-128.asok mon_status [ceph_deploy.mon][WARNIN] mon.indata-192-168-44-128 monitor is not yet in quorum, tries left: 4 [ceph_deploy.mon][WARNIN] waiting 10 seconds before retrying [indata-192-168-44-128][INFO ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.indata-192-168-44-128.asok mon_status [ceph_deploy.mon][WARNIN] mon.indata-192-168-44-128 monitor is not yet in quorum, tries left: 3 [ceph_deploy.mon][WARNIN] waiting 10 seconds before retrying [indata-192-168-44-128][INFO ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.indata-192-168-44-128.asok mon_status [ceph_deploy.mon][WARNIN] mon.indata-192-168-44-128 monitor is not yet in quorum, tries left: 2 [ceph_deploy.mon][WARNIN] waiting 15 seconds before retrying [indata-192-168-44-128][INFO ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.indata-192-168-44-128.asok mon_status [ceph_deploy.mon][WARNIN] mon.indata-192-168-44-128 monitor is not yet in quorum, tries left: 1 [ceph_deploy.mon][WARNIN] waiting 20 seconds before retrying [ceph_deploy.mon][ERROR ] Some monitors have still not reached quorum: [ceph_deploy.mon][ERROR ] 
indata-192-168-44-128

The cause is that the mon names do not match: ceph-mon.indata-192-168-44-128.asok was created via ceph1, so the name recorded inside it is also ceph1, not indata-192-168-44-128:

"mons": [ "rank": 0, "name": "ceph1", "addr": "192.168.44.128:6789/0", "public_addr": "192.168.44.128:6789/0" ]

So the asok needs to be regenerated — but under what conditions is it regenerated? Simply deleting the asok file is not enough. The following commands do the trick (this took quite a while to figure out, as there is almost no documentation on it):

sudo rm -r /var/lib/ceph/mon/
sudo rm -r /var/run/ceph/*
ceph-deploy new --public-network 192.168.44.0/24 --cluster-network 192.168.44.0/24 indata-192-168-44-128

This regenerates the asok. Verify with the command above that its contents have changed, then run ceph-deploy --overwrite-conf mon create-initial, which now succeeds. However, this still uses the old hostname, and I wanted to try ceph1 after all, since I had now figured out how to regenerate the asok. First delete the generated files:

sudo rm -r /usr/local/ceph-cluster/*
sudo rm -r /var/lib/ceph/mon/
sudo rm -r /var/run/ceph/*

Then change the hostname on each node:

hostnamectl set-hostname ceph1.bigdata.com

The regenerated asok file is then named ceph-mon.ceph1.asok. Re-run the commands:

ceph-deploy new --public-network 192.168.44.0/24 --cluster-network 192.168.44.0/24 ceph1
ceph-deploy --overwrite-conf mon create-initial

With that the problem is solved. Note that /etc/ceph also contains Ceph-related configuration files; if you delete that directory before understanding what it is for, recreate it manually and then re-run the commands above.
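Section 5.3.2 tested the RGW S3 API with python-boto. For completeness, and looking ahead to the Spark examples in the article above, here is a rough Scala equivalent using the aws-java-sdk-s3 dependency listed in that article — the endpoint points at the Nginx proxy from section 5.2, the credentials are the placeholders returned for emr-s3-user, and the bucket name and signing region are arbitrary examples:

```scala
import com.amazonaws.auth.{AWSStaticCredentialsProvider, BasicAWSCredentials}
import com.amazonaws.client.builder.AwsClientBuilder
import com.amazonaws.services.s3.AmazonS3ClientBuilder
import scala.collection.JavaConverters._

// Placeholders: RGW user keys and the Nginx proxy address from section 5.2; "CN" is an arbitrary signing region.
val credentials = new BasicAWSCredentials("access_key", "secret_key")
val s3 = AmazonS3ClientBuilder.standard()
  .withCredentials(new AWSStaticCredentialsProvider(credentials))
  .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration("http://192.168.44.128:8080", "CN"))
  .withPathStyleAccessEnabled(true)   // address buckets by path rather than DNS-style host names
  .build()

s3.createBucket("emr-bucket-scala")   // example bucket name
s3.listBuckets().asScala.foreach { b =>
  println(s"${b.getName} ${b.getCreationDate}")
}
```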
Spark3.2.1Hudi支持不同的Spark版本,默认是Spark2.4.4,要想使用Spark3.2.1版本,可以通过如下命令编译打包:mvn clean package -DskipTests -Dspark3.2 -Dscala-2.12 要想调试Spark3.2.1,可以根据上面的命令先打包或者install到本地,再新建一个测试项目引用我们自己打的包用来调试,也可以直接在Hudi源码里修改配置Idea Spark3.2.1的环境,不过比较麻烦,本人用的第二种方法,同样的这里也只讲关键的不同点打印信息Spark3.2.1的打印信息会比Spark2更全一点,我们可以看到最终的Physical Plan与AtomicCreateTableAsSelect和HoodieCatalog有关== Parsed Logical Plan == 'CreateTableAsSelectStatement [h0], [dt], [hoodie.datasource.write.operation=upsert, primaryKey=id, hoodie.table.name=tableName, hoodie.database.name=databaseName, type=cow, preCombineField=ts], hudi, false, false +- Project [1 AS id#0, a1 AS name#1, 10 AS price#2, 2021-04-01 AS dt#3, 1000 AS ts#4] +- OneRowRelation == Analyzed Logical Plan == CreateTableAsSelect org.apache.spark.sql.hudi.catalog.HoodieCatalog@6dbbdf92, default.h0, [dt], [provider=hudi, hoodie.datasource.write.operation=upsert, primaryKey=id, hoodie.table.name=tableName, hoodie.database.name=databaseName, type=cow, preCombineField=ts], false +- Project [1 AS id#0, a1 AS name#1, 10 AS price#2, 2021-04-01 AS dt#3, 1000 AS ts#4] +- OneRowRelation == Optimized Logical Plan == CommandResult AtomicCreateTableAsSelect org.apache.spark.sql.hudi.catalog.HoodieCatalog@6dbbdf92, default.h0, [dt], Project [1 AS id#0, a1 AS name#1, 10 AS price#2, 2021-04-01 AS dt#3, 1000 AS ts#4], [provider=hudi, hoodie.datasource.write.operation=upsert, primaryKey=id, hoodie.table.name=tableName, hoodie.database.name=databaseName, owner=dongkelun01, type=cow, preCombineField=ts], [], false +- CreateTableAsSelect org.apache.spark.sql.hudi.catalog.HoodieCatalog@6dbbdf92, default.h0, [dt], [provider=hudi, hoodie.datasource.write.operation=upsert, primaryKey=id, hoodie.table.name=tableName, hoodie.database.name=databaseName, type=cow, preCombineField=ts], false +- Project [1 AS id#0, a1 AS name#1, 10 AS price#2, 2021-04-01 AS dt#3, 1000 AS ts#4] +- OneRowRelation == Physical Plan == CommandResult <empty> +- AtomicCreateTableAsSelect org.apache.spark.sql.hudi.catalog.HoodieCatalog@6dbbdf92, default.h0, [dt], Project [1 AS id#0, a1 AS name#1, 10 AS price#2, 2021-04-01 AS dt#3, 1000 AS ts#4], [provider=hudi, hoodie.datasource.write.operation=upsert, primaryKey=id, hoodie.table.name=tableName, hoodie.database.name=databaseName, owner=dongkelun01, type=cow, preCombineField=ts], [], false +- *(1) Project [1 AS id#0, a1 AS name#1, 10 AS price#2, 2021-04-01 AS dt#3, 1000 AS ts#4] +- *(1) Scan OneRowRelation[]PlanChangeLogger的日志比较多,会打印哪些规则没有生效,哪些规则生效了,具体怎么生效的等,由于比较多,这里只贴一小部分7720 [ScalaTest-run-running-TestCreateTable] INFO org.apache.spark.sql.catalyst.rules.PlanChangeLogger - Batch Substitution has no effect. 7721 [ScalaTest-run-running-TestCreateTable] INFO org.apache.spark.sql.catalyst.rules.PlanChangeLogger - Batch Disable Hints has no effect. 7724 [ScalaTest-run-running-TestCreateTable] INFO org.apache.spark.sql.catalyst.rules.PlanChangeLogger - Batch Hints has no effect. 7728 [ScalaTest-run-running-TestCreateTable] INFO org.apache.spark.sql.catalyst.rules.PlanChangeLogger - Batch Simple Sanity Check has no effect. 
8309 [ScalaTest-run-running-TestCreateTable] INFO org.apache.spark.sql.catalyst.rules.PlanChangeLogger - === Applying Rule org.apache.spark.sql.catalyst.analysis.ResolveSessionCatalog === !'CreateTableAsSelectStatement [h0], [dt], [hoodie.datasource.write.operation=upsert, primaryKey=id, hoodie.table.name=tableName, hoodie.database.name=databaseName, type=cow, preCombineField=ts], hudi, false, false CreateTableAsSelect org.apache.spark.sql.hudi.catalog.HoodieCatalog@c3719e5, default.h0, [dt], [provider=hudi, hoodie.datasource.write.operation=upsert, primaryKey=id, hoodie.table.name=tableName, hoodie.database.name=databaseName, type=cow, preCombineField=ts], false +- Project [1 AS id#0, a1 AS name#1, 10 AS price#2, 2021-04-01 AS dt#3, 1000 AS ts#4] +- Project [1 AS id#0, a1 AS name#1, 10 AS price#2, 2021-04-01 AS dt#3, 1000 AS ts#4] +- OneRowRelation +- OneRowRelation 8331 [ScalaTest-run-running-TestCreateTable] INFO org.apache.spark.sql.catalyst.rules.PlanChangeLogger - === Result of Batch Resolution === !'CreateTableAsSelectStatement [h0], [dt], [hoodie.datasource.write.operation=upsert, primaryKey=id, hoodie.table.name=tableName, hoodie.database.name=databaseName, type=cow, preCombineField=ts], hudi, false, false CreateTableAsSelect org.apache.spark.sql.hudi.catalog.HoodieCatalog@c3719e5, default.h0, [dt], [provider=hudi, hoodie.datasource.write.operation=upsert, primaryKey=id, hoodie.table.name=tableName, hoodie.database.name=databaseName, type=cow, preCombineField=ts], false +- Project [1 AS id#0, a1 AS name#1, 10 AS price#2, 2021-04-01 AS dt#3, 1000 AS ts#4] +- Project [1 AS id#0, a1 AS name#1, 10 AS price#2, 2021-04-01 AS dt#3, 1000 AS ts#4] +- OneRowRelation +- OneRowRelation 8334 [ScalaTest-run-running-TestCreateTable] INFO org.apache.spark.sql.catalyst.rules.PlanChangeLogger - Batch Remove TempResolvedColumn has no effect. ...... === Metrics of Executed Rules === Total number of runs: 141 Total time: 0.6459626 seconds Total number of effective runs: 2 Total time of effective runs: 0.302878 seconds由于比较长,导致换行,oldPan和newPlan的对比效果不是很明显,不过可以大概看出来前后变化就行,也可以自己调试对比ANTLR上一篇讲到Hudi有三个g4文件,一个在hudi-spark模块下,另外两个在hudi-spark2模块下,同样的在hudi-spark3模块下也有两个同名的g4文件,不过内容和Spark2的不一样,具体为:hudi-spark模块下的 HoodieSqlCommon.g4hudi-spark3模块下的 SqlBase.g4,拷贝自的Spark3.2.0源码里的SqlBase.g4hudi-spark3模块下的 HoodieSqlBase.g4 其中导入了上面的SqlBase.g4HoodieSqlBase.g4:grammar HoodieSqlBase; import SqlBase; singleStatement : statement EOF statement : query #queryStatement | ctes? dmlStatementNoWith #dmlStatement | createTableHeader ('(' colTypeList ')')? tableProvider? createTableClauses (AS? query)? #createTable | .*? 
#passThrough ;parsing同样的parsePlan首先调用HoodieCommonSqlParser.parsePlan,这个是公共的,和Spark2一样,返回null,调用sparkExtendedParser.parsePlanprivate lazy val builder = new HoodieSqlCommonAstBuilder(session, delegate) private lazy val sparkExtendedParser = sparkAdapter.createExtendedSparkParser .map(_(session, delegate)).getOrElse(delegate) override def parsePlan(sqlText: String): LogicalPlan = parse(sqlText) { parser => builder.visit(parser.singleStatement()) match { case plan: LogicalPlan => plan case _=> sparkExtendedParser.parsePlan(sqlText) }上一篇讲到Spark2的sparkExtendedParser为HoodieSpark2ExtendedSqlParser,那么Spark3的是啥呢?很简单和Spark2的逻辑一样,来看一下:private lazy val sparkExtendedParser = sparkAdapter.createExtendedSparkParser .map(_(session, delegate)).getOrElse(delegate) lazy val sparkAdapter: SparkAdapter = { val adapterClass = if (HoodieSparkUtils.isSpark3_2) { "org.apache.spark.sql.adapter.Spark3_2Adapter" } else if (HoodieSparkUtils.isSpark3_0 || HoodieSparkUtils.isSpark3_1) { "org.apache.spark.sql.adapter.Spark3_1Adapter" } else { "org.apache.spark.sql.adapter.Spark2Adapter" getClass.getClassLoader.loadClass(adapterClass) .newInstance().asInstanceOf[SparkAdapter] }根据上面的代码可知sparkAdapter为Spark3_2Adapter,接着看一下Spark3_2Adapter.createExtendedSparkParseroverride def createExtendedSparkParser: Option[(SparkSession, ParserInterface) => ParserInterface] = { Some( (spark: SparkSession, delegate: ParserInterface) => new HoodieSpark3_2ExtendedSqlParser(spark, delegate) }所以这里的Spark3的sparkExtendedParser为HoodieSpark3_2ExtendedSqlParser,接着到了HoodieSpark3_2ExtendedSqlParser.parsePlan,这里和Spark2的逻辑不一样 override def parsePlan(sqlText: String): LogicalPlan = parse(sqlText) { parser => builder.visit(parser.singleStatement()) match { case plan: LogicalPlan => plan case _=> delegate.parsePlan(sqlText) protected def parse[T](command: String)(toResult: HoodieSqlBaseParser => T): T = { logDebug(s"Parsing command: $command") val lexer = new HoodieSqlBaseLexer(new UpperCaseCharStream(CharStreams.fromString(command))) lexer.removeErrorListeners() lexer.addErrorListener(ParseErrorListener) val tokenStream = new CommonTokenStream(lexer) val parser = new HoodieSqlBaseParser(tokenStream) parser.addParseListener(PostProcessor) parser.removeErrorListeners() parser.addErrorListener(ParseErrorListener) // parser.legacy_setops_precedence_enabled = conf.setOpsPrecedenceEnforced parser.legacy_exponent_literal_as_decimal_enabled = conf.exponentLiteralAsDecimalEnabled parser.SQL_standard_keyword_behavior = conf.ansiEnabled try { try { // first, try parsing with potentially faster SLL mode parser.getInterpreter.setPredictionMode(PredictionMode.SLL) toResult(parser) catch { case e: ParseCancellationException => // if we fail, parse with LL mode tokenStream.seek(0) // rewind input stream parser.reset() // Try Again. parser.getInterpreter.setPredictionMode(PredictionMode.LL) toResult(parser) catch { case e: ParseException if e.command.isDefined => throw e case e: ParseException => throw e.withCommand(command) case e: AnalysisException => val position = Origin(e.line, e.startPosition) throw new ParseException(Option(command), e.message, position, position) }可以看到这里parser同样为HoodieSqlBaseParser,但是对于builder.visit(parser.singleStatement()),Spark3和Spark2是不一样的,为啥不一样?我们接着往下看:我们在Spark3的HoodieSqlBase.g4中可以看到statement中是有#createTable的| createTableHeader ('(' colTypeList ')')? tableProvider? createTableClauses (AS? query)? 
#createTable 其中 createTableHeader、tableProvider、AS 、query都是引自SqlBase.g4,所以这里的CTAS能匹配上,这里的parser.singleStatement()和之前讲的一样,最终都会调用builder.visitCreateTable,不同的是,这里的builder为HoodieSpark3_2ExtendedSqlAstBuilder,所以需要看一下它的visitCreateTable有何不同override def visitCreateTable(ctx: CreateTableContext): LogicalPlan = withOrigin(ctx) { val (table, temp, ifNotExists, external) = visitCreateTableHeader(ctx.createTableHeader) val columns = Option(ctx.colTypeList()).map(visitColTypeList).getOrElse(Nil) val provider = Option(ctx.tableProvider).map(_.multipartIdentifier.getText) val (partTransforms, partCols, bucketSpec, properties, options, location, comment, serdeInfo) = visitCreateTableClauses(ctx.createTableClauses()) if (provider.isDefined && serdeInfo.isDefined) { operationNotAllowed(s"CREATE TABLE ... USING ... ${serdeInfo.get.describe}", ctx) if (temp) { val asSelect = if (ctx.query == null) "" else " AS ..." operationNotAllowed( s"CREATE TEMPORARY TABLE ...$asSelect, use CREATE TEMPORARY VIEW instead", ctx) val partitioning = partitionExpressions(partTransforms, partCols, ctx) Option(ctx.query).map(plan) match { case Some(_) if columns.nonEmpty => operationNotAllowed( "Schema may not be specified in a Create Table As Select (CTAS) statement", case Some(_) if partCols.nonEmpty => // non-reference partition columns are not allowed because schema can't be specified operationNotAllowed( "Partition column types may not be specified in Create Table As Select (CTAS)", case Some(query) => CreateTableAsSelectStatement( table, query, partitioning, bucketSpec, properties, provider, options, location, comment, writeOptions = Map.empty, serdeInfo, external = external, ifNotExists = ifNotExists) case _ => // Note: table schema includes both the table columns list and the partition columns // with data type. val schema = StructType(columns ++ partCols) CreateTableStatement(table, schema, partitioning, bucketSpec, properties, provider, options, location, comment, serdeInfo, external = external, ifNotExists = ifNotExists) }这里会匹配到case Some(query),返回CreateTableAsSelectStatement,这就是和Spark2(或者说Spark源码里的visitCreateTable)不同之处,Spark2返回的是CreateTable(tableDesc, mode, Some(query)),那么又是在哪里对CreateTableAsSelectStatement进行处理的呢ResolveCatalogs 和 ResolveSessionCatalog有两个规则类会匹配CreateTableAsSelectStatement,ResolveCatalogs是在Analyzer的batches中定义的,ResolveSessionCatalog是在BaseSessionStateBuilder.analyzer重写的的extendedResolutionRules中定义的(我们在PlanChangeLogger的日志中可知,真正起作用的是ResolveSessionCatalog)override def batches: Seq[Batch] = Seq( Batch("Substitution", fixedPoint, // This rule optimizes `UpdateFields` expression chains so looks more like optimization rule. // However, when manipulating deeply nested schema, `UpdateFields` expression tree could be // very complex and make analysis impossible. Thus we need to optimize `UpdateFields` early // at the beginning of analysis. OptimizeUpdateFields, CTESubstitution, WindowsSubstitution, EliminateUnions, SubstituteUnresolvedOrdinals), Batch("Disable Hints", Once, new ResolveHints.DisableHints), Batch("Hints", fixedPoint, ResolveHints.ResolveJoinStrategyHints, ResolveHints.ResolveCoalesceHints), Batch("Simple Sanity Check", Once, LookupFunctions), Batch("Resolution", fixedPoint, ResolveTableValuedFunctions(v1SessionCatalog) :: ResolveNamespace(catalogManager) :: new ResolveCatalogs(catalogManager) :: ResolveUserSpecifiedColumns :: ResolveInsertInto :: ResolveRelations :: ...... extendedResolutionRules : _*), ....... 
protected def analyzer: Analyzer = new Analyzer(catalogManager) { override val extendedResolutionRules: Seq[Rule[LogicalPlan]] = new FindDataSourceTable(session) +: new ResolveSQLOnFile(session) +: new FallBackFileSourceV2(session) +: ResolveEncodersInScalaAgg +: new ResolveSessionCatalog(catalogManager) +: ResolveWriteToStream +: customResolutionRules override val postHocResolutionRules: Seq[Rule[LogicalPlan]] = DetectAmbiguousSelfJoin +: PreprocessTableCreation(session) +: PreprocessTableInsertion +: DataSourceAnalysis +: customPostHocResolutionRules override val extendedCheckRules: Seq[LogicalPlan => Unit] = PreWriteCheck +: PreReadCheck +: HiveOnlyCheck +: TableCapabilityCheck +: CommandCheck +: customCheckRules }他们两个的apply方法分别为:class ResolveCatalogs(val catalogManager: CatalogManager) extends Rule[LogicalPlan] with LookupCatalog { import org.apache.spark.sql.connector.catalog.CatalogV2Implicits._ import org.apache.spark.sql.connector.catalog.CatalogV2Util._ override def apply(plan: LogicalPlan): LogicalPlan = plan resolveOperators { case c @ CreateTableStatement( NonSessionCatalogAndTable(catalog, tbl), _, _, _, _, _, _, _, _, _, _, _) => CreateV2Table( catalog.asTableCatalog, tbl.asIdentifier, c.tableSchema, // convert the bucket spec and add it as a transform c.partitioning ++ c.bucketSpec.map(_.asTransform), convertTableProperties(c), ignoreIfExists = c.ifNotExists) case c @ CreateTableAsSelectStatement( NonSessionCatalogAndTable(catalog, tbl), _, _, _, _, _, _, _, _, _, _, _, _) => CreateTableAsSelect( catalog.asTableCatalog, tbl.asIdentifier, // convert the bucket spec and add it as a transform c.partitioning ++ c.bucketSpec.map(_.asTransform), c.asSelect, convertTableProperties(c), writeOptions = c.writeOptions, ignoreIfExists = c.ifNotExists) class ResolveSessionCatalog(val catalogManager: CatalogManager) extends Rule[LogicalPlan] with LookupCatalog { import org.apache.spark.sql.connector.catalog.CatalogV2Implicits._ import org.apache.spark.sql.connector.catalog.CatalogV2Util._ import org.apache.spark.sql.execution.datasources.v2.DataSourceV2Implicits._ override def apply(plan: LogicalPlan): LogicalPlan = plan.resolveOperatorsUp { case AddColumns(ResolvedV1TableIdentifier(ident), cols) => cols.foreach { c => assertTopLevelColumn(c.name, "AlterTableAddColumnsCommand") if (!c.nullable) { throw QueryCompilationErrors.addColumnWithV1TableCannotSpecifyNotNullError AlterTableAddColumnsCommand(ident.asTableIdentifier, cols.map(convertToStructField)) ...... 
case c @ CreateTableAsSelectStatement( SessionCatalogAndTable(catalog, tbl), _, _, _, _, _, _, _, _, _, _, _, _) => val (storageFormat, provider) = getStorageFormatAndProvider( c.provider, c.options, c.location, c.serde, ctas = true) if (!isV2Provider(provider)) { val tableDesc = buildCatalogTable(tbl.asTableIdentifier, new StructType, c.partitioning, c.bucketSpec, c.properties, provider, c.location, c.comment, storageFormat, c.external) val mode = if (c.ifNotExists) SaveMode.Ignore else SaveMode.ErrorIfExists CreateTable(tableDesc, mode, Some(c.asSelect)) } else { CreateTableAsSelect( catalog.asTableCatalog, tbl.asIdentifier, // convert the bucket spec and add it as a transform c.partitioning ++ c.bucketSpec.map(_.asTransform), c.asSelect, convertTableProperties(c), writeOptions = c.writeOptions, ignoreIfExists = c.ifNotExists) ......可以看到这两个规则都试图去匹配CreateTableAsSelectStatement,区别是一个匹配NonSessionCatalogAndTable,另一个匹配SessionCatalogAndTable,根据名字判断,他们两个的判断是反过来的,总有一个会匹配上,那么这俩具体实现是啥呢,我们以SessionCatalogAndTable为例,我们知道scala在匹配样例类对象时回去调用它的unapply方法,这里的参数nameParts为CreateTableAsSelectStatement的第一个参数,在上面的visitCreateTable可知,它是由visitCreateTableHeader(ctx.createTableHeader)返回的tableobject SessionCatalogAndTable { def unapply(nameParts: Seq[String]): Option[(CatalogPlugin, Seq[String])] = nameParts match { case SessionCatalogAndIdentifier(catalog, ident) => Some(catalog -> ident.asMultipartIdentifier) case _ => None object SessionCatalogAndIdentifier { def unapply(parts: Seq[String]): Option[(CatalogPlugin, Identifier)] = parts match { case CatalogAndIdentifier(catalog, ident) if CatalogV2Util.isSessionCatalog(catalog) => Some(catalog, ident) case _ => None object CatalogAndIdentifier { import org.apache.spark.sql.connector.catalog.CatalogV2Implicits.MultipartIdentifierHelper private val globalTempDB = SQLConf.get.getConf(StaticSQLConf.GLOBAL_TEMP_DATABASE) def unapply(nameParts: Seq[String]): Option[(CatalogPlugin, Identifier)] = { assert(nameParts.nonEmpty) if (nameParts.length == 1) { // nameParts.length等于1,代表只有表名,没有库名 Some((currentCatalog, Identifier.of(catalogManager.currentNamespace, nameParts.head))) } else if (nameParts.head.equalsIgnoreCase(globalTempDB)) { //nameParts.head为库名,判断库名是否等于globalTempDB // Conceptually global temp views are in a special reserved catalog. However, the v2 catalog // API does not support view yet, and we have to use v1 commands to deal with global temp // views. To simplify the implementation, we put global temp views in a special namespace // in the session catalog. The special namespace has higher priority during name resolution. // For example, if the name of a custom catalog is the same with `GLOBAL_TEMP_DATABASE`, // this custom catalog can't be accessed. 
Some((catalogManager.v2SessionCatalog, nameParts.asIdentifier)) } else { try { // 否则,通过catalogManager.catalog(nameParts.head)获取catalog Some((catalogManager.catalog(nameParts.head), nameParts.tail.asIdentifier)) } catch { case _: CatalogNotFoundException => Some((currentCatalog, nameParts.asIdentifier)) def currentCatalog: CatalogPlugin = catalogManager.currentCatalog最终会调用到CatalogAndIdentifier.unapply,因为我们建表语句中没有加库名限定,所以这里的nameParts为Seq(tableName),也就是length等于,所以返回Some((currentCatalog, Identifier.of(catalogManager.currentNamespace, nameParts.head))),值为:Some((HoodieCatalog, Identifier.of(default., tableName)),具体为啥为HoodieCatalog,原因和我们在开头提到的一个配置有关def sparkConf(): SparkConf = { val sparkConf = new SparkConf() if (HoodieSparkUtils.gteqSpark3_2) { sparkConf.set("spark.sql.catalog.spark_catalog", "org.apache.spark.sql.hudi.catalog.HoodieCatalog") sparkConf }再看CatalogV2Util.isSessionCatalogdef isSessionCatalog(catalog: CatalogPlugin): Boolean = { catalog.name().equalsIgnoreCase(CatalogManager.SESSION_CATALOG_NAME) private[sql] object CatalogManager { val SESSION_CATALOG_NAME: String = "spark_catalog" }这里的HoodieCatalog.name()是在V2SessionCatalog实现的class V2SessionCatalog(catalog: SessionCatalog) extends TableCatalog with SupportsNamespaces with SQLConfHelper { import V2SessionCatalog._ override val defaultNamespace: Array[String] = Array("default") override def name: String = CatalogManager.SESSION_CATALOG_NAME所以CatalogV2Util.isSessionCatalog(catalog)为ture,NonSessionCatalogAndTable正好相反,!CatalogV2Util.isSessionCatalog(catalog)返回 falseobject NonSessionCatalogAndTable { def unapply(nameParts: Seq[String]): Option[(CatalogPlugin, Seq[String])] = nameParts match { case NonSessionCatalogAndIdentifier(catalog, ident) => Some(catalog -> ident.asMultipartIdentifier) case _ => None object NonSessionCatalogAndIdentifier { def unapply(parts: Seq[String]): Option[(CatalogPlugin, Identifier)] = parts match { case CatalogAndIdentifier(catalog, ident) if !CatalogV2Util.isSessionCatalog(catalog) => Some(catalog, ident) case _ => None }所以最终在ResolveSessionCatalog中匹配成功case c @ CreateTableAsSelectStatement( SessionCatalogAndTable(catalog, tbl), _, _, _, _, _, _, _, _, _, _, _, _) => val (storageFormat, provider) = getStorageFormatAndProvider( c.provider, c.options, c.location, c.serde, ctas = true) if (!isV2Provider(provider)) { val tableDesc = buildCatalogTable(tbl.asTableIdentifier, new StructType, c.partitioning, c.bucketSpec, c.properties, provider, c.location, c.comment, storageFormat, c.external) val mode = if (c.ifNotExists) SaveMode.Ignore else SaveMode.ErrorIfExists CreateTable(tableDesc, mode, Some(c.asSelect)) } else { CreateTableAsSelect( catalog.asTableCatalog, tbl.asIdentifier, // convert the bucket spec and add it as a transform c.partitioning ++ c.bucketSpec.map(_.asTransform), c.asSelect, convertTableProperties(c), writeOptions = c.writeOptions, ignoreIfExists = c.ifNotExists) }这一块的逻辑是判断是否为V2Provider,如果不是的话返回CreateTable,是的话返回CreateTableAsSelect,关于判断是否为V2Provider的逻辑比较多,这里先不讲,我们放在后面讲,我们这个版本的代码,是V2Provider,所以返回CreateTableAsSelect,这就是和Spark2不同的关键点,如果不是V2Provider,那么和Saprk2一样返回CreateTable 。我们会在下面讲到:CreateTableAsSelect 最终调用HoodieCatalog创建Hudi表,而CreateTable我们在上面讲Spark2时已知最终调用CreateHoodieTableAsSelectCommandcase class CreateTableAsSelect( catalog: TableCatalog, tableName: Identifier, partitioning: Seq[Transform], query: LogicalPlan, properties: Map[String, String], writeOptions: Map[String, String], ignoreIfExists: Boolean) extends UnaryCommand with V2CreateTablePlan { trait UnaryCommand extends Command with 
UnaryLike[LogicalPlan]
So where is CreateTableAsSelect converted further, and at which point does it finally call into Hudi's logic?
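Before moving on, here is a small self-contained Scala sketch of the extractor pattern used by SessionCatalogAndTable / NonSessionCatalogAndTable. It is not the actual Spark code: the types are simplified and isSessionCatalog is hard-coded, but it shows why exactly one of the two complementary extractors matches any catalog-qualified name — each defines an unapply, and the guards on isSessionCatalog are mutually exclusive.

```scala
// Standalone illustration, not the real Spark classes.
object ExtractorDemo {
  // Pretend "spark_catalog" is the session catalog; anything else is a custom catalog.
  private def isSessionCatalog(catalog: String): Boolean =
    catalog.equalsIgnoreCase("spark_catalog")

  object SessionCatalogAndTable {
    def unapply(nameParts: Seq[String]): Option[(String, Seq[String])] = nameParts match {
      case catalog +: table if isSessionCatalog(catalog) => Some(catalog -> table)
      case _ => None
    }
  }

  object NonSessionCatalogAndTable {
    def unapply(nameParts: Seq[String]): Option[(String, Seq[String])] = nameParts match {
      case catalog +: table if !isSessionCatalog(catalog) => Some(catalog -> table)
      case _ => None
    }
  }

  def main(args: Array[String]): Unit = {
    Seq(Seq("spark_catalog", "db", "tbl"), Seq("my_catalog", "db", "tbl")).foreach {
      case SessionCatalogAndTable(catalog, tbl)    => println(s"session catalog: $catalog, table: $tbl")
      case NonSessionCatalogAndTable(catalog, tbl) => println(s"non-session catalog: $catalog, table: $tbl")
      case other                                   => println(s"unmatched: $other")
    }
  }
}
```

Running the demo prints one "session catalog" line and one "non-session catalog" line, mirroring how ResolveSessionCatalog and the DataSource V2 rules split the work between them.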
前言上一篇文章Hudi Spark SQL源码学习总结-Create Table总结了Create Table的源码执行逻辑,这一篇继续总结CTAS,之所以总结CTAS,是之前在我提交的一个PR中发现,Spark2和Spark3.2.1版本的CTAS的逻辑不一样,最终走的Hudi实现类也不一样,所以本文分Spark2和Spark3.2.1两个版本分析不同点先总结一下Spark2和Spark3.2.1的整体逻辑的不同点Spark2: visitCreateTable->CreateTable->CreateHoodieTableAsSelectCommand.runSpark3.2.1: 前提配置了:spark.sql.catalog.spark_catalog=org.apache.spark.sql.hudi.catalog.HoodieCatalog,如果没有配置则和Spark2一样visitCreateTable->CreateTableAsSelectStatement->isV2Provider->true->CreateTableAsSelect->HoodieCatalog.createHoodieTablevisitCreateTable->CreateTableAsSelectStatement->isV2Provider->false->CreateTable->CreateHoodieTableAsSelectCommand.runSpark2和Spark3.2.1不同的关键点有两个:1、配置spark.sql.catalog.spark_catalog=org.apache.spark.sql.hudi.catalog.HoodieCatalog2、isV2Provider(“hudi”)返回ture只要有一个不满足,Spark3.2.1的逻辑就和Spark2一样,引进HoodieCatalog和令hudi为V2Provider的PR为: [HUDI-3254] Introduce HoodieCatalog to manage tables for Spark Datasource V2目前master最新代码已将spark3.2.1的isV2Provider(“hudi”)改为了false,也就是Spark2和Saprk3.2.1的逻辑又一致了,PR:[HUDI-4178] Addressing performance regressions in Spark DataSourceV2 Integration版本Hudi https://github.com/apache/hudi/pull/5592 本文基于这个PR对应的代码进行调试分析,因为我就是在贡献这个PR时才发现Spark3.2.1和Saprk2的CTAS的逻辑不同的示例代码还是直接拿源码里的TestCreateTable的测试语句spark.sql( s""" | create table $tableName using hudi | partitioned by (dt) | tblproperties( | hoodie.database.name = "databaseName", | hoodie.table.name = "tableName", | primaryKey = 'id', | preCombineField = 'ts', | hoodie.datasource.write.operation = 'upsert', | type = '$tableType' | select 1 as id, 'a1' as name, 10 as price, '2021-04-01' as dt, 1000 as ts """.stripMargin )不过需要提一下, 这里的spark是如何创建的,因为在分析Spark3.2.1的逻辑时会用到,先贴在这里:protected lazy val spark: SparkSession = SparkSession.builder() .master("local[1]") .appName("hoodie sql test") .withExtensions(new HoodieSparkSessionExtension) .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer") .config("hoodie.insert.shuffle.parallelism", "4") .config("hoodie.upsert.shuffle.parallelism", "4") .config("hoodie.delete.shuffle.parallelism", "4") .config("spark.sql.warehouse.dir", sparkWareHouse.getCanonicalPath) .config("spark.sql.session.timeZone", "CTT") .config(sparkConf()) .getOrCreate() def sparkConf(): SparkConf = { val sparkConf = new SparkConf() if (HoodieSparkUtils.gteqSpark3_2) { sparkConf.set("spark.sql.catalog.spark_catalog", "org.apache.spark.sql.hudi.catalog.HoodieCatalog") sparkConf }打印执行计划和上篇文章一样我们先打印一下计划,方便我们分析config("spark.sql.planChangeLog.level", "INFO") val df = spark.sql(ctasSql) df.explain(true)和上一篇文章的不同点是加了配置"spark.sql.planChangeLog.level", "INFO",之所以上篇文章不加这篇文章加,是因为这个配置在Spark3.1.0才有得,所以对于Spark2的代码不生效,不过在我们分析Spark3.2.1的执行计划会比较有用,另外提一下,开启这个配置是通过logBasedOnLevel(message)来打印信息的,一共有三个方法调用了logBasedOnLevel,分别为logRule:如果rule生效,打印oldPlan和newPlan,logBatch:打印Batch的前后信息,logMetrics:打印整体指标,但是在planner.plan中没有调用这几个方法,所以对于分析哪些strategies会生效是没用的,不过对于分析analysis阶段的哪些规则会生效还是非常有用的private def logBasedOnLevel(f: => String): Unit = { logLevel match { case "TRACE" => logTrace(f) case "DEBUG" => logDebug(f) case "INFO" => logInfo(f) case "WARN" => logWarning(f) case "ERROR" => logError(f) case _ => logTrace(f) def logRule(ruleName: String, oldPlan: TreeType, newPlan: TreeType): Unit = { if (!newPlan.fastEquals(oldPlan)) { if (logRules.isEmpty || logRules.get.contains(ruleName)) { def message(): String = { s""" |=== Applying Rule $ruleName === |${sideBySide(oldPlan.treeString, newPlan.treeString).mkString("\n")} """.stripMargin logBasedOnLevel(message) def logBatch(batchName: String, oldPlan: TreeType, newPlan: TreeType): Unit = 
{ if (logBatches.isEmpty || logBatches.get.contains(batchName)) { def message(): String = { if (!oldPlan.fastEquals(newPlan)) { s""" |=== Result of Batch $batchName === |${sideBySide(oldPlan.treeString, newPlan.treeString).mkString("\n")} """.stripMargin } else { s"Batch $batchName has no effect." logBasedOnLevel(message) def logMetrics(metrics: QueryExecutionMetrics): Unit = { val totalTime = metrics.time / NANOS_PER_SECOND.toDouble val totalTimeEffective = metrics.timeEffective / NANOS_PER_SECOND.toDouble val message = s""" |=== Metrics of Executed Rules === |Total number of runs: ${metrics.numRuns} |Total time: $totalTime seconds |Total number of effective runs: ${metrics.numEffectiveRuns} |Total time of effective runs: $totalTimeEffective seconds """.stripMargin logBasedOnLevel(message) }Spark2Spark2的逻辑和上一篇文章差不多,由于上一篇已经总结过了,所以本文只讲不同的地方,如果掌握了上一篇文章的逻辑的话,再看CTAS的逻辑还是比较简单的。打印信息== Parsed Logical Plan == 'CreateTable `h0`, ErrorIfExists +- Project [1 AS id#0, a1 AS name#1, 10 AS price#2, 2021-04-01 AS dt#3, 1000 AS ts#4] +- OneRowRelation == Analyzed Logical Plan == CreateHoodieTableAsSelectCommand `h0`, ErrorIfExists +- Project [1 AS id#0, a1 AS name#1, 10 AS price#2, 2021-04-01 AS dt#3, 1000 AS ts#4] +- OneRowRelation == Optimized Logical Plan == CreateHoodieTableAsSelectCommand `h0`, ErrorIfExists +- Project [1 AS id#0, a1 AS name#1, 10 AS price#2, 2021-04-01 AS dt#3, 1000 AS ts#4] +- OneRowRelation == Physical Plan == Execute CreateHoodieTableAsSelectCommand +- CreateHoodieTableAsSelectCommand `h0`, ErrorIfExists +- Project [1 AS id#0, a1 AS name#1, 10 AS price#2, 2021-04-01 AS dt#3, 1000 AS ts#4] +- OneRowRelationsingleStatement根据上篇文章中的逻辑,可知这里的CTAS语句同样对应Spark源码里的SqlBase.g4singleStatement : statement EOF statement : query #statementDefault | USE db=identifier #use | CREATE DATABASE (IF NOT EXISTS)? identifier (COMMENT comment=STRING)? locationSpec? (WITH DBPROPERTIES tablePropertyList)? #createDatabase | ALTER DATABASE identifier SET DBPROPERTIES tablePropertyList #setDatabaseProperties | DROP DATABASE (IF EXISTS)? identifier (RESTRICT | CASCADE)? #dropDatabase | createTableHeader ('(' colTypeList ')')? tableProvider ((OPTIONS options=tablePropertyList) | (PARTITIONED BY partitionColumnNames=identifierList) | bucketSpec | locationSpec | (COMMENT comment=STRING) | (TBLPROPERTIES tableProps=tablePropertyList))* (AS? query)? #createTable | createTableHeader ('(' columns=colTypeList ')')? ((COMMENT comment=STRING) | (PARTITIONED BY '(' partitionColumns=colTypeList ')') | bucketSpec | skewSpec | rowFormat | createFileFormat | locationSpec | (TBLPROPERTIES tableProps=tablePropertyList))* (AS? query)? #createHiveTable ...... tableProvider : USING qualifiedName ;不过这里有点不同的是:query不为空 (AS? query) ,所以在visitCreateTable中返回CreateTable(tableDesc, mode, Some(query))override def visitCreateTable(ctx: CreateTableContext): LogicalPlan = withOrigin(ctx) { val (table, temp, ifNotExists, external) = visitCreateTableHeader(ctx.createTableHeader) if (external) { operationNotAllowed("CREATE EXTERNAL TABLE ... 
USING", ctx) checkDuplicateClauses(ctx.TBLPROPERTIES, "TBLPROPERTIES", ctx) checkDuplicateClauses(ctx.OPTIONS, "OPTIONS", ctx) checkDuplicateClauses(ctx.PARTITIONED, "PARTITIONED BY", ctx) checkDuplicateClauses(ctx.COMMENT, "COMMENT", ctx) checkDuplicateClauses(ctx.bucketSpec(), "CLUSTERED BY", ctx) checkDuplicateClauses(ctx.locationSpec, "LOCATION", ctx) val options = Option(ctx.options).map(visitPropertyKeyValues).getOrElse(Map.empty) // provider为hudi val provider = ctx.tableProvider.qualifiedName.getText val schema = Option(ctx.colTypeList()).map(createSchema) val partitionColumnNames = Option(ctx.partitionColumnNames) .map(visitIdentifierList(_).toArray) .getOrElse(Array.empty[String]) val properties = Option(ctx.tableProps).map(visitPropertyKeyValues).getOrElse(Map.empty) val bucketSpec = ctx.bucketSpec().asScala.headOption.map(visitBucketSpec) val location = ctx.locationSpec.asScala.headOption.map(visitLocationSpec) val storage = DataSource.buildStorageFormatFromOptions(options) if (location.isDefined && storage.locationUri.isDefined) { throw new ParseException( "LOCATION and 'path' in OPTIONS are both used to indicate the custom table path, " + "you can only specify one of them.", ctx) val customLocation = storage.locationUri.orElse(location.map(CatalogUtils.stringToURI)) val tableType = if (customLocation.isDefined) { CatalogTableType.EXTERNAL } else { CatalogTableType.MANAGED val tableDesc = CatalogTable( identifier = table, tableType = tableType, storage = storage.copy(locationUri = customLocation), schema = schema.getOrElse(new StructType), provider = Some(provider), partitionColumnNames = partitionColumnNames, bucketSpec = bucketSpec, properties = properties, comment = Option(ctx.comment).map(string)) // Determine the storage mode. val mode = if (ifNotExists) SaveMode.Ignore else SaveMode.ErrorIfExists if (ctx.query != null) { // Get the backing query. val query = plan(ctx.query) if (temp) { operationNotAllowed("CREATE TEMPORARY TABLE ... USING ... AS query", ctx) // Don't allow explicit specification of schema for CTAS if (schema.nonEmpty) { operationNotAllowed( "Schema may not be specified in a Create Table As Select (CTAS) statement", CreateTable(tableDesc, mode, Some(query)) } else { if (temp) { if (ifNotExists) { operationNotAllowed("CREATE TEMPORARY TABLE IF NOT EXISTS", ctx) logWarning(s"CREATE TEMPORARY TABLE ... USING ... is deprecated, please use " + "CREATE TEMPORARY VIEW ... USING ... instead") // Unlike CREATE TEMPORARY VIEW USING, CREATE TEMPORARY TABLE USING does not support // IF NOT EXISTS. Users are not allowed to replace the existing temp table. 
CreateTempViewUsing(table, schema, replace = false, global = false, provider, options) } else { CreateTable(tableDesc, mode, None) }analysis那么在analysis阶段中Hudi的自定义规则customResolutionRules中的HoodieAnalysis的apply方法中会被匹配到case class HoodieAnalysis(sparkSession: SparkSession) extends Rule[LogicalPlan] with SparkAdapterSupport { override def apply(plan: LogicalPlan): LogicalPlan = { plan match { // Convert to MergeIntoHoodieTableCommand case m @ MergeIntoTable(target, _, _, _, _) if m.resolved && sparkAdapter.isHoodieTable(target, sparkSession) => MergeIntoHoodieTableCommand(m) // Convert to UpdateHoodieTableCommand case u @ UpdateTable(table, _, _) if u.resolved && sparkAdapter.isHoodieTable(table, sparkSession) => UpdateHoodieTableCommand(u) // Convert to DeleteHoodieTableCommand case d @ DeleteFromTable(table, _) if d.resolved && sparkAdapter.isHoodieTable(table, sparkSession) => DeleteHoodieTableCommand(d) // Convert to InsertIntoHoodieTableCommand case l if sparkAdapter.isInsertInto(l) => val (table, partition, query, overwrite, _) = sparkAdapter.getInsertIntoChildren(l).get table match { case relation: LogicalRelation if sparkAdapter.isHoodieTable(relation, sparkSession) => new InsertIntoHoodieTableCommand(relation, query, partition, overwrite) case _ => // Convert to CreateHoodieTableAsSelectCommand case CreateTable(table, mode, Some(query)) if query.resolved && sparkAdapter.isHoodieTable(table) => CreateHoodieTableAsSelectCommand(table, mode, query) // Convert to CompactionHoodieTableCommand case CompactionTable(table, operation, options) if table.resolved && sparkAdapter.isHoodieTable(table, sparkSession) => val tableId = getTableIdentifier(table) val catalogTable = sparkSession.sessionState.catalog.getTableMetadata(tableId) CompactionHoodieTableCommand(catalogTable, operation, options) // Convert to CompactionHoodiePathCommand case CompactionPath(path, operation, options) => CompactionHoodiePathCommand(path, operation, options) // Convert to CompactionShowOnTable case CompactionShowOnTable(table, limit) if sparkAdapter.isHoodieTable(table, sparkSession) => val tableId = getTableIdentifier(table) val catalogTable = sparkSession.sessionState.catalog.getTableMetadata(tableId) CompactionShowHoodieTableCommand(catalogTable, limit) // Convert to CompactionShowHoodiePathCommand case CompactionShowOnPath(path, limit) => CompactionShowHoodiePathCommand(path, limit) // Convert to HoodieCallProcedureCommand case c@CallCommand(_, _) => val procedure: Option[Procedure] = loadProcedure(c.name) val input = buildProcedureArgs(c.args) if (procedure.nonEmpty) { CallProcedureHoodieCommand(procedure.get, input) } else { case _ => plan }这里转化为CreateHoodieTableAsSelectCommand,它和CreateHoodieTableCommand一样是是Command的子类case class CreateHoodieTableAsSelectCommand( table: CatalogTable, mode: SaveMode, query: LogicalPlan) extends HoodieLeafRunnableCommand { override def innerChildren: Seq[QueryPlan[_]] = Seq(query) override def run(sparkSession: SparkSession): Seq[Row] = { assert(table.tableType != CatalogTableType.VIEW) assert(table.provider.isDefined) val sessionState = sparkSession.sessionState val db = table.identifier.database.getOrElse(sessionState.catalog.getCurrentDatabase) val tableIdentWithDB = table.identifier.copy(database = Some(db)) val tableName = tableIdentWithDB.unquotedString if (sessionState.catalog.tableExists(tableIdentWithDB)) { assert(mode != SaveMode.Overwrite, s"Expect the table $tableName has been dropped when the save mode is Overwrite") if (mode == SaveMode.ErrorIfExists) { throw new 
RuntimeException(s"Table $tableName already exists. You need to drop it first.") if (mode == SaveMode.Ignore) { // Since the table already exists and the save mode is Ignore, we will just return. // scalastyle:off return Seq.empty // scalastyle:on ......所以会最终会调用ExecutedCommandExec.executeCollect,触发CreateHoodieTableAsSelectCommand重写的run方法,实现Hudi自己的逻辑,Hudi自己的逻辑可以在Hudi源码里调试跟踪,本文就不总结了。
analyzer首先看一下sparkSession.sessionState.analyzerlazy val analyzer: Analyzer = analyzerBuilder() 上面提到过sessionState是由BaseSessionStateBuilder.build创建的,那么同样去BaseSessionStateBuilder里面看一下这个analyzerBuilder是啥protected def analyzer: Analyzer = new Analyzer(catalog, conf) { override val extendedResolutionRules: Seq[Rule[LogicalPlan]] = new FindDataSourceTable(session) +: new ResolveSQLOnFile(session) +: customResolutionRules override val postHocResolutionRules: Seq[Rule[LogicalPlan]] = PreprocessTableCreation(session) +: PreprocessTableInsertion(conf) +: DataSourceAnalysis(conf) +: customPostHocResolutionRules override val extendedCheckRules: Seq[LogicalPlan => Unit] = PreWriteCheck +: PreReadCheck +: HiveOnlyCheck +: customCheckRules }它是一个匿名内部类重写了Analyzer的几个扩展规则,这几个扩展规则在Analyzer中均为空,其中extendedCheckRules是在Analyzer的父接口CheckAnalysis中定义的,我们上面提到了,Hudi第一个自定义扩展是扩展了自己的sqlParser,那么剩下的两个都是作用在analysis阶段HoodieAnalysis.customResolutionRules.foreach { ruleBuilder => extensions.injectResolutionRule { session => ruleBuilder(session) HoodieAnalysis.customPostHocResolutionRules.foreach { rule => extensions.injectPostHocResolutionRule { session => rule(session) }injectResolutionRule先看一下injectResolutionRuledef injectResolutionRule(builder: RuleBuilder): Unit = { resolutionRuleBuilders += builder }结合BaseSessionStateBuilder中的customResolutionRules,可知,这第二个自定义扩展是实现了extendedResolutionRules中的customResolutionRules,对应Analyzer的规则批batches的 Resolutionprotected def customResolutionRules: Seq[Rule[LogicalPlan]] = { extensions.buildResolutionRules(session) private[sql] def buildResolutionRules(session: SparkSession): Seq[Rule[LogicalPlan]] = { resolutionRuleBuilders.map(_.apply(session)) }injectPostHocResolutionRule再看一下injectPostHocResolutionRuledef injectPostHocResolutionRule(builder: RuleBuilder): Unit = { postHocResolutionRuleBuilders += builder protected def customPostHocResolutionRules: Seq[Rule[LogicalPlan]] = { extensions.buildPostHocResolutionRules(session) private[sql] def buildPostHocResolutionRules(session: SparkSession): Seq[Rule[LogicalPlan]] = { postHocResolutionRuleBuilders.map(_.apply(session)) }可知第三个自定义扩展是实现了postHocResolutionRules中的customPostHocResolutionRules,对应Analyzer的规则批batches的 Post-Hoc ResolutionHudi自定义analysis规则Hudi自定义扩展具体的规则有extendedResolutionRules: session => HoodieResolveReferences(session) 和 session => HoodieAnalysis(session)postHocResolutionRules: session => HoodiePostAnalysisRule(session)具体代码为:def customResolutionRules: Seq[RuleBuilder] = { val rules: ListBuffer[RuleBuilder] = ListBuffer( // Default rules session => HoodieResolveReferences(session), session => HoodieAnalysis(session) if (HoodieSparkUtils.gteqSpark3_2) { val dataSourceV2ToV1FallbackClass = "org.apache.spark.sql.hudi.analysis.HoodieDataSourceV2ToV1Fallback" val dataSourceV2ToV1Fallback: RuleBuilder = session => ReflectionUtils.loadClass(dataSourceV2ToV1FallbackClass, session).asInstanceOf[Rule[LogicalPlan]] val spark3AnalysisClass = "org.apache.spark.sql.hudi.analysis.HoodieSpark3Analysis" val spark3Analysis: RuleBuilder = session => ReflectionUtils.loadClass(spark3AnalysisClass, session).asInstanceOf[Rule[LogicalPlan]] val spark3ResolveReferencesClass = "org.apache.spark.sql.hudi.analysis.HoodieSpark3ResolveReferences" val spark3ResolveReferences: RuleBuilder = session => ReflectionUtils.loadClass(spark3ResolveReferencesClass, session).asInstanceOf[Rule[LogicalPlan]] val spark32ResolveAlterTableCommandsClass = "org.apache.spark.sql.hudi.ResolveHudiAlterTableCommandSpark32" val spark32ResolveAlterTableCommands: RuleBuilder = session => 
ReflectionUtils.loadClass(spark32ResolveAlterTableCommandsClass, session).asInstanceOf[Rule[LogicalPlan]] // NOTE: PLEASE READ CAREFULLY // It's critical for this rules to follow in this order, so that DataSource V2 to V1 fallback // is performed prior to other rules being evaluated rules ++= Seq(dataSourceV2ToV1Fallback, spark3Analysis, spark3ResolveReferences, spark32ResolveAlterTableCommands) } else if (HoodieSparkUtils.gteqSpark3_1) { val spark31ResolveAlterTableCommandsClass = "org.apache.spark.sql.hudi.ResolveHudiAlterTableCommand312" val spark31ResolveAlterTableCommands: RuleBuilder = session => ReflectionUtils.loadClass(spark31ResolveAlterTableCommandsClass, session).asInstanceOf[Rule[LogicalPlan]] rules ++= Seq(spark31ResolveAlterTableCommands) rules def customPostHocResolutionRules: Seq[RuleBuilder] = { val rules: ListBuffer[RuleBuilder] = ListBuffer( // Default rules session => HoodiePostAnalysisRule(session) if (HoodieSparkUtils.gteqSpark3_2) { val spark3PostHocResolutionClass = "org.apache.spark.sql.hudi.analysis.HoodieSpark3PostAnalysisRule" val spark3PostHocResolution: RuleBuilder = session => ReflectionUtils.loadClass(spark3PostHocResolutionClass, session).asInstanceOf[Rule[LogicalPlan]] rules += spark3PostHocResolution rules }可以看出来如果是spark3的话,还有别的规则,因为这里是spark2,所以只看默认的规则就可以了看完sparkSession.sessionState.analyzer的定义,接下来就看analysis具体的实现方法了executeAndCheckdef executeAndCheck(plan: LogicalPlan): LogicalPlan = AnalysisHelper.markInAnalyzer { val analyzed = execute(plan) try { checkAnalysis(analyzed) analyzed } catch { case e: AnalysisException => val ae = new AnalysisException(e.message, e.line, e.startPosition, Option(analyzed)) ae.setStackTrace(e.getStackTrace) throw ae }根据参考文章中的学习,我们知道,主要分为2 步:1、执行 execute(plan)2、校验 checkAnalysis(analyzed)这里主要讲Hudi的实现,我们知道在执行阶段实际是调用父类RuleExecutor的execute方法,它会遍历规则批中的规则,并应用规则,调用规则的apply方法override def execute(plan: LogicalPlan): LogicalPlan = { AnalysisContext.reset() try { executeSameContext(plan) } finally { AnalysisContext.reset() private def executeSameContext(plan: LogicalPlan): LogicalPlan = super.execute(plan def execute(plan: TreeType): TreeType = { var curPlan = plan val queryExecutionMetrics = RuleExecutor.queryExecutionMeter batches.foreach { batch => val batchStartPlan = curPlan var iteration = 1 var lastPlan = curPlan var continue = true // Run until fix point (or the max number of iterations as specified in the strategy. 
while (continue) { curPlan = batch.rules.foldLeft(curPlan) { case (plan, rule) => val startTime = System.nanoTime() // 调用规则的`apply`方法 val result = rule(plan) val runTime = System.nanoTime() - startTime if (!result.fastEquals(plan)) { queryExecutionMetrics.incNumEffectiveExecution(rule.ruleName) queryExecutionMetrics.incTimeEffectiveExecutionBy(rule.ruleName, runTime) logTrace( s""" |=== Applying Rule ${rule.ruleName} === |${sideBySide(plan.treeString, result.treeString).mkString("\n")} """.stripMargin) }应用Hudi规则规则批中有很多规则,这里只跟踪规则如何作用到Hudi的自定义规则中,并最终执行Hudi的建表的,首先遍历到extendedResolutionRules的customResolutionRules,具体有两个第一个规则:case class HoodieResolveReferences(sparkSession: SparkSession) extends Rule[LogicalPlan] with SparkAdapterSupport { private lazy val analyzer = sparkSession.sessionState.analyzer def apply(plan: LogicalPlan): LogicalPlan = plan resolveOperatorsUp { // Resolve merge into case mergeInto @ MergeIntoTable(target, source, mergeCondition, matchedActions, notMatchedActions) if sparkAdapter.isHoodieTable(target, sparkSession) && target.resolved => val resolver = sparkSession.sessionState.conf.resolver val resolvedSource = analyzer.execute(source) def isInsertOrUpdateStar(assignments: Seq[Assignment]): Boolean = { if (assignments.isEmpty) { } else { ......因为这里为create table,所以没有匹配到 mergeInto等第二个规则:override def apply(plan: LogicalPlan): LogicalPlan = { plan match { // Convert to MergeIntoHoodieTableCommand case m @ MergeIntoTable(target, _, _, _, _) if m.resolved && sparkAdapter.isHoodieTable(target, sparkSession) => MergeIntoHoodieTableCommand(m) // Convert to UpdateHoodieTableCommand case u @ UpdateTable(table, _, _) if u.resolved && sparkAdapter.isHoodieTable(table, sparkSession) => UpdateHoodieTableCommand(u) // Convert to DeleteHoodieTableCommand case d @ DeleteFromTable(table, _) if d.resolved && sparkAdapter.isHoodieTable(table, sparkSession) => DeleteHoodieTableCommand(d) // Convert to InsertIntoHoodieTableCommand case l if sparkAdapter.isInsertInto(l) => val (table, partition, query, overwrite, _) = sparkAdapter.getInsertIntoChildren(l).get table match { case relation: LogicalRelation if sparkAdapter.isHoodieTable(relation, sparkSession) => new InsertIntoHoodieTableCommand(relation, query, partition, overwrite) case _ => // Convert to CreateHoodieTableAsSelectCommand case CreateTable(table, mode, Some(query)) if query.resolved && sparkAdapter.isHoodieTable(table) => CreateHoodieTableAsSelectCommand(table, mode, query) // Convert to CompactionHoodieTableCommand case CompactionTable(table, operation, options) if table.resolved && sparkAdapter.isHoodieTable(table, sparkSession) => val tableId = getTableIdentifier(table) val catalogTable = sparkSession.sessionState.catalog.getTableMetadata(tableId) CompactionHoodieTableCommand(catalogTable, operation, options) // Convert to CompactionHoodiePathCommand case CompactionPath(path, operation, options) =>虽然这里为CreateTable,但是query为空,所以没有匹配到,不会走到CreateHoodieTableAsSelectCommand,也就是customResolutionRules的两个规则都没有起到实际作用,但是如果是其他的语句,比如merge into、ctas等到是会起作用的。接下来遍历postHocResolutionRules,其中有两个规则比较重要DataSourceAnalysis(conf)和customPostHocResolutionRules,首先应用规则DataSourceAnalysiscase class DataSourceAnalysis(conf: SQLConf) extends Rule[LogicalPlan] with CastSupport { override def apply(plan: LogicalPlan): LogicalPlan = plan resolveOperators { case CreateTable(tableDesc, mode, None) if DDLUtils.isDatasourceTable(tableDesc) => DDLUtils.checkDataColNames(tableDesc) CreateDataSourceTableCommand(tableDesc, ignoreIfExists = mode == SaveMode.Ignore) case 
CreateTable(tableDesc, mode, Some(query)) if query.resolved && DDLUtils.isDatasourceTable(tableDesc) => DDLUtils.checkDataColNames(tableDesc.copy(schema = query.schema)) CreateDataSourceTableAsSelectCommand(tableDesc, mode, query, query.output.map(_.name)) case InsertIntoTable(l @ LogicalRelation(_: InsertableRelation, _, _, _), parts, query, overwrite, false) if parts.isEmpty => InsertIntoDataSourceCommand(l, query, overwrite) case InsertIntoDir(_, storage, provider, query, overwrite) if provider.isDefined && provider.get.toLowerCase(Locale.ROOT) != DDLUtils.HIVE_PROVIDER => val outputPath = new Path(storage.locationUri.get) if (overwrite) DDLUtils.verifyNotReadPath(query, outputPath) InsertIntoDataSourceDirCommand(storage, provider.get, query, overwrite) ...... def isDatasourceTable(table: CatalogTable): Boolean = { table.provider.isDefined && table.provider.get.toLowerCase(Locale.ROOT) != HIVE_PROVIDER val HIVE_PROVIDER = "hive"这里会匹配到第一个case,根据前面讲的我们知道它是CreateTable(tableDesc, mode, None),它的provider为hudi,所以是datasourceTable,也就是经过这个规则应用后CreateTable转变为了CreateDataSourceTableCommand再看规则customPostHocResolutionRulescase class HoodiePostAnalysisRule(sparkSession: SparkSession) extends Rule[LogicalPlan] { override def apply(plan: LogicalPlan): LogicalPlan = { plan match { // Rewrite the CreateDataSourceTableCommand to CreateHoodieTableCommand case CreateDataSourceTableCommand(table, ignoreIfExists) if sparkAdapter.isHoodieTable(table) => CreateHoodieTableCommand(table, ignoreIfExists) // Rewrite the DropTableCommand to DropHoodieTableCommand case DropTableCommand(tableName, ifExists, isView, purge) if sparkAdapter.isHoodieTable(tableName, sparkSession) => DropHoodieTableCommand(tableName, ifExists, isView, purge) // Rewrite the AlterTableDropPartitionCommand to AlterHoodieTableDropPartitionCommand case AlterTableDropPartitionCommand(tableName, specs, ifExists, purge, retainData) if sparkAdapter.isHoodieTable(tableName, sparkSession) => AlterHoodieTableDropPartitionCommand(tableName, specs, ifExists, purge, retainData) // Rewrite the AlterTableRenameCommand to AlterHoodieTableRenameCommand // Rewrite the AlterTableAddColumnsCommand to AlterHoodieTableAddColumnsCommand case AlterTableAddColumnsCommand(tableId, colsToAdd) if sparkAdapter.isHoodieTable(tableId, sparkSession) => AlterHoodieTableAddColumnsCommand(tableId, colsToAdd) // Rewrite the AlterTableRenameCommand to AlterHoodieTableRenameCommand case AlterTableRenameCommand(oldName, newName, isView) ...... // 判断是否是Hudi表,通过比较provider是否等于"hudi" def isHoodieTable(table: CatalogTable): Boolean = { table.provider.map(_.toLowerCase(Locale.ROOT)).orNull == "hudi" }匹配到CreateDataSourceTableCommand,并判断是Hudi表,将其转变为CreateHoodieTableCommand,这个CreateHoodieTableCommand的run方法实现了创建Hudi表的逻辑case class CreateHoodieTableCommand(table: CatalogTable, ignoreIfExists: Boolean) extends HoodieLeafRunnableCommand with SparkAdapterSupport { override def run(sparkSession: SparkSession): Seq[Row] = { val tableIsExists = sparkSession.sessionState.catalog.tableExists(table.identifier) if (tableIsExists) { if (ignoreIfExists) { // scalastyle:off return Seq.empty[Row] // scalastyle:on } else { throw new IllegalArgumentException(s"Table ${table.identifier.unquotedString} already exists.") val hoodieCatalogTable = HoodieCatalogTable(sparkSession, table) // check if there are conflict between table configs defined in hoodie table and properties defined in catalog. 
CreateHoodieTableCommand.validateTblProperties(hoodieCatalogTable) // init hoodie table hoodieCatalogTable.initHoodieTable() try { // create catalog table for this hoodie table CreateHoodieTableCommand.createTableInCatalog(sparkSession, hoodieCatalogTable, ignoreIfExists) } catch { case NonFatal(e) => logWarning(s"Failed to create catalog table in metastore: ${e.getMessage}") Seq.empty[Row] }optimization 和 planning那么又是在哪里执行了run方法呢,其实是在后面的planning阶段,根据前面的学习,我们知道analysis阶段后面还有optimization和planning阶段,但是参考文章中提到对于 SELECT 从句,sql() 方法只会触发 parsing 和 analysis 阶段。optimization和planning阶段是在show() 方法触发的但是run方法又是在planning阶段,这就很费解,经过自己阅读源码发现,对于select语句确实是这样的,但是对于Command类型的,不用show方法也会有optimization和planning,入口为new Dataset[Row](sparkSession, qe, RowEncoder(qe.analyzed.schema)) class Dataset[T] private[sql]( @transient val sparkSession: SparkSession, @DeveloperApi @InterfaceStability.Unstable @transient val queryExecution: QueryExecution, encoder: Encoder[T]) extends Serializable { queryExecution.assertAnalyzed() // Note for Spark contributors: if adding or updating any action in `Dataset`, please make sure // you wrap it with `withNewExecutionId` if this actions doesn't call other action. def this(sparkSession: SparkSession, logicalPlan: LogicalPlan, encoder: Encoder[T]) = { this(sparkSession, sparkSession.sessionState.executePlan(logicalPlan), encoder) def this(sqlContext: SQLContext, logicalPlan: LogicalPlan, encoder: Encoder[T]) = { this(sqlContext.sparkSession, logicalPlan, encoder) @transient private[sql] val logicalPlan: LogicalPlan = { // For various commands (like DDL) and queries with side effects, we force query execution // to happen right away to let these side effects take place eagerly. queryExecution.analyzed match { case c: Command => LocalRelation(c.output, withAction("command", queryExecution)(_.executeCollect())) case u @ Union(children) if children.forall(_.isInstanceOf[Command]) => LocalRelation(u.output, withAction("command", queryExecution)(_.executeCollect())) case _ => queryExecution.analyzed }LogicalPlanDataset有一个变量LogicalPlan,它不是懒加载的,在new的时候就会触发,我们知道在analysis阶段返回的是CreateHoodieTableCommand,它是Command的子类,所以会匹配成功,调用LocalRelation(c.output, withAction("command", queryExecution)(_.executeCollect())) private def withAction[U](name: String, qe: QueryExecution)(action: SparkPlan => U) = { try { qe.executedPlan.foreach { plan => plan.resetMetrics() val start = System.nanoTime() val result = SQLExecution.withNewExecutionId(sparkSession, qe) { action(qe.executedPlan) val end = System.nanoTime() sparkSession.listenerManager.onSuccess(name, qe, end - start) result } catch { case e: Exception => sparkSession.listenerManager.onFailure(name, qe, e) throw e lazy val executedPlan: SparkPlan = prepareForExecution(sparkPlan) lazy val sparkPlan: SparkPlan = { SparkSession.setActiveSession(sparkSession) // TODO: We use next(), i.e. take the first plan returned by the planner, here for now, // but we will implement to choose the best plan. 
planner.plan(ReturnAnswer(optimizedPlan)).next() lazy val optimizedPlan: LogicalPlan = sparkSession.sessionState.optimizer.execute(withCachedData) protected def planner = sparkSession.sessionState.planner其中的executedPlan就会先触发optimization接着触发planning,具体的逻辑也是应用各种规则/策略,参考文章已经讲的很详细了,因为这两个阶段我们没有自定义扩展规则,所以主要看一下run方法是在哪里执行的在上面的代码中我们看到executedPlan会用到sparkPlan,继而触发planner.plan,对应planning阶段,那么需要看一下planner在哪定义的protected def planner: SparkPlanner = { new SparkPlanner(session.sparkContext, conf, experimentalMethods) { override def extraPlanningStrategies: Seq[Strategy] = super.extraPlanningStrategies ++ customPlanningStrategies }同样这里的planner是在BaseSessionStateBuilder定义的,它重写了extraPlanningStrategies,接着看一下planner.planplanner.plan/** A list of execution strategies that can be used by the planner */ def strategies: Seq[GenericStrategy[PhysicalPlan]] def plan(plan: LogicalPlan): Iterator[PhysicalPlan] = { // Obviously a lot to do here still... // Collect physical plan candidates. val candidates = strategies.iterator.flatMap(_(plan)) // The candidates may contain placeholders marked as [[planLater]], // so try to replace them by their child plans. val plans = candidates.flatMap { candidate => val placeholders = collectPlaceholders(candidate) if (placeholders.isEmpty) { // Take the candidate as is because it does not contain placeholders. Iterator(candidate) } else { // Plan the logical plan marked as [[planLater]] and replace the placeholders. placeholders.iterator.foldLeft(Iterator(candidate)) { case (candidatesWithPlaceholders, (placeholder, logicalPlan)) => // Plan the logical plan for the placeholder. val childPlans = this.plan(logicalPlan) candidatesWithPlaceholders.flatMap { candidateWithPlaceholders => childPlans.map { childPlan => // Replace the placeholder by the child plan candidateWithPlaceholders.transformUp { case p if p.eq(placeholder) => childPlan val pruned = prunePlans(plans) assert(pruned.hasNext, s"No plan for $plan") pruned }这个方法是将逻辑计划LogicalPlan转为物理计划PhysicalPlan,根据参考文章我们知道上面源码的核心在于这一行:val candidates = strategies.iterator.flatMap(_(plan))部分同学可能看不懂这里的语法,实际上问题的答案在GenericStrategy的源码中:策略序列的迭代器扁平化后对其中每一次都调用它的apply方法,这个方法会将逻辑计划转换成物理计划的序列。/** * 给定一个 LogicalPlan,返回可用于执行的物理计划列表。 * 如果此策略不适用于给定的逻辑操作,则应返回空列表 abstract class GenericStrategy[PhysicalPlan <: TreeNode[PhysicalPlan]] extends Logging { * 返回执行计划的物理计划的占位符。 * QueryPlanner将使用其他可用的执行策略自动填写此占位符。 protected def planLater(plan: LogicalPlan): PhysicalPlan def apply(plan: LogicalPlan): Seq[PhysicalPlan] }也就是这里会遍历strategies并调用它的apply方法,strategies是在SparkPlanner中定义的class SparkPlanner(val session: SparkSession, val experimentalMethods: ExperimentalMethods) extends SparkStrategies with SQLConfHelper { def numPartitions: Int = conf.numShufflePartitions override def strategies: Seq[Strategy] = experimentalMethods.extraStrategies ++ extraPlanningStrategies ++ ( LogicalQueryStageStrategy :: PythonEvals :: new DataSourceV2Strategy(session) :: FileSourceStrategy :: DataSourceStrategy :: SpecialLimits :: Aggregation :: Window :: JoinSelection :: InMemoryScans :: SparkScripts :: WithCTEStrategy :: BasicOperators :: Nil)BasicOperators其中有一个BasicOperators,它的apply方法为 object BasicOperators extends Strategy { def apply(plan: LogicalPlan): Seq[SparkPlan] = plan match { case d: DataWritingCommand => DataWritingCommandExec(d, planLater(d.query)) :: Nil case r: RunnableCommand => ExecutedCommandExec(r) :: Nil case MemoryPlan(sink, output) => val encoder = RowEncoder(StructType.fromAttributes(output)) val toRow = encoder.createSerializer() LocalTableScanExec(output, 
sink.allData.map(r => toRow(r).copy())) :: Nil我们这里的plan为CreateHoodieTableCommand是RunnableCommand的子类,所以返回ExecutedCommandExec,它是SparkPlan的子类回到上面的withAction方法,其中会调用withNewExecutionIddef withNewExecutionId[T]( sparkSession: SparkSession, queryExecution: QueryExecution)(body: => T): T = { val sc = sparkSession.sparkContext val oldExecutionId = sc.getLocalProperty(EXECUTION_ID_KEY) val executionId = SQLExecution.nextExecutionId sc.setLocalProperty(EXECUTION_ID_KEY, executionId.toString) executionIdToQueryExecution.put(executionId, queryExecution) try { // sparkContext.getCallSite() would first try to pick up any call site that was previously // set, then fall back to Utils.getCallSite(); call Utils.getCallSite() directly on // streaming queries would give us call site like "run at <unknown>:0" val callSite = sc.getCallSite() withSQLConfPropagated(sparkSession) { sc.listenerBus.post(SparkListenerSQLExecutionStart( executionId, callSite.shortForm, callSite.longForm, queryExecution.toString, SparkPlanInfo.fromSparkPlan(queryExecution.executedPlan), System.currentTimeMillis())) try { } finally { sc.listenerBus.post(SparkListenerSQLExecutionEnd( executionId, System.currentTimeMillis())) } finally { executionIdToQueryExecution.remove(executionId) sc.setLocalProperty(EXECUTION_ID_KEY, oldExecutionId) }withNewExecutionId方法会调用方法体body,这里的body为executeCollect,根据withAction方法的定义可知executeCollect为SparkPlan中的方法executeCollectSparkPlan.executeCollect/** * Runs this query returning the result as an array. def executeCollect(): Array[InternalRow] = { val byteArrayRdd = getByteArrayRdd() val results = ArrayBuffer[InternalRow]() byteArrayRdd.collect().foreach { countAndBytes => decodeUnsafeRows(countAndBytes._2).foreach(results.+=) results.toArray }但是这里实际调用的是子类ExecutedCommandExec重写的方法ExecutedCommandExec.executeCollectcase class ExecutedCommandExec(cmd: RunnableCommand) extends LeafExecNode { override lazy val metrics: Map[String, SQLMetric] = cmd.metrics * A concrete command should override this lazy field to wrap up any side effects caused by the * command or any other computation that should be evaluated exactly once. The value of this field * can be used as the contents of the corresponding RDD generated from the physical plan of this * command. * The `execute()` method of all the physical command classes should reference `sideEffectResult` * so that the command can be executed eagerly right after the command query is created. protected[sql] lazy val sideEffectResult: Seq[InternalRow] = { val converter = CatalystTypeConverters.createToCatalystConverter(schema) cmd.run(sqlContext.sparkSession).map(converter(_).asInstanceOf[InternalRow]) override def executeCollect(): Array[InternalRow] = sideEffectResult.toArray在调用executeCollect方法时先执行sideEffectResult的逻辑,其中会执行cmd.run方法,这里的run方法是CreateHoodieTableCommand重写的run方法,实现了创建Hudi表的逻辑普通Hive表现在我们知道通过using hudi创建Hudi表的整个逻辑了,那么如果我们扩展了Hudi但是不是用using hudi,就想建一个普通的表,那么它的逻辑是啥,会不会因为扩展了Hudi而有问题呢?代码中我们首先要启用Hive:.enableHiveSupport()上面我们讲到,#createTable,#createHiveTable,他俩的区别为#createTable多了一个tableProvider,tableProvider包含关键字USING,如果sql里没有using,那么statement应该匹配为#createHiveTable,在后面的statement匹配时应该匹配到CreateHiveTableContextpublic static class CreateHiveTableContext extends StatementContext { @Override public <T> T accept(ParseTreeVisitor<? extends T> visitor) { if ( visitor instanceof SqlBaseVisitor ) return ((SqlBaseVisitor<? 
extends T>)visitor).visitCreateHiveTable(this); else return visitor.visitChildren(this); }接着调用visitCreateHiveTable,同样是在SparkSqlAstBuilderoverride def visitCreateHiveTable(ctx: CreateHiveTableContext): LogicalPlan = withOrigin(ctx) { val (name, temp, ifNotExists, external) = visitCreateTableHeader(ctx.createTableHeader) // TODO: implement temporary tables if (temp) { throw new ParseException( "CREATE TEMPORARY TABLE is not supported yet. " + "Please use CREATE TEMPORARY VIEW as an alternative.", ctx) if (ctx.skewSpec.size > 0) { operationNotAllowed("CREATE TABLE ... SKEWED BY", ctx) ...... // TODO support the sql text - have a proper location for this! val tableDesc = CatalogTable( identifier = name, tableType = tableType, storage = storage, schema = schema, bucketSpec = bucketSpec, provider = Some(DDLUtils.HIVE_PROVIDER), partitionColumnNames = partitionCols.map(_.name), properties = properties, comment = Option(ctx.comment).map(string))和visitCreateTable的逻辑差不多,其中一个区别是provider为默认为的”hive”,同样返回CreateTable因为这里启用了Hive,我们需要看一下HiveSessionStateBuilder中的规则有啥不同override protected def analyzer: Analyzer = new Analyzer(catalog, conf) { override val extendedResolutionRules: Seq[Rule[LogicalPlan]] = new ResolveHiveSerdeTable(session) +: new FindDataSourceTable(session) +: new ResolveSQLOnFile(session) +: customResolutionRules override val postHocResolutionRules: Seq[Rule[LogicalPlan]] = new DetermineTableStats(session) +: RelationConversions(conf, catalog) +: PreprocessTableCreation(session) +: PreprocessTableInsertion(conf) +: DataSourceAnalysis(conf) +: HiveAnalysis +: customPostHocResolutionRules override val extendedCheckRules: Seq[LogicalPlan => Unit] = PreWriteCheck +: PreReadCheck +: customCheckRules }可见多了Hive相关的规则ResolveHiveSerdeTable 和 HiveAnalysis,我们知道在上面讲的几个规则中,因为它不是datasourceTable,所以都没有匹配上,在DataSourceAnalysis后面有一个HiveAnalysis,看一下它的apply方法object HiveAnalysis extends Rule[LogicalPlan] { override def apply(plan: LogicalPlan): LogicalPlan = plan resolveOperators { case InsertIntoTable(r: HiveTableRelation, partSpec, query, overwrite, ifPartitionNotExists) if DDLUtils.isHiveTable(r.tableMeta) => InsertIntoHiveTable(r.tableMeta, partSpec, query, overwrite, ifPartitionNotExists, query.output.map(_.name)) case CreateTable(tableDesc, mode, None) if DDLUtils.isHiveTable(tableDesc) => DDLUtils.checkDataColNames(tableDesc) CreateTableCommand(tableDesc, ignoreIfExists = mode == SaveMode.Ignore) case CreateTable(tableDesc, mode, Some(query)) if DDLUtils.isHiveTable(tableDesc) => DDLUtils.checkDataColNames(tableDesc) CreateHiveTableAsSelectCommand(tableDesc, query, query.output.map(_.name), mode) case InsertIntoDir(isLocal, storage, provider, child, overwrite) if DDLUtils.isHiveTable(provider) => val outputPath = new Path(storage.locationUri.get) if (overwrite) DDLUtils.verifyNotReadPath(child, outputPath) InsertIntoHiveDirCommand(isLocal, storage, child, overwrite, child.output.map(_.name)) val HIVE_PROVIDER = "hive" def isHiveTable(table: CatalogTable): Boolean = { isHiveTable(table.provider) }因为他是hiveTable所以匹配到了第二个,那么转化为CreateTableCommand,在后面的customPostHocResolutionRules也就是HoodiePostAnalysisRule,也不会匹配到CreateTableCommand,也就是没有用到Hudi的逻辑,所以最终执行CreateTableCommand的run方法,也就是执行sparkSession.sessionState.catalog.createTablecase class CreateTableCommand( table: CatalogTable, ignoreIfExists: Boolean) extends RunnableCommand { override def run(sparkSession: SparkSession): Seq[Row] = { sparkSession.sessionState.catalog.createTable(table, ignoreIfExists) Seq.empty[Row] 
}
As we can see, extending the Hudi rules does not affect the creation of an ordinary Hive table.
Summary
By combining the Hudi and Spark source code and using Create Table as the example, this article traced the whole flow from SQL parsing to final execution and clarified the key steps along the way: how execution jumps from the Spark SQL source code into the Hudi Spark SQL source code to run Hudi's table-creation logic, and what distinguishes creating a Hudi table from creating an ordinary Hive table. With this in hand, the logic of other SQL statements such as CTAS, INSERT, UPDATE, DELETE and MERGE is easy to work out as well: just focus on a few key points, for example which case the plan is pattern-matched to and the differences between versions, and then look at Hudi's own logic.
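To see the Hudi-vs-plain-table difference summarized above from the user's side, here is a hypothetical check (table names and properties are made up for illustration; it assumes a SparkSession with HoodieSparkSessionExtension enabled, as set up earlier): create one table USING hudi and one plain table, then compare the Provider recorded in the catalog, which is exactly the field that isHoodieTable() inspects.

```scala
// Hypothetical illustration only; assumes a Hudi-enabled `spark` session.
spark.sql(
  """create table hudi_demo_tbl (id int, name string, ts long) using hudi
    |tblproperties (primaryKey = 'id', preCombineField = 'ts')""".stripMargin)

spark.sql("create table parquet_demo_tbl (id int, name string) using parquet")

// DESCRIBE TABLE EXTENDED exposes the provider; 'hudi' routes DDL/DML through the
// Hudi commands discussed above, while 'parquet' stays on Spark's default path.
spark.sql("describe table extended hudi_demo_tbl")
  .filter("col_name = 'Provider'").show(false)
spark.sql("describe table extended parquet_demo_tbl")
  .filter("col_name = 'Provider'").show(false)
```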
前言简要总结Hudi Spark Sql源码执行逻辑,从建表开始。其实从去年开始接触Hudi的时候就研究学习了Hudi Spark SQL的部分源码,并贡献了几个PR,但是完整的逻辑有些地方还没有完全梳理清楚,所以现在想要从头开始学习,搞懂一些知识难点,这样以后看相关源码的时候就不会导致因为一些关键点不懂影响进度。由于本人能力和精力有限,本文只讲解自己觉得比较关键的点,主要目的是梳理整个流程。Spark SQL源码既然是学习Hudi Spark SQL源码,那么肯定离不开Spark SQL源码,所以需要先学习了解Spark SQL的源码,在CSDN上发现一位作者写的几篇文章不错,这几天我也主要是参考他写的这几篇文章并结合源码进行学习的,我把它们放在后面的参考文章中,大家可以参考一下。版本Spark 2.4.4Hudi master分支 0.12.0-SNAPSHOT虽然在学习Spark SQL源码的时候用的是Spark3.3,但是因为Hudi源码默认的Spark版本是2.4.4,如果改版本在IDEA调试的话比较麻烦,所以是用Spark2.4.4版本,但我和Spark3.3对比了一下,大致逻辑是一样的。示例代码直接拿源码里的TestCreateTable的第一个测试语句spark.sql( s""" | create table $tableName ( | id int, | name string, | price double, | ts long | ) using hudi | tblproperties ( | hoodie.database.name = "databaseName", | hoodie.table.name = "tableName", | primaryKey = 'id', | preCombineField = 'ts', | hoodie.datasource.write.operation = 'upsert' """.stripMargin)打印计划我们先看一下上面的代码的执行计划val df = spark.sql(createTableSql) df.explain(true)打印结果== Parsed Logical Plan == 'CreateTable `h0`, ErrorIfExists == Analyzed Logical Plan == CreateHoodieTableCommand `h0`, false == Optimized Logical Plan == CreateHoodieTableCommand `h0`, false == Physical Plan == Execute CreateHoodieTableCommand +- CreateHoodieTableCommand `h0`, false 分别对应阶段parsing、analysis、optimization 、planning扩展 HoodieSparkSessionExtension要想使Spark SQL支持Hudi,必须在启动spark的时候扩展 HoodieSparkSessionExtension,先放在这里,后面分析的时候会用到== Parsed Logical Plan == 'CreateTable `h0`, ErrorIfExists == Analyzed Logical Plan == CreateHoodieTableCommand `h0`, false == Optimized Logical Plan == CreateHoodieTableCommand `h0`, false == Physical Plan == Execute CreateHoodieTableCommand +- CreateHoodieTableCommand `h0`, false ANTLR学习Spark SQL的时候我们知道SQL的识别、编译、解释是由ANTLR实现的,所以我们需要看一下Hudi源码里的.g4文件,一共有三个hudi-spark模块下的 HoodieSqlCommon.g4hudi-spark2模块下的 SqlBase.g4,这个其实是拷贝的Spark2.4.5源码里的SqlBase.g4hudi-spark2模块下的 HoodieSqlBase.g4 其中导入了上面的SqlBase.g4:import SqlBase入口spark.sqldef sql(sqlText: String): DataFrame = { Dataset.ofRows(self, sessionState.sqlParser.parsePlan(sqlText)) }sessionState.sqlParser首先看一下sessionState是由哪个类实现的lazy val sessionState: SessionState = { parentSessionState .map(_.clone(this)) .getOrElse { val state = SparkSession.instantiateSessionState( SparkSession.sessionStateClassName(sparkContext.conf), self) initialSessionOptions.foreach { case (k, v) => state.conf.setConfString(k, v) } state private def sessionStateClassName(conf: SparkConf): String = { conf.get(CATALOG_IMPLEMENTATION) match { case "hive" => HIVE_SESSION_STATE_BUILDER_CLASS_NAME case "in-memory" => classOf[SessionStateBuilder].getCanonicalName private val HIVE_SESSION_STATE_BUILDER_CLASS_NAME = "org.apache.spark.sql.hive.HiveSessionStateBuilder" def enableHiveSupport(): Builder = synchronized { if (hiveClassesArePresent) { config(CATALOG_IMPLEMENTATION.key, "hive") } else { throw new IllegalArgumentException( "Unable to instantiate SparkSession with Hive support because " + "Hive classes are not found.") }可以看到在不启用Hive的情况下,使用的是SessionStateBuilder,启用了Hive使用HiveSessionStateBuilder,我们本地测试代码默认不启用Hive,那么看一下SessionStateBuilderclass SessionStateBuilder( session: SparkSession, parentState: Option[SessionState] = None) extends BaseSessionStateBuilder(session, parentState) { override protected def newBuilder: NewBuilder = new SessionStateBuilder(_, _) }SessionStateBuilder继承自BaseSessionStateBuilder,也就是sessionState是通过BaseSessionStateBuilder.build创建的,其中定义了sqlParserprotected lazy val sqlParser: ParserInterface = { extensions.buildParser(session, new SparkSqlParser(conf)) }extensions 是在SparkSession.Builder中定义的: private[this] val 
extensions = new SparkSessionExtensions SparkSessionExtensionstype ParserBuilder = (SparkSession, ParserInterface) => ParserInterface private[this] val parserBuilders = mutable.Buffer.empty[ParserBuilder] private[sql] def buildParser( session: SparkSession, initial: ParserInterface): ParserInterface = { parserBuilders.foldLeft(initial) { (parser, builder) => builder(session, parser) }默认情况下parserBuilders为空,那么foldLeft返回的值为默认的SparkSqlParser,一开始没搞懂parserBuilders为空的情况下为啥会返回SparkSqlParser,直到看了foldLeft的源码:def foldLeft[B](z: B)(op: (B, A) => B): B = { var result = z this foreach (x => result = op(result, x)) result }首先将参数z赋值为result,然后再foreach利用op函数改变result的值,因为这里parserBuilders为空,所以没有执行foreach里面的逻辑,那么返回结果result为参数z,也就是这里的new SparkSqlParser(conf)Hudi自定义sqlParser搞懂了Spark SQL默认的sqlParser为SparkSqlParser,那么Hudi是一样的吗?那我们就需要看一下开始的HoodieSparkSessionExtension,我们在程序里是这样扩展的.withExtensions(new HoodieSparkSessionExtension)def withExtensions(f: SparkSessionExtensions => Unit): Builder = synchronized { f(extensions) }这里的f为HoodieSparkSessionExtension(extends (SparkSessionExtensions => Unit),继承Function1 ),也就是一开始withExtensions时会调用HoodieSparkSessionExtension的apply方法,其中会执行extensions.injectParser { (session, parser) => new HoodieCommonSqlParser(session, parser) }这里的injectParser会将HoodieCommonSqlParser添加到parserBuilders中def injectParser(builder: ParserBuilder): Unit = { parserBuilders += builder }再根据上面对buildParser的理解,那么在Hudi Spark SQL中sqlParser为HoodieCommonSqlParserparsingparsePlan接下来看parsePlan方法,这个方法对应parsing阶段,这里实际为HoodieCommonSqlParser.parsePlanprivate lazy val builder = new HoodieSqlCommonAstBuilder(session, delegate) private lazy val sparkExtendedParser = sparkAdapter.createExtendedSparkParser .map(_(session, delegate)).getOrElse(delegate) override def parsePlan(sqlText: String): LogicalPlan = parse(sqlText) { parser => builder.visit(parser.singleStatement()) match { case plan: LogicalPlan => plan case _=> sparkExtendedParser.parsePlan(sqlText) }先看parse方法protected def parse[T](command: String)(toResult: HoodieSqlCommonParser => T): T = { logDebug(s"Parsing command: $command") val lexer = new HoodieSqlCommonLexer(new UpperCaseCharStream(CharStreams.fromString(command))) lexer.removeErrorListeners() lexer.addErrorListener(ParseErrorListener) val tokenStream = new CommonTokenStream(lexer) val parser = new HoodieSqlCommonParser(tokenStream) parser.removeErrorListeners() parser.addErrorListener(ParseErrorListener) try { try { // first, try parsing with potentially faster SLL mode parser.getInterpreter.setPredictionMode(PredictionMode.SLL) toResult(parser) catch { case e: ParseCancellationException => // if we fail, parse with LL mode tokenStream.seek(0) // rewind input stream parser.reset() // Try Again. 
parser.getInterpreter.setPredictionMode(PredictionMode.LL) toResult(parser) catch { case e: ParseException if e.command.isDefined => throw e case e: ParseException => throw e.withCommand(command) case e: AnalysisException => val position = Origin(e.line, e.startPosition) throw new ParseException(Option(command), e.message, position, position) }根据上面的代码可知函数体中的parser为HoodieSqlCommonParser,这里的HoodieSqlCommonParser为HoodieSqlCommon.g4编译生成的,根据在参考文章中学习Spark SQL源码时可知parser.singleStatement()的核心代码是调用方法statement,而statement根据我的理解对应g4文件中的statement,根据g4文件中定义的语句匹配我们传入的sql语句,如果匹配上返回对应的LogicalPlan,匹配不成功则返回nullpublic final SingleStatementContext singleStatement() throws RecognitionException { SingleStatementContext _localctx = new SingleStatementContext(_ctx, getState()); enterRule(_localctx, 0, RULE_singleStatement); int _la; try { enterOuterAlt(_localctx, 1); setState(40); statement(); ...... public final StatementContext statement() throws RecognitionException { StatementContext _localctx = new StatementContext(_ctx, getState()); enterRule(_localctx, 2, RULE_statement); int _la; try { int _alt; setState(124); _errHandler.sync(this); switch ( getInterpreter().adaptivePredict(_input,12,_ctx) ) { case 1: ....而HoodieSqlCommon.g4文件中的statement没有对应create table相关的语法:singleStatement : statement ';'* EOF statement : compactionStatement #compactionCommand | CALL multipartIdentifier '(' (callArgument (',' callArgument)*)? ')' #call | CREATE INDEX (IF NOT EXISTS)? identifier ON TABLE? tableIdentifier (USING indexType=identifier)? LEFT_PAREN columns=multipartIdentifierPropertyList RIGHT_PAREN (OPTIONS indexOptions=propertyList)? #createIndex | DROP INDEX (IF EXISTS)? identifier ON TABLE? tableIdentifier #dropIndex | SHOW INDEXES (FROM | IN) TABLE? tableIdentifier #showIndexes | REFRESH INDEX identifier ON TABLE? tableIdentifier #refreshIndex | .*? 
#passThrough ;可以看出HoodieSqlCommonParser的singleStatement的实现也是和g4文件中的定义一致所以在这里parser.singleStatement()返回null,那么builder.visit(parser.singleStatement())模式匹配会走到sparkExtendedParser.parsePlan(sqlText)sparkExtendedParserprivate lazy val sparkExtendedParser = sparkAdapter.createExtendedSparkParser .map(_(session, delegate)).getOrElse(delegate) lazy val sparkAdapter: SparkAdapter = { val adapterClass = if (HoodieSparkUtils.isSpark3_2) { "org.apache.spark.sql.adapter.Spark3_2Adapter" } else if (HoodieSparkUtils.isSpark3_0 || HoodieSparkUtils.isSpark3_1) { "org.apache.spark.sql.adapter.Spark3_1Adapter" } else { "org.apache.spark.sql.adapter.Spark2Adapter" getClass.getClassLoader.loadClass(adapterClass) .newInstance().asInstanceOf[SparkAdapter] }Spark2Adapter.createExtendedSparkParseroverride def createExtendedSparkParser: Option[(SparkSession, ParserInterface) => ParserInterface] = { Some( (spark: SparkSession, delegate: ParserInterface) => new HoodieSpark2ExtendedSqlParser(spark, delegate) }HoodieSpark2ExtendedSqlParser.parsePlan override def parsePlan(sqlText: String): LogicalPlan = parse(sqlText) { parser => builder.visit(parser.singleStatement()) match { case plan: LogicalPlan => plan case _=> delegate.parsePlan(sqlText) }这里的parsePlan和上面的差不多,还是要先看parse方法protected def parse[T](command: String)(toResult: HoodieSqlBaseParser => T): T = { logDebug(s"Parsing command: $command") val lexer = new HoodieSqlBaseLexer(new UpperCaseCharStream(CharStreams.fromString(command))) lexer.removeErrorListeners() lexer.addErrorListener(ParseErrorListener) lexer.legacy_setops_precedence_enbled = conf.setOpsPrecedenceEnforced val tokenStream = new CommonTokenStream(lexer) val parser = new HoodieSqlBaseParser(tokenStream) parser.addParseListener(PostProcessor) parser.removeErrorListeners() parser.addErrorListener(ParseErrorListener) parser.legacy_setops_precedence_enbled = conf.setOpsPrecedenceEnforced try { try { // first, try parsing with potentially faster SLL mode parser.getInterpreter.setPredictionMode(PredictionMode.SLL) toResult(parser) catch { case e: ParseCancellationException => // if we fail, parse with LL mode tokenStream.seek(0) // rewind input stream parser.reset() // Try Again. parser.getInterpreter.setPredictionMode(PredictionMode.LL) toResult(parser) catch { case e: ParseException if e.command.isDefined => throw e case e: ParseException => throw e.withCommand(command) case e: AnalysisException => val position = Origin(e.line, e.startPosition) throw new ParseException(Option(command), e.message, position, position) }这里的parser为HoodieSqlBaseParser,对应HoodieSqlBase.g4singleStatement : statement EOF statement : mergeInto #mergeIntoTable | updateTableStmt #updateTable | deleteTableStmt #deleteTable | .*? 
#passThrough ;同样的逻辑这里的statement也没有create table,同样返回null,调用delegate.parsePlan(sqlText),这里的delegate由最开始的extensions.buildParser传进来的extensions.buildParser(session, new SparkSqlParser(conf)) 所以这里的delegate为SparkSqlParserSparkSqlParserSparkSqlParser为Spark的源码,继承了抽象类AbstractSqlParser,本身并没有实现parsePlan方法,所以需要看一下父类AbstractSqlParser的parsePlan方法override def parsePlan(sqlText: String): LogicalPlan = parse(sqlText) { parser => astBuilder.visitSingleStatement(parser.singleStatement()) match { case plan: LogicalPlan => plan case _ => val position = Origin(None, None) throw new ParseException(Option(sqlText), "Unsupported SQL statement", position, position) }同样的先看parse方法protected def parse[T](command: String)(toResult: SqlBaseParser => T): T = { logDebug(s"Parsing command: $command") val lexer = new SqlBaseLexer(new UpperCaseCharStream(CharStreams.fromString(command))) lexer.removeErrorListeners() lexer.addErrorListener(ParseErrorListener) lexer.legacy_setops_precedence_enbled = SQLConf.get.setOpsPrecedenceEnforced val tokenStream = new CommonTokenStream(lexer) val parser = new SqlBaseParser(tokenStream) parser.addParseListener(PostProcessor) parser.removeErrorListeners() parser.addErrorListener(ParseErrorListener) parser.legacy_setops_precedence_enbled = SQLConf.get.setOpsPrecedenceEnforced try { try { // first, try parsing with potentially faster SLL mode parser.getInterpreter.setPredictionMode(PredictionMode.SLL) toResult(parser) catch { case e: ParseCancellationException => // if we fail, parse with LL mode tokenStream.seek(0) // rewind input stream parser.reset() // Try Again. parser.getInterpreter.setPredictionMode(PredictionMode.LL) toResult(parser) catch { case e: ParseException if e.command.isDefined => throw e case e: ParseException => throw e.withCommand(command) case e: AnalysisException => val position = Origin(e.line, e.startPosition) throw new ParseException(Option(command), e.message, position, position) }这里的parser为SqlBaseParser,它是Spark源码里的,同样的Spark源码里也有一个SqlBase.g4,其实Hudi的SqlBase.g4文件是拷贝自Spark源码里的,对比两个文件的内容和从Hudi的SqlBase.g4里的一行注释就可以知晓:// The parser file is forked from spark 2.4.5's SqlBase.g4. 我们在SqlBase.g4里查看singleStatement和statement的定义singleStatement : statement EOF statement : query #statementDefault | USE db=identifier #use | CREATE DATABASE (IF NOT EXISTS)? identifier (COMMENT comment=STRING)? locationSpec? (WITH DBPROPERTIES tablePropertyList)? #createDatabase | ALTER DATABASE identifier SET DBPROPERTIES tablePropertyList #setDatabaseProperties | DROP DATABASE (IF EXISTS)? identifier (RESTRICT | CASCADE)? #dropDatabase | createTableHeader ('(' colTypeList ')')? tableProvider ((OPTIONS options=tablePropertyList) | (PARTITIONED BY partitionColumnNames=identifierList) | bucketSpec | locationSpec | (COMMENT comment=STRING) | (TBLPROPERTIES tableProps=tablePropertyList))* (AS? query)? #createTable | createTableHeader ('(' columns=colTypeList ')')? ((COMMENT comment=STRING) | (PARTITIONED BY '(' partitionColumns=colTypeList ')') | bucketSpec | skewSpec | rowFormat | createFileFormat | locationSpec | (TBLPROPERTIES tableProps=tablePropertyList))* (AS? query)? #createHiveTable ...... 
tableProvider : USING qualifiedName ;statement内容比较多,只截取其中一部分,可以看到statement有两个和sql create table相关的,一个是#createTable另一个是#createHiveTable,他俩的区别为#createTable多了一个tableProvider,tableProvider 包含 关键字USING,我们的sql里有using hudi,所以这里的statement应该匹配为#createTable,具体看一下它的部分源码singleStatement和上面的差不多,核心逻辑还是statement()public final SingleStatementContext singleStatement() throws RecognitionException { SingleStatementContext _localctx = new SingleStatementContext(_ctx, getState()); enterRule(_localctx, 0, RULE_singleStatement); try { enterOuterAlt(_localctx, 1); setState(202); statement(); setState(203); match(EOF); catch (RecognitionException re) { _localctx.exception = re; _errHandler.reportError(this, re); _errHandler.recover(this, re); finally { exitRule(); return _localctx; }statementpublic final StatementContext statement() throws RecognitionException { StatementContext _localctx = new StatementContext(_ctx, getState()); enterRule(_localctx, 12, RULE_statement); int _la; try { int _alt; setState(826); _errHandler.sync(this); switch ( getInterpreter().adaptivePredict(_input,95,_ctx) ) { case 1: _localctx = new StatementDefaultContext(_localctx); ...... case 6: _localctx = new CreateTableContext(_localctx); enterOuterAlt(_localctx, 6); setState(260); createTableHeader(); setState(265); _errHandler.sync(this); _la = _input.LA(1); if (_la==T__0) { setState(261); match(T__0); setState(262); colTypeList(); setState(263); match(T__1); setState(267); tableProvider(); setState(281); _errHandler.sync(this); _la = _input.LA(1); while (_la==COMMENT || ((((_la - 183)) & ~0x3f) == 0 && ((1L << (_la - 183)) & ((1L << (OPTIONS - 183)) | (1L << (TBLPROPERTIES - 183)) | (1L << (LOCATION - 183)) | (1L << (CLUSTERED - 183)) | (1L << (PARTITIONED - 183)))) != 0)) { setState(279); _errHandler.sync(this); switch (_input.LA(1)) { case OPTIONS: setState(268); match(OPTIONS); setState(269); ((CreateTableContext)_localctx).options = tablePropertyList(); break; case PARTITIONED: setState(270); match(PARTITIONED); setState(271); match(BY); setState(272); ((CreateTableContext)_localctx).partitionColumnNames = identifierList(); break; case CLUSTERED: setState(273); bucketSpec(); break; case LOCATION: setState(274); locationSpec(); break; case COMMENT: setState(275); match(COMMENT); setState(276); ((CreateTableContext)_localctx).comment = match(STRING); break; case TBLPROPERTIES: setState(277); match(TBLPROPERTIES); setState(278); ((CreateTableContext)_localctx).tableProps = tablePropertyList(); break; default: throw new NoViableAltException(this); setState(283); _errHandler.sync(this); _la = _input.LA(1); setState(288); _errHandler.sync(this); _la = _input.LA(1); if ((((_la) & ~0x3f) == 0 && ((1L << _la) & ((1L << T__0) | (1L << SELECT) | (1L << FROM) | (1L << AS))) != 0) || ((((_la - 77)) & ~0x3f) == 0 && ((1L << (_la - 77)) & ((1L << (WITH - 77)) | (1L << (VALUES - 77)) | (1L << (TABLE - 77)) | (1L << (INSERT - 77)) | (1L << (MAP - 77)))) != 0) || _la==REDUCE) { setState(285); _errHandler.sync(this); _la = _input.LA(1); if (_la==AS) { setState(284); match(AS); setState(287); query(); break; case 7: _localctx = new CreateHiveTableContext(_localctx); enterOuterAlt(_localctx, 7); setState(290); createTableHeader(); setState(295);这里应该匹配到 6 :CreateTableContext,然后去匹配g4文件中定义的 createTableHeader、tableProvider、PARTITIONED、TBLPROPERTIES等,主要是匹配sql关键字解析语法解释一下为啥会匹配到6 
:CreateTableContext,我没有专门去学习ANTLR的原理,根据我的理解,statement定义了很多语法,其中#createTable是第6个,所以会匹配到6,至于CreateTableContext,是将#createTable首字母大写并在后面加个Context,里面的其他内容也是和g4文件中的定义一一对应的,可以看一下其他的匹配都是这个规律parser.singleStatement() 其实返回的是SingleStatementContext,接下来看一下visitSingleStatement,先看一下参数ctx.statementoverride def visitSingleStatement(ctx: SingleStatementContext): LogicalPlan = withOrigin(ctx) { visit(ctx.statement).asInstanceOf[LogicalPlan] public static class SingleStatementContext extends ParserRuleContext { public StatementContext statement() { return getRuleContext(StatementContext.class,0); ...... // 这里的参数i等于0 public <T extends ParserRuleContext> T getRuleContext(Class<? extends T> ctxType, int i) { return getChild(ctxType, i); public <T extends ParseTree> T getChild(Class<? extends T> ctxType, int i) { // 如果Statement的子节点为null或者i>0子节点的大小 if ( children==null || i < 0 || i >= children.size() ) { return null; int j = -1; // what element have we found with ctxType? // 遍历children for (ParseTree o : children) { // 如果o是StatementContext的子类 // 这里的第一个子节点为CreateTableContext,它是StatementContext if ( ctxType.isInstance(o) ) { j++; // j=0 if ( j == i ) { //相等返回 CreateTableContext return ctxType.cast(o); return null; }根据上面的分析和我在代码中的注释我们知道这里的ctx.statement返回的是CreateTableContext,再看一下visitpublic T visit(ParseTree tree) { return tree.accept(this); }这里的tree为CreateTableContext,所以实际调用CreateTableContext的accept方法public static class CreateTableContext extends StatementContext { public <T> T accept(ParseTreeVisitor<? extends T> visitor) { if ( visitor instanceof SqlBaseVisitor ) return ((SqlBaseVisitor<? extends T>)visitor).visitCreateTable(this); else return visitor.visitChildren(this); }accept又调用visitCreateTable,这里的visitor为前面讲到的SparkSqlParser里面的SparkSqlAstBuilder ,它是AstBuilder的子类,AstBuilder是SqlBaseBaseVisitor的子类class SparkSqlAstBuilder extends AstBuilder class AstBuilder extends SqlBaseBaseVisitor[AnyRef] with SQLConfHelper with Logging override def visitCreateTable(ctx: CreateTableContext): LogicalPlan = withOrigin(ctx) { val (table, temp, ifNotExists, external) = visitCreateTableHeader(ctx.createTableHeader) if (external) { operationNotAllowed("CREATE EXTERNAL TABLE ... 
USING", ctx) checkDuplicateClauses(ctx.TBLPROPERTIES, "TBLPROPERTIES", ctx) checkDuplicateClauses(ctx.OPTIONS, "OPTIONS", ctx) checkDuplicateClauses(ctx.PARTITIONED, "PARTITIONED BY", ctx) checkDuplicateClauses(ctx.COMMENT, "COMMENT", ctx) checkDuplicateClauses(ctx.bucketSpec(), "CLUSTERED BY", ctx) checkDuplicateClauses(ctx.locationSpec, "LOCATION", ctx) val options = Option(ctx.options).map(visitPropertyKeyValues).getOrElse(Map.empty) // provider为hudi val provider = ctx.tableProvider.qualifiedName.getText val schema = Option(ctx.colTypeList()).map(createSchema) val partitionColumnNames = Option(ctx.partitionColumnNames) .map(visitIdentifierList(_).toArray) .getOrElse(Array.empty[String]) val properties = Option(ctx.tableProps).map(visitPropertyKeyValues).getOrElse(Map.empty) val bucketSpec = ctx.bucketSpec().asScala.headOption.map(visitBucketSpec) val location = ctx.locationSpec.asScala.headOption.map(visitLocationSpec) val storage = DataSource.buildStorageFormatFromOptions(options) if (location.isDefined && storage.locationUri.isDefined) { throw new ParseException( "LOCATION and 'path' in OPTIONS are both used to indicate the custom table path, " + "you can only specify one of them.", ctx) val customLocation = storage.locationUri.orElse(location.map(CatalogUtils.stringToURI)) val tableType = if (customLocation.isDefined) { CatalogTableType.EXTERNAL } else { CatalogTableType.MANAGED val tableDesc = CatalogTable( identifier = table, tableType = tableType, storage = storage.copy(locationUri = customLocation), schema = schema.getOrElse(new StructType), provider = Some(provider), partitionColumnNames = partitionColumnNames, bucketSpec = bucketSpec, properties = properties, comment = Option(ctx.comment).map(string)) // Determine the storage mode. val mode = if (ifNotExists) SaveMode.Ignore else SaveMode.ErrorIfExists if (ctx.query != null) { // Get the backing query. val query = plan(ctx.query) if (temp) { operationNotAllowed("CREATE TEMPORARY TABLE ... USING ... AS query", ctx) // Don't allow explicit specification of schema for CTAS if (schema.nonEmpty) { operationNotAllowed( "Schema may not be specified in a Create Table As Select (CTAS) statement", CreateTable(tableDesc, mode, Some(query)) } else { if (temp) { if (ifNotExists) { operationNotAllowed("CREATE TEMPORARY TABLE IF NOT EXISTS", ctx) logWarning(s"CREATE TEMPORARY TABLE ... USING ... is deprecated, please use " + "CREATE TEMPORARY VIEW ... USING ... instead") // Unlike CREATE TEMPORARY VIEW USING, CREATE TEMPORARY TABLE USING does not support // IF NOT EXISTS. Users are not allowed to replace the existing temp table. 
CreateTempViewUsing(table, schema, replace = false, global = false, provider, options)
} else {
  CreateTable(tableDesc, mode, None)
}

具体细节就不多做解释,可以自己看。主要的点是 provider 为 hudi(这个后面会用到),它是从前面提到的 tableProvider 解析得到的;返回结果为 CreateTable,它是 LogicalPlan 的子类

case class CreateTable(
    tableDesc: CatalogTable,
    mode: SaveMode,
    query: Option[LogicalPlan]) extends LogicalPlan {
  assert(tableDesc.provider.isDefined, "The table to be created must have a provider.")
  if (query.isEmpty) {
    assert(
      mode == SaveMode.ErrorIfExists || mode == SaveMode.Ignore,
      "create table without data insertion can only use ErrorIfExists or Ignore as SaveMode.")
  }

  override def children: Seq[LogicalPlan] = query.toSeq
  override def output: Seq[Attribute] = Seq.empty
  override lazy val resolved: Boolean = false
}

这样parsePlan最终返回的结果为 CreateTable,parsing阶段完成,接下来到了analysis阶段

analysis

def sql(sqlText: String): DataFrame = {
  Dataset.ofRows(self, sessionState.sqlParser.parsePlan(sqlText))
}

def ofRows(sparkSession: SparkSession, logicalPlan: LogicalPlan): DataFrame = {
  val qe = sparkSession.sessionState.executePlan(logicalPlan)
  qe.assertAnalyzed()
  new Dataset[Row](sparkSession, qe, RowEncoder(qe.analyzed.schema))
}

analysis阶段的入口在qe.assertAnalyzed()

def assertAnalyzed(): Unit = analyzed

lazy val analyzed: LogicalPlan = {
  SparkSession.setActiveSession(sparkSession)
  sparkSession.sessionState.analyzer.executeAndCheck(logical)
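这里给一个最小示意,方便自己验证 parsing 阶段的结论:假设在 Spark 2.4 + Hudi 的环境里运行,直接调用 sessionState.sqlParser.parsePlan,确认 create table ... using hudi 解析出来的就是上面的 CreateTable。其中 ParsePlanDemo、test_hudi_table 等名字都是示例自拟的,sessionState 属于标注为 Unstable 的 API,不同 Spark 版本行为可能略有差异

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.execution.datasources.CreateTable

object ParsePlanDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[1]")
      .appName("parse-plan-demo")
      // 假设 classpath 上有 Hudi 的 spark bundle,这样走的就是本文分析的 Hudi parser 再委托给 SparkSqlParser 的流程
      .config("spark.sql.extensions", "org.apache.spark.sql.hudi.HoodieSparkSessionExtension")
      .getOrCreate()

    // 示例建表语句,表名只是随意取的
    val sqlText =
      """create table test_hudi_table (id int, name string, dt string)
        |using hudi
        |partitioned by (dt)""".stripMargin

    // parsePlan 只做 parsing,不会真正建表
    val plan = spark.sessionState.sqlParser.parsePlan(sqlText)
    plan match {
      case CreateTable(tableDesc, mode, query) =>
        println(tableDesc.provider) // Some(hudi),即 tableProvider 解析出来的 provider
        println(mode)               // ErrorIfExists,没写 if not exists 时的默认 SaveMode
        println(query.isEmpty)      // true,说明不是 CTAS
      case other =>
        println("unexpected plan: " + other.getClass.getName)
    }
    spark.stop()
  }
}

打印出来的几个值可以和上面 visitCreateTable 的逻辑一一对应起来。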
前言记录Spark3.1.2+Kyuubi1.5.2从源码打包到部署配置过程,虽然之前总结过一篇Kyuubi配置的文章:Kyuubi 安装配置总结,但是这次和之前还是有不同的:1、Kyuubi版本升级 当时最新版本1.4.0,现在要升级到最新版1.5.2,并且1.4.0打包的时候很快完成没有任何问题,1.5.2打包时比较慢,且遇到了比较棘手的问题,这里记录一下解决过程2、当时没有配置Spark的权限,虽然之前总结了一篇利用Submarin集成Spark-Ranger,但是这次用的不是Submarin,用的是kyuubi自带的kyuubi-spark-authz插件,而且解决了当时没有解决的问题,正好更新一下,之所以不用submarin,是因为submarin在新版本已经被去掉了,不再维护,而kyuubi-spark-authz和submarin的作者是同一个人,且kyuubi一直在维护,即使有问题也可以在社区提问题解决,这样还少学习配置一个组件,减少了维护成本配置先讲完整的配置,再讲如何编译打包Spark3.1.2前提:我的环境上有一个ambari自带的Spark2.4.5,路径/usr/hdp/3.1.0.0-78/spark2/解压Spark3.1.2的tgz包,将其放到路径 /opt/hdp(新建临时目录),并改名为spark3,然后拷贝之前spark2的配置到spark3的配置目录中,已经存在的模板,输入no不用覆盖cp /usr/hdp/3.1.0.0-78/spark2/conf/* /opt/hdp/spark3/conf/ 如果是在windows上自己打的包,传到服务器Linux上可能存在编码不一致问题,执行一下命令:yum install -y dos2unix dos2unix /opt/hdp/spark3/bin/* 这里因为有一个Spark2版本了,所以不配置环境变量,直接使用全路径验证测试验证1、验证测试用例SparkPi/opt/hdp/spark3/bin/spark-submit --master yarn --deploy-mode client --class org.apache.spark.examples.SparkPi --principal spark/indata-10-110-105-164.indata.com@INDATA.COM --keytab /etc/security/keytabs/spark.service.keytab /opt/hdp/spark3/examples/jars/spark-examples_2.12-3.1.2.jar正确输出 Pi is roughly 3.1389356946784734 2、验证spark-shell,登录成功,可以成功的读取hive表中数据即可/opt/hdp/spark3/bin/spark-shell --principal spark/indata-10-110-105-164.indata.com@INDATA.COM --keytab /etc/security/keytabs/spark.service.keytab spark.sql("select * from test").show3、验证Spark Thrfit server/opt/hdp/spark3/bin/spark-submit --master yarn --deploy-mode client --executor-memory 2G --num-executors 3 --executor-cores 2 --driver-memory 3G --driver-cores 2 --class org.apache.spark.sql.hive.thriftserver.HiveThriftServer2 --name spark3-thrift-server --hiveconf hive.server2.thrift.http.port=20003 --principal spark/indata-10-110-105-164.indata.com@INDATA.COM --keytab /etc/security/keytabs/spark.service.keytab /opt/hdp/spark3/bin/beeline -u "jdbc:hive2://10.110.105.164:20003/default;principal=HTTP/indata-10-110-105-164.indata.com@INDATA.COM?hive.server2.transport.mode=http;hive.server2.thrift.http.path=cliservice" select * from test简单的验证成功后,就可以认为我们编译安装的Spark3没有问题kyuubi-spark-authzkyuubi-spark-authz,在最新发布的1.5.2版本并没有,我们需要拉取master分支(1.6.0-SNAPSHOT)的代码,自己打包,这里ranger版本为1.2.0mvn clean package -pl :kyuubi-spark-authz_2.12 -Dspark.version=3.1.2 -Dranger.version=1.2.0 -DskipTests打包完成后,需要将 /incubator-kyuubi/extensions/spark/kyuubi-spark-authz/target/scala-2.12/jars/* 和 /incubator-kyuubi/extensions/spark/kyuubi-spark-authz/target/kyuubi-spark-authz_2.12-1.6.0-SNAPSHOT.jar两个路径下的包拷贝到我们上面安装的spark3/jars目中,具体为/opt/hdp/spark3/jars/配置RANGER策略和之前的文章利用Submarin集成Spark-Ranger一样,添加sparkServer策略,用户名密码随便填写,需要注意的是下面三个key对应的value值需要填写*tag.download.auth.users policy.download.auth.users policy.grantrevoke.auth.users 之前讲的是必须填spark,这样就造成了为啥spark用户没有问题,而其他用户有问题,然后保留两个all-database,table,column,一个对应spark用户,只有hudi库权限,一个hive用户,只有default库权限配置SPARK根据官网文档,在$SPARK_HOME/conf创建下面两个配置文件ranger-spark-security.xml<configuration> <property> <name>ranger.plugin.spark.policy.rest.url</name> <value>http://yourIp:16080</value> </property> <property> <name>ranger.plugin.spark.service.name</name> <value>sparkServer</value> </property> <property> <name>ranger.plugin.spark.policy.cache.dir</name> <value>/etc/ranger/sparkServer/policycache</value> </property> <property> <name>ranger.plugin.spark.policy.pollIntervalMs</name> <value>30000</value> </property> <property> <name>ranger.plugin.spark.policy.source.impl</name> <value>org.apache.ranger.admin.client.RangerAdminRESTClient</value> </property> </configuration>ranger-spark-audit.xml<configuration> <property> 
<name>xasecure.audit.is.enabled</name> <value>true</value> </property> <property> <name>xasecure.audit.destination.db</name> <value>false</value> </property> <property> <name>xasecure.audit.destination.db.jdbc.driver</name> <value>com.mysql.jdbc.Driver</value> </property> <property> <name>xasecure.audit.destination.db.jdbc.url</name> <value>jdbc:mysql://yourIp:3306/ranger</value> </property> <property> <name>xasecure.audit.destination.db.password</name> <value>ranger</value> </property> <property> <name>xasecure.audit.destination.db.user</name> <value>ranger</value> </property> <property> <name>xasecure.audit.jaas.Client.option.keyTab</name> <value>/etc/security/keytabs/hive.service.keytab</value> </property> <property> <name>xasecure.audit.jaas.Client.option.principal</name> <value>hive/_HOST@INDATA.COM</value> </property> </configuration>至于具体的ip、端口、用户名、密码信息可以在之前已经配置好的hive-ranger插件配置文件里查看,也可以在ambari界面搜索需要注意的是16080端口对应的地址可能填ip有问题,需要填写ip对应的域名hostname,具体取决于自己的环境配置验证验证spark用户/opt/hdp/spark3/bin/spark-sql --master yarn --deploy-mode client --conf 'spark.sql.extensions=org.apache.kyuubi.plugin.spark.authz.ranger.RangerSparkExtension' --principal spark/indata-10-110-105-164.indata.com@INDATA.COM --keytab /etc/security/keytabs/spark.service.keytab没有default权限:use default; Permission denied: user [spark] does not have [_any] privilege on [default] hudi库可以正常读取use hudi; show tables;验证hive用户切换hive认证用户 /opt/hdp/spark3/bin/spark-sql --master yarn --deploy-mode client --conf 'spark.sql.extensions=org.apache.kyuubi.plugin.spark.authz.ranger.RangerSparkExtension' --principal hive/indata-10-110-105-164.indata.com@INDATA.COM --keytab /etc/security/keytabs/hive.service.keytabdefault库可以正常读取:use default; show tables; 没有Hudi库权限use hudi; Permission denied: user [hive] does not have [_any] privilege on [hudi] 支持HUDI首先将对应版本的Hudi包,拷贝到/opt/hdp/spark3/jars/,这里用的版本为:hudi-spark3.1.2-bundle_2.12-0.10.1.jar/opt/hdp/spark3/bin/spark-sql --master yarn --deploy-mode client --conf 'spark.sql.extensions=org.apache.kyuubi.plugin.spark.authz.ranger.RangerSparkExtension,org.apache.spark.sql.hudi.HoodieSparkSessionExtension' --principal spark/indata-10-110-105-164.indata.com@INDATA.COM --keytab /etc/security/keytabs/spark.service.keytab use hudi; create table test_hudi_table_kyuubi ( id int, name string, price double, ts long, dt string ) using hudi partitioned by (dt) options ( primaryKey = 'id', preCombineField = 'ts', type = 'cow' );kyuubi1.5.2目前最新版为1.5.2,但是默认spark版本为spark3.2.1,和我们的spark3.1.2大版本不一样,为了避免潜在问题,我们选择自己打包,打包流程在后面的部分,打包完成后,将其上传到服务器并解压到/opt/hdp/kyuubi-1.5.2/配置KYUUBIdos2unix kyuubi-1.5.2/bin/*kyuubi-env.sh和之前的文章一样的配置export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.181-7.b13.el7.x86_64 export SPARK_HOME=/opt/hdp/spark3 export HADOOP_CONF_DIR=/usr/hdp/3.1.0.0-78/hadoop/etc/hadoop export KYUUBI_JAVA_OPTS="-Xmx10g -XX:+UnlockDiagnosticVMOptions -XX:ParGCCardsPerStrideChunk=4096 -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSConcurrentMTEnabled -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -XX:+CMSClassUnloadingEnabled -XX:+CMSParallelRemarkEnabled -XX:+UseCondCardMark -XX:MaxDirectMemorySize=1024m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=./logs -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintTenuringDistribution -Xloggc:./logs/kyuubi-server-gc-%t.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=5M -XX:NewRatio=3 -XX:MetaspaceSize=512m"kyuubi-defaults.conf和之前1.4.0版本不同的是加了一个配置kyuubi.ha.zookeeper.publish.configs 
true,不配置的话beeline连接HA的时候有问题,另外还改了端口号和zookeeper的namespace,因为一个环境上有多个版本的kyuubi,避免冲突kyuubi.frontend.bind.host indata-10-110-105-164.indata.com kyuubi.frontend.bind.port 10099 #kerberos kyuubi.authentication KERBEROS kyuubi.kinit.principal hive/indata-10-110-105-164.indata.com@INDATA.COM kyuubi.kinit.keytab /etc/security/keytabs/hive.service.keytab kyuubi.ha.enabled true kyuubi.ha.zookeeper.quorum indata-10-110-105-162.indata.com:2181,indata-10-110-105-163.indata.com:2181,indata-10-110-105-164.indata.com:2181 kyuubi.ha.zookeeper.client.port 2181 kyuubi.ha.zookeeper.session.timeout 600000 kyuubi.ha.zookeeper.namespace=kyuubi1.5.2 kyuubi.session.engine.initialize.timeout=180000 kyuubi.ha.zookeeper.publish.configs true验证启动kyuubibin/kyuubi start 连接HA/opt/hdp/spark3/bin/beeline !connect jdbc:hive2://indata-10-110-105-162.indata.com,indata-10-110-105-163.indata.com,indata-10-110-105-164.indata.com/default;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=kyuubi1.5.2;hive.server2.proxy.user=spark show tables;集成KYUUBI-SPARK-AUTHZ上面讲了用原生的spark-sql集成kyuubi-spark-authz,那么通过kyuubi怎么集成呢?有两种方法,一种是在jdbc连接串里,通过spark.sql.extensions扩展org.apache.kyuubi.plugin.spark.authz.ranger.RangerSparkExtension!connect jdbc:hive2://indata-10-110-105-162.indata.com,indata-10-110-105-163.indata.com,indata-10-110-105-164.indata.com/default;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=kyuubi1.5.2;hive.server2.proxy.user=spark#spark.sql.extensions=org.apache.kyuubi.plugin.spark.authz.ranger.RangerSparkExtension另一种是通过配置spark的默认值,既然我们要用ranger来控制权限,那么所有的连接应该都要控制,所以通过修改spark默认值来统一配置比较合理vi /opt/hdp/spark3/conf/spark-defaults.conf spark.sql.extensions org.apache.kyuubi.plugin.spark.authz.ranger.RangerSparkExtension 需要将之前启动的提交到yarn的spark程序kill掉,然后重新连接验证同时支持HUDI和原生spark-sql类似,通过逗号分隔扩展多个,但是不能一个通过配置文件配置,一个通过jdbc参数配置,经验证,如果在配置文件里配置了RangerSparkExtension,而通过jdbc参数配置了HoodieSparkSessionExtension,那么jdbc参数配置的会覆盖掉配置文件配置的,所以只能配置一个,建议在配置文件里同时扩展两个spark-defaults.conf spark.sql.extensions org.apache.kyuubi.plugin.spark.authz.ranger.RangerSparkExtension,org.apache.spark.sql.hudi.HoodieSparkSessionExtension如果有和默认配置不一样的需求的话,可以在jdbc参数里修改配置覆盖掉默认配置,但是在项目上给用户用的话,这个参数不应该暴露给用户使用,而是应该规定死的优化SPARK启动时间通过连接Kyuubi第一次启动Spark程序时,会比较慢,我们可以通过配置spark.yarn.archive或者spark.yarn.jars(二选一),来优化启动时间创建archive(spark.yarn.archive)jar cv0f spark-libs.jar -C $SPARK_HOME/jars/ . 
上传jar包到hdfs

spark.yarn.archive:
hdfs dfs -put ./spark-libs.jar /hdp/apps/3.1.0.0-78/spark3/
spark.yarn.jars:
hdfs dfs -put $SPARK_HOME/jars /hdp/apps/3.1.0.0-78/spark3/

配置 spark-defaults.conf,添加参数:
spark.yarn.archive hdfs://cluster1/hdp/apps/3.1.0.0-78/spark3/spark-libs.jar
#spark.yarn.jars hdfs://cluster1/hdp/apps/3.1.0.0-78/spark3/jars/*.jar

动态资源申请

spark-defaults.conf
spark.dynamicAllocation.enabled true
spark.dynamicAllocation.initialExecutors 0
spark.dynamicAllocation.maxExecutors 10
spark.dynamicAllocation.minExecutors 0
spark.shuffle.service.enabled true

HA

同样的,在另外一台机器配置好Spark3.1.2+Kyuubi1.5.2+kyuubi-spark-authz,其实将这台机器的文件夹拷贝过去,改一下ip,再启动kyuubi即可。最后和之前的文章一样用beeline验证一下HA的效果,看是否可以通过一个连接地址随机分到两个kyuubi的地址,也可以在zookeeper上查看zooKeeperNamespace kyuubi1.5.2下是否有两个ip地址注册成功。

编译打包Spark3.1.2

git clone Spark源码,切换到3.1.2 tag
文档地址: https://spark.apache.org/docs/latest/building-spark.html#specifying-the-hadoop-version-and-enabling-yarn
## 编译打包
./build/mvn -Pyarn -Phive -Phive-thriftserver -DskipTests clean package
## 打分布式tgz安装包
./dev/make-distribution.sh --tgz -Phive -Phive-thriftserver -Pyarn -Dhadoop.version=3.1.1 -DskipTests

打出来的包为spark-3.1.2-bin-3.1.1.tgz,因为我们环境的hadoop版本为3.1.1,所以这里指定了hadoop的版本,其他的都是自己常用的,比如yarn等。spark还支持很多其他选项,比如Python/R等,如果所有的都加上打包,会比较麻烦也比较慢,不同的项目会有不同的需求,所以直接用官网提供的spark-3.1.2-bin-hadoop3.2.tgz也可以,版本差别不大,下载地址 https://archive.apache.org/dist/spark

编译打包 Kyuubi 1.5.2

切到 tag:v1.5.2-incubating,pom中spark-3.1 对应的spark版本为spark3.1.3,我们将其改为spark3.1.2

打分布式tgz安装包,根据官网文档命令:./build/dist --tgz -Pspark-3.1
但是在打第二个project Kyuubi Project Common时有异常,异常为:
[ERROR] ## Exception when compiling 78 sources to D:\workspace\learning\incubator-kyuubi\kyuubi-common\target\scala-2.12\classes java.lang.NoSuchMethodError: org.fusesource.jansi.AnsiConsole.wrapOutputStream(Ljava/io/OutputStream;)Ljava/io/OutputStream; jline.AnsiWindowsTerminal.detectAnsiSupport(AnsiWindowsTerminal.java:57) ......
[ERROR] Failed to execute goal net.alchim31.maven:scala-maven-plugin:4.3.0:compile (scala-compile-first) on project kyuubi-common_2.12: Execution scala-compile-first of goal net.alchim31.maven:scala-maven-plugin:4.3.0:compile failed: An API incompatibility was encountered while executing net.alchim31.maven:scala-maven-plugin:4.3.0:compile: java.lang.NoSuchMethodError: org.fusesource.jansi.AnsiConsole.wrapOutputStream(Ljava/io/OutputStream;)Ljava/io/OutputStream; [ERROR] ----------------------------------------------------- [ERROR] realm = plugin>net.alchim31.maven:scala-maven-plugin:4.3.0 [ERROR] strategy = org.codehaus.plexus.classworlds.strategy.SelfFirstStrategy [ERROR] urls[0] = file:/D:/workspace/m2/repository/net/alchim31/maven/scala-maven-plugin/4.3.0/scala-maven-plugin-4.3.0.jar ......用mvn命令打包一样的错误: mvn clean package -Pspark-3.1 -DskipTests 异常解决过程网上查找资料,根据文章:https://blog.csdn.net/jxlxxxmz/article/details/99624261的第三个方法,添加参数scala:compilemvn clean scala:compile package -Pspark-3.1 -DskipTestsmvn 命令打包是可行的,但是在打tgz包的命令中./build/dist添加这个参数是不行的,那么试着先用mvn命令打完包,把依赖下载下来再打tgz包,看看行不行。开头提到1.5.2版本和之前的1.4.0版本打包不太一样,比较慢,原因是因为在打包过程中默认下载spark和flink,会下载到\incubator-kyuubi\externals\kyuubi-download\target目录中,并且默认从Apache官网https://archive.apache.org/dist下载,所以比较漫长,经过漫长的等待,终于下载完成,结果在倒数第二个project报了异常:[ERROR] [Error] D:\workspace\learning\incubator-kyuubi\dev\kyuubi-extension-spark-common\src\main\scala\org\apache\kyuubi\sql\zorder\ZorderSqlAstBuilderBase.scala:46: not found: type ZorderSqlExtensionsBaseVisitor [ERROR] [Error] D:\workspace\learning\incubator-kyuubi\dev\kyuubi-extension-spark-common\src\main\scala\org\apache\kyuubi\sql\zorder\ZorderSparkSqlExtensionsParserBase.scala:42: not found: type ZorderSqlExtensionsParser [ERROR] [Error] D:\workspace\learning\incubator-kyuubi\dev\kyuubi-extension-spark-common\src\main\scala\org\apache\kyuubi\sql\zorder\ZorderSparkSqlExtensionsParserBase.scala:36: value visit is not a member of org.apache.kyuubi.sql.zorder.ZorderSqlAstBuilderBase [ERROR] [Error] D:\workspace\learning\incubator-kyuubi\dev\kyuubi-extension-spark-common\src\main\scala\org\apache\kyuubi\sql\zorder\ZorderSparkSqlExtensionsParserBase.scala:43: not found: type ZorderSqlExtensionsLexer [ERROR] [Error] D:\workspace\learning\incubator-kyuubi\dev\kyuubi-extension-spark-common\src\main\scala\org\apache\kyuubi\sql\zorder\ZorderSparkSqlExtensionsParserBase.scala:49: not found: type ZorderSqlExtensionsParser [ERROR] [Error] D:\workspace\learning\incubator-kyuubi\dev\kyuubi-extension-spark-common\src\main\scala\org\apache\kyuubi\sql\zorder\ZorderSqlAstBuilderBase.scala:44: object ZorderSqlExtensionsParser is not a member of package org.apache.kyuubi.sql.zorder ......通过搜索源码,确实没有这些类,不过对应的模块的antlr4目录中有一个ZorderSqlExtensions.g4文件,根据经验,应该是这个文件没有成功编译,不了解antlr技术的可以自己网上查询资料了解,那么猜测可能是因为加了scala:compile的原因导致的,于是再去掉这个参数,用最初的命令试一下:mvn clean package -Pspark-3.1 -DskipTests 果然打包成功,然后再尝试用./build/dist --tgz -Pspark-3.1命令打tgz包,结果还是和开始一样的错误,这就很尴尬了,于是查看 ./build/dist脚本源码,发现支持--mvn参数,经过多次尝试,最终用下面这个命令打包成功! 
./build/dist --tgz --mvn /d/program/company/apache-maven-3.6.3/bin/mvn -Pspark-3.1 -DskipTests也就是指定自己本地的maven的路径加上maven的参数即可,另外我在kyuubi群里提问这个问题,pmc告诉我用wsl可以解决,查了一下是适用于Linux的Windows子系统的意思,因为我没用过wsl,毕竟已经打包成功了也就没有再尝试dist命令还支持 参数--flink-provided和 --spark-provided,加上这个参数就不会将spark和flink打到包里了,这样也就不会下载spark和flink导致打包很慢了,但是在最新的1.6.0-SNAPSHOT版本还要下载hive,也比较慢,没有研究怎么解决~ ./build/dist --tgz --flink-provided --spark-provided --mvn /d/program/company/apache-maven-3.6.3/bin/mvn -Pspark-3.1 -DskipTests最终打出的包名也不一样为apache-kyuubi-1.5.2-incubating-bin.tgz,而开始打出来的包名为apache-kyuubi-1.5.2-incubating-bin-spark-3.1.tgz大小分别为221M和763M,官方提供的tgz包也不包含spark和flink的安装包默认打包spark和flink的路径为kyuubi-1.5.2/externals/flink-1.14.3和kyuubi-1.5.2/externals/spark-3.1.2-bin-hadoop3.2,如果不在kyuubi-env.sh里配置SPARK_HOME、FLINK_HOME的话,默认就是这个路径,具体的逻辑可以kyuubi-1.5.2/bin/load-kyuubi-env.sh查看,既然提到flink,顺便说一下,kyuubi是支持flink的,在我们连接kyuubi时默认连接Spark SQL 引擎,若要连接Flink SQL,可以在连接串最后加上参数?kyuubi.engine.type=FLINK_SQL;,就会连接Flink SQL了,当然这里用的默认的Flink路径,可以自己配一下FLINK_HOME,另外应该还可以在kyuubi-defaults.conf 添加kyuubi.engine.type FLINK_SQL修改默认的engine type打包成功更新2022年8月4日更新:Spark3.1.2虽然支持用逗号分隔以同时支持多个sql扩展规则,但是要注意顺序问题我们开始是这样配置的spark.sql.extensions=org.apache.kyuubi.plugin.spark.authz.ranger.RangerSparkExtension,org.apache.spark.sql.hudi.HoodieSparkSessionExtension 这样可以同时支持RangerSparkExtension和HoodieSparkSessionExtension,在控制权限的同时,也可以成功创建Hudi表并可以成功插入Hudi数据,但是在用update和delete时会有问题update: java.lang.UnsupportedOperationException: UPDATE TABLE is not supported temporarilydelete: Error in query: DELETE is only supported with v2 tables 应该还存在其他问题,就不一一列举了解决方案:改变规则顺序,先扩展Hudi,再扩展Ranger,这样就可以在正确控制权限的同时,不影响Hudi Spark SQL的使用spark.sql.extensions=org.apache.spark.sql.hudi.HoodieSparkSessionExtension,org.apache.kyuubi.plugin.spark.authz.ranger.RangerSparkExtension
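针对上面的扩展顺序问题,这里再给一个最小示意(基于本文的环境假设:spark3/jars 下已放好 Hudi 包和 kyuubi-spark-authz 相关包,且 Ranger 策略已配置),演示在代码里构建 SparkSession 时按先 Hudi 后 Ranger 的顺序配置 spark.sql.extensions,表名沿用前面建的 test_hudi_table_kyuubi,AuthzWithHudiDemo 这个类名是示例自拟的

import org.apache.spark.sql.SparkSession

object AuthzWithHudiDemo {
  def main(args: Array[String]): Unit = {
    // 先 HoodieSparkSessionExtension,再 RangerSparkExtension,顺序与 spark-defaults.conf 中保持一致
    val spark = SparkSession.builder()
      .appName("authz-with-hudi")
      .enableHiveSupport()
      .config("spark.sql.extensions",
        "org.apache.spark.sql.hudi.HoodieSparkSessionExtension," +
          "org.apache.kyuubi.plugin.spark.authz.ranger.RangerSparkExtension")
      .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
      .getOrCreate()

    // 权限仍由 Ranger 策略控制,同时 Hudi 的 insert/update/delete 等 SQL 不受影响
    spark.sql("use hudi")
    spark.sql("insert into test_hudi_table_kyuubi values (1, 'hudi', 10, 100, '2021-05-05')")
    spark.sql("update test_hudi_table_kyuubi set name = 'hudi_update' where id = 1")
    spark.sql("select id, name, price, ts, dt from test_hudi_table_kyuubi").show(false)

    spark.stop()
  }
}

如果把两个扩展的顺序反过来,就会出现上面提到的 UPDATE TABLE is not supported temporarily、DELETE is only supported with v2 tables 之类的报错。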
前言上面文章Hive增量查询Hudi表提到Hudi表有读优化视图和实时视图,其实当时并没有完全掌握,所以现在单独学习总结。Hudi官网文档中文称之为视图,其实英文为query types翻译过来为查询类型Query typesHudi 支持下面三种视图Snapshot Queries 快照查询/实时视图 Queries see the latest snapshot of the table as of a given commit or compaction action. In case of merge on read table, it exposes near-real time data(few mins) by merging the base and delta files of the latest file slice on-the-fly. For copy on write table, it provides a drop-in replacement for existing parquet tables, while providing upsert/delete and other write side features. 在此视图上的查询可以查看给定提交或压缩操作时表的最新快照。对于读时合并表(MOR表) 该视图通过动态合并最新文件切片的基本文件(例如parquet)和增量文件(例如avro)来提供近实时数据集(几分钟的延迟)。对于写时复制表(COW表),它提供了现有parquet表的插入式替换,同时提供了插入/删除和其他写侧功能。Incremental Queries 增量查询/增量视图,也就是上篇文章讲的增量查询 Queries only see new data written to the table, since a given commit/compaction. This effectively provides change streams to enable incremental data pipelines. 对该视图的查询只能看到从某个提交/压缩后写入数据集的新数据。该视图有效地提供了更改流,来支持增量数据管道。Read Optimized Queries 读优化查询/读优化视图 : Queries see the latest snapshot of table as of a given commit/compaction action. Exposes only the base/columnar files in latest file slices and guarantees the same columnar query performance compared to a non-hudi columnar table. 在此视图上的查询将查看给定提交或压缩操作中数据集的最新快照。 该视图仅将最新文件切片中的基本/列文件暴露给查询,并保证与非Hudi列式数据集相比,具有相同的列式查询性能。表类型Table TypeSupported Query typesCopy On WriteSnapshot Queries + Incremental QueriesMerge On ReadSnapshot Queries + Incremental Queries + Read Optimized Queries也就是读优化视图只有在MOR表中存在,这点在上篇文章中也提到过,这次会从源码层面分析两种表类型的区别以及如何实现的。另外关于这一点官网中文文档写错了,大家注意别被误导,估计是因为旧版本,且中文文档没有人维护贡献,就没人贡献修改了~,稍后我有时间会尝试提个PR修复一下,错误截图:2022.06.30更新:已提交PR https://github.com/apache/hudi/pull/6008源码简单从源码层面分析同步Hive表时两种表类型的区别,Hudi同步Hive元数据的工具类为HiveSyncTool,如何利用HiveSyncTool同步元数据,先进行一个简单的示例,这里用Spark进行示例,因为Spark有获取hadoopConf的API,代码较少,方便示例,其实纯Java也是可以实现的val basePath = new Path(pathStr) val fs = basePath.getFileSystem(spark.sessionState.newHadoopConf()) val hiveConf: HiveConf = new HiveConf() hiveConf.addResource(fs.getConf) val tableMetaClient = HoodieTableMetaClient.builder.setConf(fs.getConf).setBasePath(pathStr).build val recordKeyFields = tableMetaClient.getTableConfig.getRecordKeyFields var keys = "" if (recordKeyFields.isPresent) { keys = recordKeyFields.get().mkString(",") var partitionPathFields: util.List[String] = null val partitionFields = tableMetaClient.getTableConfig.getPartitionFields if (partitionFields.isPresent) { import scala.collection.JavaConverters._ partitionPathFields = partitionFields.get().toList.asJava val hiveSyncConfig = getHiveSyncConfig(pathStr, hiveDatabaseName, tableName, partitionPathFields, keys) new HiveSyncTool(hiveSyncConfig, hiveConf, fs).syncHoodieTable() def getHiveSyncConfig(basePath: String, dbName: String, tableName: String, partitionPathFields: util.List[String] = null, keys: String = null): HiveSyncConfig = { val hiveSyncConfig: HiveSyncConfig = new HiveSyncConfig hiveSyncConfig.syncMode = HiveSyncMode.HMS.name hiveSyncConfig.createManagedTable = true hiveSyncConfig.databaseName = dbName hiveSyncConfig.tableName = tableName hiveSyncConfig.basePath = basePath hiveSyncConfig.partitionValueExtractorClass = classOf[MultiPartKeysValueExtractor].getName if (partitionPathFields != null && !partitionPathFields.isEmpty) hiveSyncConfig.partitionFields = partitionPathFields if (!StringUtils.isNullOrEmpty(keys)) hiveSyncConfig.serdeProperties = "primaryKey = " + keys //Spark SQL 更新表时需要该属性确认主键字段 hiveSyncConfig 
}这里利用tableMetaClient来获取表的主键和分区字段,因为同步元数据时Hudi表文件肯定已经存在了,当然如果知道表的主键和分区字段也可以自己指定,这里自动获取会更方便一些。其实主要是获取配置文件,构造同步工具类HiveSyncTool,然后利用syncHoodieTable同步元数据,建Hive表接下来看一下源码,首先new HiveSyncTool时,会根据表类型,当表类型为COW时,this.snapshotTableName = cfg.tableName,snapshotTableName 也就是实时视图等于表名,而读优化视图为空,当为MOR表示,实时视图为tableName_rt,而对于读优化视图,默认情况下为tableName_ro,当配置skipROSuffix=true时,等于表名,这里可以发现当skipROSuffix=true时,MOR表的读优化视图为表名而COW表的实时视图为表名,感觉这里有点矛盾,可能是因为MOR表的读优化视图和COW表的实时视图查询均由HoodieParquetInputFormat实现,具体看后面的源码分析private static final Logger LOG = LogManager.getLogger(HiveSyncTool.class); public static final String SUFFIX_SNAPSHOT_TABLE = "_rt"; public static final String SUFFIX_READ_OPTIMIZED_TABLE = "_ro"; protected final HiveSyncConfig cfg; protected HoodieHiveClient hoodieHiveClient = null; protected String snapshotTableName = null; protected Option<String> roTableName = null; public HiveSyncTool(HiveSyncConfig cfg, HiveConf configuration, FileSystem fs) { super(configuration.getAllProperties(), fs); try { this.hoodieHiveClient = new HoodieHiveClient(cfg, configuration, fs); } catch (RuntimeException e) { if (cfg.ignoreExceptions) { LOG.error("Got runtime exception when hive syncing, but continuing as ignoreExceptions config is set ", e); } else { throw new HoodieHiveSyncException("Got runtime exception when hive syncing", e); this.cfg = cfg; // Set partitionFields to empty, when the NonPartitionedExtractor is used if (NonPartitionedExtractor.class.getName().equals(cfg.partitionValueExtractorClass)) { LOG.warn("Set partitionFields to empty, since the NonPartitionedExtractor is used"); cfg.partitionFields = new ArrayList<>(); if (hoodieHiveClient != null) { switch (hoodieHiveClient.getTableType()) { case COPY_ON_WRITE: // 快照查询/实时视图等于表名 this.snapshotTableName = cfg.tableName; // 读优化查询/读优化视图为空 this.roTableName = Option.empty(); break; case MERGE_ON_READ: // 快照查询/实时视图等于 表名+SUFFIX_SNAPSHOT_TABLE即 tableName_rt this.snapshotTableName = cfg.tableName + SUFFIX_SNAPSHOT_TABLE; // 读优化查询/读优化视图 skipROSuffix默认为false 默认情况下 tableName_ro // 当配置skipROSuffix=true时,等于表名 this.roTableName = cfg.skipROSuffix ? 
Option.of(cfg.tableName) : Option.of(cfg.tableName + SUFFIX_READ_OPTIMIZED_TABLE); break; default: LOG.error("Unknown table type " + hoodieHiveClient.getTableType()); throw new InvalidTableException(hoodieHiveClient.getBasePath()); }接下来再看一下,上篇文章中提到的两个视图的实现类HoodieParquetInputFormat和HoodieParquetRealtimeInputFormat@Override public void syncHoodieTable() { try { if (hoodieHiveClient != null) { doSync(); } catch (RuntimeException re) { throw new HoodieException("Got runtime exception when hive syncing " + cfg.tableName, re); } finally { if (hoodieHiveClient != null) { hoodieHiveClient.close(); protected void doSync() { switch (hoodieHiveClient.getTableType()) { case COPY_ON_WRITE: // COW表只有snapshotTableName,也就是实时视图,查询时是由`HoodieParquetInputFormat`实现 syncHoodieTable(snapshotTableName, false, false); break; case MERGE_ON_READ: // sync a RO table for MOR // MOR 表的读优化视图,以`_RO`结尾,`READ_OPTIMIZED`的缩写,查询时由`HoodieParquetInputFormat`实现 syncHoodieTable(roTableName.get(), false, true); // sync a RT table for MOR // MOR 表的实时视图,以`_RT`结尾,`REAL_TIME`的缩写,查询时由`HoodieParquetRealtimeInputFormat`实现 syncHoodieTable(snapshotTableName, true, false); break; default: LOG.error("Unknown table type " + hoodieHiveClient.getTableType()); throw new InvalidTableException(hoodieHiveClient.getBasePath()); }可以看到,两个表的区别为:1、COW只同步1个表的元数据:实时视图,MOR表同步两个表的元数据,读优化视图和实时视图 2、除了表名外,参数也不一样,这也就决定了查询时用哪个实现类来实现由于这篇文章不是主要讲解同步Hive元数据的源码,所以这里只贴主要实现部分,以后会单独总结一篇同步Hive元数据源码的文章。protected void syncHoodieTable(String tableName, boolean useRealtimeInputFormat, boolean readAsOptimized) { syncSchema(tableName, tableExists, useRealtimeInputFormat, readAsOptimized, schema); private void syncSchema(String tableName, boolean tableExists, boolean useRealTimeInputFormat, boolean readAsOptimized, MessageType schema) { Map<String, String> sparkSerdeProperties = getSparkSerdeProperties(readAsOptimized); String inputFormatClassName = HoodieInputFormatUtils.getInputFormatClassName(baseFileFormat, useRealTimeInputFormat); public static String getInputFormatClassName(HoodieFileFormat baseFileFormat, boolean realtime) { switch (baseFileFormat) { case PARQUET: if (realtime) { return HoodieParquetRealtimeInputFormat.class.getName(); } else { return HoodieParquetInputFormat.class.getName(); case HFILE: if (realtime) { return HoodieHFileRealtimeInputFormat.class.getName(); } else { return HoodieHFileInputFormat.class.getName(); case ORC: return OrcInputFormat.class.getName(); default: throw new HoodieIOException("Hoodie InputFormat not implemented for base file format " + baseFileFormat); }可以看到对于存储类型为PARQUET时,当useRealtimeInputFormat为true时,那么inputFormat的实现类为HoodieParquetRealtimeInputFormat,当为false时,实现类为HoodieParquetInputFormat,至于另外一个参数readAsOptimized,是否为读优化,这个参数是Spark SQL读取时用来判断该表为实时视图还是读优化视图,相关源码// 同步元数据建表时添加参数:`hoodie.query.as.ro.table=true/false` sparkSerdeProperties.put(ConfigUtils.IS_QUERY_AS_RO_TABLE, String.valueOf(readAsOptimized)); Spark读取Hive表时,用来判断,在类`org.apache.hudi.DataSourceOptionsHelper` def parametersWithReadDefaults(parameters: Map[String, String]): Map[String, String] = { // First check if the ConfigUtils.IS_QUERY_AS_RO_TABLE has set by HiveSyncTool, // or else use query type from QUERY_TYPE. 
val queryType = parameters.get(ConfigUtils.IS_QUERY_AS_RO_TABLE) .map(is => if (is.toBoolean) QUERY_TYPE_READ_OPTIMIZED_OPT_VAL else QUERY_TYPE_SNAPSHOT_OPT_VAL) .getOrElse(parameters.getOrElse(QUERY_TYPE.key, QUERY_TYPE.defaultValue())) QUERY_TYPE.key -> queryType ) ++ translateConfigurations(parameters) }体现在建表语句里则为:WITH SERDEPROPERTIES ( | | 'hoodie.query.as.ro.table'='false',inputFormat的语句:TORED AS INPUTFORMAT | | 'org.apache.hudi.hadoop.realtime.HoodieParquetRealtimeInputFormat'完整的建表语句在后面的示例中示例DF这里利用Apache Hudi 入门学习总结中写Hudi并同步到Hive表的程序来验证COW表由于之前的文章中已经有COW表的建表语句了,这里直接copy过来+----------------------------------------------------+ | createtab_stmt | +----------------------------------------------------+ | CREATE TABLE `test_hudi_table_1`( | | `_hoodie_commit_time` string, | | `_hoodie_commit_seqno` string, | | `_hoodie_record_key` string, | | `_hoodie_partition_path` string, | | `_hoodie_file_name` string, | | `id` int, | | `name` string, | | `value` int, | | `ts` int) | | PARTITIONED BY ( | | `dt` string) | | ROW FORMAT SERDE | | 'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe' | | WITH SERDEPROPERTIES ( | | 'hoodie.query.as.ro.table'='false', | | 'path'='/tmp/test_hudi_table_1', | | 'primaryKey'='id') | | STORED AS INPUTFORMAT | | 'org.apache.hudi.hadoop.HoodieParquetInputFormat' | | OUTPUTFORMAT | | 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat' | | LOCATION | | 'hdfs://cluster1/tmp/test_hudi_table_1' | | TBLPROPERTIES ( | | 'last_commit_time_sync'='20220512101500', | | 'spark.sql.sources.provider'='hudi', | | 'spark.sql.sources.schema.numPartCols'='1', | | 'spark.sql.sources.schema.numParts'='1', | | 'spark.sql.sources.schema.part.0'='{"type":"struct","fields":[{"name":"_hoodie_commit_time","type":"string","nullable":true,"metadata":{}},{"name":"_hoodie_commit_seqno","type":"string","nullable":true,"metadata":{}},{"name":"_hoodie_record_key","type":"string","nullable":true,"metadata":{}},{"name":"_hoodie_partition_path","type":"string","nullable":true,"metadata":{}},{"name":"_hoodie_file_name","type":"string","nullable":true,"metadata":{}},{"name":"id","type":"integer","nullable":false,"metadata":{}},{"name":"name","type":"string","nullable":true,"metadata":{}},{"name":"value","type":"integer","nullable":false,"metadata":{}},{"name":"ts","type":"integer","nullable":false,"metadata":{}},{"name":"dt","type":"string","nullable":true,"metadata":{}}]}', | | 'spark.sql.sources.schema.partCol.0'='dt', | | 'transient_lastDdlTime'='1652320902') | +----------------------------------------------------+可以看到'hoodie.query.as.ro.table'='false',对于COW表的视图为实时视图,inputFormat为org.apache.hudi.hadoop.HoodieParquetInputFormatMOR表我们将之前的save2HudiSyncHiveWithPrimaryKey方法中加个表类型的参数option(TABLE_TYPE.key(), MOR_TABLE_TYPE_OPT_VAL),将表名库名修改一下:val databaseName = "test" val tableName1 = "test_hudi_table_df_mor" val primaryKey = "id" val preCombineField = "ts" val partitionField = "dt" val tablePath1 = "/tmp/test_hudi_table_df_mor" 同步Hive表成功后,show tables,发现建了两张表test_hudi_table_df_mor_ro和test_hudi_table_df_mor_rt,通过上面的源码分析部分,我们知道_ro为读优化表,_rt为实时表,我们再看一下建表语句:+----------------------------------------------------+ | createtab_stmt | +----------------------------------------------------+ | CREATE TABLE `test_hudi_table_df_mor_ro`( | | `_hoodie_commit_time` string, | | `_hoodie_commit_seqno` string, | | `_hoodie_record_key` string, | | `_hoodie_partition_path` string, | | `_hoodie_file_name` string, | | `id` int, | | `name` string, | | `value` int, | | `ts` int) | | PARTITIONED BY ( | | `dt` 
string) | | ROW FORMAT SERDE | | 'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe' | | WITH SERDEPROPERTIES ( | | 'hoodie.query.as.ro.table'='true', | | 'path'='/tmp/test_hudi_table_df_mor', | | 'primaryKey'='id') | | STORED AS INPUTFORMAT | | 'org.apache.hudi.hadoop.HoodieParquetInputFormat' | | OUTPUTFORMAT | | 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat' | | LOCATION | | 'hdfs://cluster1/tmp/test_hudi_table_df_mor' | | TBLPROPERTIES ( | | 'last_commit_time_sync'='20220629145934', | | 'spark.sql.sources.provider'='hudi', | | 'spark.sql.sources.schema.numPartCols'='1', | | 'spark.sql.sources.schema.numParts'='1', | | 'spark.sql.sources.schema.part.0'='{"type":"struct","fields":[{"name":"_hoodie_commit_time","type":"string","nullable":true,"metadata":{}},{"name":"_hoodie_commit_seqno","type":"string","nullable":true,"metadata":{}},{"name":"_hoodie_record_key","type":"string","nullable":true,"metadata":{}},{"name":"_hoodie_partition_path","type":"string","nullable":true,"metadata":{}},{"name":"_hoodie_file_name","type":"string","nullable":true,"metadata":{}},{"name":"id","type":"integer","nullable":false,"metadata":{}},{"name":"name","type":"string","nullable":true,"metadata":{}},{"name":"value","type":"integer","nullable":false,"metadata":{}},{"name":"ts","type":"integer","nullable":false,"metadata":{}},{"name":"dt","type":"string","nullable":true,"metadata":{}}]}', | | 'spark.sql.sources.schema.partCol.0'='dt', | | 'transient_lastDdlTime'='1656486059') | +----------------------------------------------------+ +----------------------------------------------------+ | createtab_stmt | +----------------------------------------------------+ | CREATE TABLE `test_hudi_table_df_mor_rt`( | | `_hoodie_commit_time` string, | | `_hoodie_commit_seqno` string, | | `_hoodie_record_key` string, | | `_hoodie_partition_path` string, | | `_hoodie_file_name` string, | | `id` int, | | `name` string, | | `value` int, | | `ts` int) | | PARTITIONED BY ( | | `dt` string) | | ROW FORMAT SERDE | | 'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe' | | WITH SERDEPROPERTIES ( | | 'hoodie.query.as.ro.table'='false', | | 'path'='/tmp/test_hudi_table_df_mor', | | 'primaryKey'='id') | | STORED AS INPUTFORMAT | | 'org.apache.hudi.hadoop.realtime.HoodieParquetRealtimeInputFormat' | | OUTPUTFORMAT | | 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat' | | LOCATION | | 'hdfs://cluster1/tmp/test_hudi_table_df_mor' | | TBLPROPERTIES ( | | 'last_commit_time_sync'='20220629145934', | | 'spark.sql.sources.provider'='hudi', | | 'spark.sql.sources.schema.numPartCols'='1', | | 'spark.sql.sources.schema.numParts'='1', | | 'spark.sql.sources.schema.part.0'='{"type":"struct","fields":[{"name":"_hoodie_commit_time","type":"string","nullable":true,"metadata":{}},{"name":"_hoodie_commit_seqno","type":"string","nullable":true,"metadata":{}},{"name":"_hoodie_record_key","type":"string","nullable":true,"metadata":{}},{"name":"_hoodie_partition_path","type":"string","nullable":true,"metadata":{}},{"name":"_hoodie_file_name","type":"string","nullable":true,"metadata":{}},{"name":"id","type":"integer","nullable":false,"metadata":{}},{"name":"name","type":"string","nullable":true,"metadata":{}},{"name":"value","type":"integer","nullable":false,"metadata":{}},{"name":"ts","type":"integer","nullable":false,"metadata":{}},{"name":"dt","type":"string","nullable":true,"metadata":{}}]}', | | 'spark.sql.sources.schema.partCol.0'='dt', | | 'transient_lastDdlTime'='1656486059') | 
+----------------------------------------------------+可以看到_ro和_rt有两个区别,一个是hoodie.query.as.ro.table,另外一个是INPUTFORMAT,对于Hive查询来说,只有INPUTFORMAT有用,hoodie.query.as.ro.table是Spark查询时用来判断是否为读优化表的,因为MOR表只有一次写入,所以只有parquet文件,没有增量文件.log,所以两个表查询出来的结构是一样的,后面用Spark SQL示例两者的区别Spark SQLHudi Spark SQL建表,不了解的可以参考:Hudi Spark SQL总结,之所以再提一下Spark SQL建表,是因为我发现他和DF写数据再同步建表有些许差别COW表create table test_hudi_table_cow ( id int, name string, price double, ts long, dt string ) using hudi partitioned by (dt) options ( primaryKey = 'id', preCombineField = 'ts', type = 'cow' );建表完成后,在Hive里查看Hive表的建表语句show create table test_hudi_table_cow; +----------------------------------------------------+ | createtab_stmt | +----------------------------------------------------+ | CREATE TABLE `test_hudi_table_cow`( | | `_hoodie_commit_time` string, | | `_hoodie_commit_seqno` string, | | `_hoodie_record_key` string, | | `_hoodie_partition_path` string, | | `_hoodie_file_name` string, | | `id` int, | | `name` string, | | `price` double, | | `ts` bigint) | | PARTITIONED BY ( | | `dt` string) | | ROW FORMAT SERDE | | 'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe' | | WITH SERDEPROPERTIES ( | | 'path'='hdfs://cluster1/warehouse/tablespace/managed/hive/test.db/test_hudi_table_cow', | | 'preCombineField'='ts', | | 'primaryKey'='id', | | 'type'='cow') | | STORED AS INPUTFORMAT | | 'org.apache.hudi.hadoop.HoodieParquetInputFormat' | | OUTPUTFORMAT | | 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat' | | LOCATION | | 'hdfs://cluster1/warehouse/tablespace/managed/hive/test.db/test_hudi_table_cow' | | TBLPROPERTIES ( | | 'last_commit_time_sync'='20220628152846', | | 'spark.sql.create.version'='2.4.5', | | 'spark.sql.sources.provider'='hudi', | | 'spark.sql.sources.schema.numPartCols'='1', | | 'spark.sql.sources.schema.numParts'='1', | | 'spark.sql.sources.schema.part.0'='{"type":"struct","fields":[{"name":"_hoodie_commit_time","type":"string","nullable":true,"metadata":{}},{"name":"_hoodie_commit_seqno","type":"string","nullable":true,"metadata":{}},{"name":"_hoodie_record_key","type":"string","nullable":true,"metadata":{}},{"name":"_hoodie_partition_path","type":"string","nullable":true,"metadata":{}},{"name":"_hoodie_file_name","type":"string","nullable":true,"metadata":{}},{"name":"id","type":"integer","nullable":true,"metadata":{}},{"name":"name","type":"string","nullable":true,"metadata":{}},{"name":"price","type":"double","nullable":true,"metadata":{}},{"name":"ts","type":"long","nullable":true,"metadata":{}},{"name":"dt","type":"string","nullable":true,"metadata":{}}]}', | | 'spark.sql.sources.schema.partCol.0'='dt', | | 'transient_lastDdlTime'='1656401195') | +----------------------------------------------------+我们发现,Spark SQL建的表中没有hoodie.query.as.ro.table,我看了一下源码发现(上面有提到),Spark查询时val queryType = parameters.get(ConfigUtils.IS_QUERY_AS_RO_TABLE) .map(is => if (is.toBoolean) QUERY_TYPE_READ_OPTIMIZED_OPT_VAL else QUERY_TYPE_SNAPSHOT_OPT_VAL) .getOrElse(parameters.getOrElse(QUERY_TYPE.key, QUERY_TYPE.defaultValue()))QUERY_TYPE的默认值为QUERY_TYPE_SNAPSHOT_OPT_VAL,也就是快照查询,COW只有快照查询也就是默认值没有问题,QUERY_TYPE有三种类型:QUERY_TYPE_SNAPSHOT_OPT_VAL, QUERY_TYPE_READ_OPTIMIZED_OPT_VAL, QUERY_TYPE_INCREMENTAL_OPT_VAL,分别对应实时查询,读优化查询,增量查询,至于怎么利用Spark实现这些查询,这里不涉及MOR表create table test_hudi_table_mor ( id int, name string, price double, ts long, dt string ) using hudi partitioned by (dt) options ( primaryKey = 'id', preCombineField = 'ts', type = 'mor' );我们用Spark创建MOR表后,show 
tables看一下发现只有test_hudi_table_mor表,没有对应的_rt、_ro表,其实SparkSQL建表的时候还没用到Hive同步工具类HiveSyncTool,SparkSQL有自己的一套建表逻辑,而只有在写数据时才会用到HiveSyncTool,这也就是上面讲到的SparkSQL和DF同步建出来的表有差异的原因,接下来我们插入一条数据,来看一下结果insert into test_hudi_table_mor values (1,'hudi',10,100,'2021-05-05'); 我们发现多了两张表,因为这两张表,是insert 数据然后利用同步工具类HiveSyncTool创建的表,所以和程序中用DF写数据同步建的表是一样的,区别是内部表和外部表的区别,其实SparkSQL的逻辑如果表路径不等于库路径+表名,那么为外部表,这是合理的,而我们用DF建的表是因为我们程序中指定了内部表的参数,这样我们drop其中一张表就可以删掉数据,而用SparkSQL建的表,其实多了一张表内部表test_hudi_table_mor,我们可以通过drop这张表来删除数据。+----------------------------------------------------+ | createtab_stmt | +----------------------------------------------------+ | CREATE EXTERNAL TABLE `test_hudi_table_mor_ro`( | | `_hoodie_commit_time` string COMMENT '', | | `_hoodie_commit_seqno` string COMMENT '', | | `_hoodie_record_key` string COMMENT '', | | `_hoodie_partition_path` string COMMENT '', | | `_hoodie_file_name` string COMMENT '', | | `id` int COMMENT '', | | `name` string COMMENT '', | | `price` double COMMENT '', | | `ts` bigint COMMENT '') | | PARTITIONED BY ( | | `dt` string COMMENT '') | | ROW FORMAT SERDE | | 'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe' | | WITH SERDEPROPERTIES ( | | 'hoodie.query.as.ro.table'='true', | | 'path'='hdfs://cluster1/warehouse/tablespace/managed/hive/test.db/test_hudi_table_mor') | | STORED AS INPUTFORMAT | | 'org.apache.hudi.hadoop.HoodieParquetInputFormat' | | OUTPUTFORMAT | | 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat' | | LOCATION | | 'hdfs://cluster1/warehouse/tablespace/managed/hive/test.db/test_hudi_table_mor' | | TBLPROPERTIES ( | | 'last_commit_time_sync'='20220629153816', | | 'spark.sql.sources.provider'='hudi', | | 'spark.sql.sources.schema.numPartCols'='1', | | 'spark.sql.sources.schema.numParts'='1', | | 'spark.sql.sources.schema.part.0'='{"type":"struct","fields":[{"name":"_hoodie_commit_time","type":"string","nullable":true,"metadata":{}},{"name":"_hoodie_commit_seqno","type":"string","nullable":true,"metadata":{}},{"name":"_hoodie_record_key","type":"string","nullable":true,"metadata":{}},{"name":"_hoodie_partition_path","type":"string","nullable":true,"metadata":{}},{"name":"_hoodie_file_name","type":"string","nullable":true,"metadata":{}},{"name":"id","type":"integer","nullable":true,"metadata":{}},{"name":"name","type":"string","nullable":true,"metadata":{}},{"name":"price","type":"double","nullable":true,"metadata":{}},{"name":"ts","type":"long","nullable":true,"metadata":{}},{"name":"dt","type":"string","nullable":true,"metadata":{}}]}', | | 'spark.sql.sources.schema.partCol.0'='dt', | | 'transient_lastDdlTime'='1656488248') | +----------------------------------------------------+ +----------------------------------------------------+ | createtab_stmt | +----------------------------------------------------+ | CREATE EXTERNAL TABLE `test_hudi_table_mor_rt`( | | `_hoodie_commit_time` string COMMENT '', | | `_hoodie_commit_seqno` string COMMENT '', | | `_hoodie_record_key` string COMMENT '', | | `_hoodie_partition_path` string COMMENT '', | | `_hoodie_file_name` string COMMENT '', | | `id` int COMMENT '', | | `name` string COMMENT '', | | `price` double COMMENT '', | | `ts` bigint COMMENT '') | | PARTITIONED BY ( | | `dt` string COMMENT '') | | ROW FORMAT SERDE | | 'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe' | | WITH SERDEPROPERTIES ( | | 'hoodie.query.as.ro.table'='false', | | 'path'='hdfs://cluster1/warehouse/tablespace/managed/hive/test.db/test_hudi_table_mor') | | STORED AS INPUTFORMAT | | 
'org.apache.hudi.hadoop.realtime.HoodieParquetRealtimeInputFormat' | | OUTPUTFORMAT | | 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat' | | LOCATION | | 'hdfs://cluster1/warehouse/tablespace/managed/hive/test.db/test_hudi_table_mor' | | TBLPROPERTIES ( | | 'last_commit_time_sync'='20220629153816', | | 'spark.sql.sources.provider'='hudi', | | 'spark.sql.sources.schema.numPartCols'='1', | | 'spark.sql.sources.schema.numParts'='1', | | 'spark.sql.sources.schema.part.0'='{"type":"struct","fields":[{"name":"_hoodie_commit_time","type":"string","nullable":true,"metadata":{}},{"name":"_hoodie_commit_seqno","type":"string","nullable":true,"metadata":{}},{"name":"_hoodie_record_key","type":"string","nullable":true,"metadata":{}},{"name":"_hoodie_partition_path","type":"string","nullable":true,"metadata":{}},{"name":"_hoodie_file_name","type":"string","nullable":true,"metadata":{}},{"name":"id","type":"integer","nullable":true,"metadata":{}},{"name":"name","type":"string","nullable":true,"metadata":{}},{"name":"price","type":"double","nullable":true,"metadata":{}},{"name":"ts","type":"long","nullable":true,"metadata":{}},{"name":"dt","type":"string","nullable":true,"metadata":{}}]}', | | 'spark.sql.sources.schema.partCol.0'='dt', | | 'transient_lastDdlTime'='1656488248') | +----------------------------------------------------+我们再插入一条数据和更新一条数据,目的是为了生成log文件,来看两个表的不同insert into test_hudi_table_mor values (2,'hudi',11,110,'2021-05-05'); update test_hudi_table_mor set name='hudi_update' where id =1; select * from test_hudi_table_mor_ro; +----------------------+-----------------------+---------------------+-------------------------+----------------------------------------------------+-----+-------+--------+------+-------------+ | _hoodie_commit_time | _hoodie_commit_seqno | _hoodie_record_key | _hoodie_partition_path | _hoodie_file_name | id | name | price | ts | dt | +----------------------+-----------------------+---------------------+-------------------------+----------------------------------------------------+-----+-------+--------+------+-------------+ | 20220629153718 | 20220629153718_0_1 | id:1 | dt=2021-05-05 | bc415cdb-2b21-4d09-a3f6-a779357aa819-0_0-125-7240_20220629153718.parquet | 1 | hudi | 10.0 | 100 | 2021-05-05 | | 20220629153803 | 20220629153803_0_2 | id:2 | dt=2021-05-05 | bc415cdb-2b21-4d09-a3f6-a779357aa819-0_0-154-8848_20220629153803.parquet | 2 | hudi | 11.0 | 110 | 2021-05-05 | +----------------------+-----------------------+---------------------+-------------------------+----------------------------------------------------+-----+-------+--------+------+-------------+ select * from test_hudi_table_mor_rt; +----------------------+-----------------------+---------------------+-------------------------+----------------------------------------------------+-----+--------------+--------+------+-------------+ | _hoodie_commit_time | _hoodie_commit_seqno | _hoodie_record_key | _hoodie_partition_path | _hoodie_file_name | id | name | price | ts | dt | +----------------------+-----------------------+---------------------+-------------------------+----------------------------------------------------+-----+--------------+--------+------+-------------+ | 20220629153816 | 20220629153816_0_1 | id:1 | dt=2021-05-05 | bc415cdb-2b21-4d09-a3f6-a779357aa819-0 | 1 | hudi_update | 10.0 | 100 | 2021-05-05 | | 20220629153803 | 20220629153803_0_2 | id:2 | dt=2021-05-05 | bc415cdb-2b21-4d09-a3f6-a779357aa819-0_0-154-8848_20220629153803.parquet | 2 | hudi | 11.0 | 110 | 2021-05-05 | 
+----------------------+-----------------------+---------------------+-------------------------+----------------------------------------------------+-----+--------------+--------+------+-------------+我们发现_ro只能将新插入的查出来,而没有将更新的那条数据查出来,而_rt是将最新的数据都查出来,我们再插入和更新时看一下存储文件hadoop fs -ls hdfs://cluster1/warehouse/tablespace/managed/hive/test.db/test_hudi_table_mor/dt=2021-05-05 Found 4 items -rw-rw----+ 3 spark hadoop 975 2022-06-29 15:38 hdfs://cluster1/warehouse/tablespace/managed/hive/test.db/test_hudi_table_mor/dt=2021-05-05/.bc415cdb-2b21-4d09-a3f6-a779357aa819-0_20220629153803.log.1_0-186-10660 -rw-rw----+ 3 spark hadoop 93 2022-06-29 15:37 hdfs://cluster1/warehouse/tablespace/managed/hive/test.db/test_hudi_table_mor/dt=2021-05-05/.hoodie_partition_metadata -rw-rw----+ 3 spark hadoop 435283 2022-06-29 15:37 hdfs://cluster1/warehouse/tablespace/managed/hive/test.db/test_hudi_table_mor/dt=2021-05-05/bc415cdb-2b21-4d09-a3f6-a779357aa819-0_0-125-7240_20220629153718.parquet -rw-rw----+ 3 spark hadoop 434991 2022-06-29 15:38 hdfs://cluster1/warehouse/tablespace/managed/hive/test.db/test_hudi_table_mor/dt=2021-05-05/bc415cdb-2b21-4d09-a3f6-a779357aa819-0_0-154-8848_20220629153803.parquet发现,insert时是生成新的parquet文件,而更新时是生成.log文件,所以_ro表将新插入的数据也出来了,因为_ro只能查parquet文件(基本文件)中的数据,而_rt表可以动态合并最新文件切片的基本文件(例如parquet)和增量文件(例如avro)来提供近实时数据集(几分钟的延迟),至于MOR表的写入逻辑(什么条件下写增量文件)和合并逻辑(什么情况下合并增量文件为parquet),这里不深入讲解,以后我会单独总结。
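如果想用 Spark DataFrame API 验证同样的差异,可以参考下面这个简单示意(假设使用 Hudi 0.9 以上版本的 DataSource 读取,更老的版本可能需要 load(basePath + "/*") 这种通配路径写法),basePath 就是上面这张 MOR 表的路径,MorQueryTypeDemo 这个类名是示例自拟的

import org.apache.spark.sql.SparkSession

object MorQueryTypeDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("mor-query-type-demo").getOrCreate()
    val basePath = "hdfs://cluster1/warehouse/tablespace/managed/hive/test.db/test_hudi_table_mor"

    // 快照查询(对应 _rt 表):动态合并 parquet 基本文件和 .log 增量文件,id=1 的 name 为 hudi_update
    spark.read.format("hudi")
      .option("hoodie.datasource.query.type", "snapshot")
      .load(basePath)
      .select("_hoodie_commit_time", "id", "name", "price", "ts", "dt")
      .show(false)

    // 读优化查询(对应 _ro 表):只读 parquet 基本文件,看不到 update 之后的结果
    spark.read.format("hudi")
      .option("hoodie.datasource.query.type", "read_optimized")
      .load(basePath)
      .select("_hoodie_commit_time", "id", "name", "price", "ts", "dt")
      .show(false)

    spark.stop()
  }
}

两次查询的结果应该分别和上面 _rt、_ro 表的查询结果一致。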
前言

简单总结如何利用Hive增量查询Hudi表

同步Hive

我们在写数据时,可以配置同步Hive参数,生成对应的Hive表,用来查询Hudi表。具体来说,在写入过程中传递了两个由table name命名的Hive表。例如,如果table name = hudi_tbl,我们得到:hudi_tbl 实现了由 HoodieParquetInputFormat 支持的数据集的读优化视图,从而提供了纯列式数据;hudi_tbl_rt 实现了由 HoodieParquetRealtimeInputFormat 支持的数据集的实时视图,从而提供了基础数据和日志数据的合并视图。上面的两条对比摘自官网,这里解释一下:其中实时视图_rt表只有在MOR表同步Hive元数据时才会有,并且hudi_tbl在表类型为MOR且配置了skipROSuffix=true时才为读优化视图,当为false(默认为false)时,读优化视图应该为hudi_tbl_ro;当表类型为COW时,hudi_tbl应该为实时视图,所以官网对这一块的解释有一点问题,大家注意。

Hive查询Hudi

按照我之前总结的Apache Hudi 入门学习总结中Hive和Tez部分配置,就可以在Hive命令行里用Hive SQL查询Hudi表了

增量查询

修改配置hive-site.xml

在Hive SQL白名单里添加hoodie.*,其他均为已存在的配置,还可以根据需要添加其他白名单,如:tez.*|parquet.*|planner.*
hive.security.authorization.sqlstd.confwhitelist.append hoodie.*|mapred.*|hive.*|mapreduce.*|spark.*

设置参数

以表名为hudi_tbl为例,连接Hive connect/Hive Shell后:
设置该表为增量表
set hoodie.hudi_tbl.consume.mode=INCREMENTAL;
设置增量开始的时间戳(不包含),作用:起到文件级别过滤,减少MAP数。
set hoodie.hudi_tbl.consume.start.timestamp=20211015182330;
设置增量消费的COMMIT次数,默认设置为-1即可,表示增量消费到目前新数据,可自己根据需要修改commit次数
set hoodie.hudi_tbl.consume.max.commits=-1;

查询语句

select * from hudi_tbl where `_hoodie_commit_time` > "20211015182330";

因小文件合并机制,在新的commit时间戳的文件中,包含旧数据,因此需要再加where做二次过滤。
注:这里设置的参数有效范围为connect session。Hudi 0.9.0版本只支持表名参数,不支持数据库限定,这样设置了hudi_tbl为增量表后,所有数据库的该表名的表查询时都为增量查询模式,起始时间等参数为最后一次设定值;在后面的新版本中,添加了数据库限定,如hudi数据库
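作为对比,下面给一个用 Spark DataFrame API 做增量查询的最小示意(非 Hive 的方式,路径以前文的 /tmp/test_hudi_table_1 为例,时间戳沿用上面的 20211015182330,用到的是 Hudi 提供的 hoodie.datasource.query.type 和 hoodie.datasource.read.begin.instanttime 两个选项,HudiIncrementalQueryDemo 这个类名是示例自拟的)

import org.apache.spark.sql.SparkSession

object HudiIncrementalQueryDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("hudi-incremental-demo").getOrCreate()
    val basePath = "/tmp/test_hudi_table_1"

    // 增量查询:只返回 commit 时间大于 20211015182330 的数据,一般不需要再像 Hive 那样手动加 where 二次过滤
    val incDF = spark.read.format("hudi")
      .option("hoodie.datasource.query.type", "incremental")
      .option("hoodie.datasource.read.begin.instanttime", "20211015182330")
      // 不指定 hoodie.datasource.read.end.instanttime 时,默认增量读到最新的 commit
      .load(basePath)

    incDF.select("_hoodie_commit_time", "id", "name", "value", "ts", "dt").show(false)

    spark.stop()
  }
}

这种方式的参数只作用于当前这一次读取,不会像 Hive 的 set 那样影响整个 session。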
前言记录总结Hadoop源码编译打包过程,根据源码里的文档,一开始以为不支持在Windows系统上打包,只支持Unix和Mac,所以这里我在自己虚拟机centos7系统上编译,后来在文档后面部分才发现也支持在Windows上编译,不过还需要安装Visual Studio 2010,可能不如还不如在虚拟机上编译简单,如果想尝试在Windows上编译,可以看源码里的文档BUILDING.txt中Building on Windows的部分代码因之前没有下载过hadoop的源码,所以需要先下载hadoop的源码 git clone https://github.com/apache/hadoop.git git命令克隆源码,克隆的过程中可能会有异常:error: unable to create file hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/mockframework/ProportionalCapacityPreemptionPolicyMockFramework.java: Filename too long error: unable to create file hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-documentstore/src/main/java/org/apache/hadoop/yarn/server/timelineservice/documentstore/collection/document/flowactivity/FlowActivityDocument.java: Filename too long error: unable to create file hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client/src/main/java/org/apache/hadoop/yarn/server/timelineservice/reader/filter/TimelineFilterUtils.java: Filename too long error: unable to create file hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client/src/main/java/org/apache/hadoop/yarn/server/timelineservice/reader/filter/package-info.java: Filename too long error: unable to create file hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/HBaseStorageMonitor.java: Filename too long error: unable to create file hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/HBaseTimelineReaderImpl.java: Filename too long error: unable to create file hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/HBaseTimelineSchemaCreator.java: Filename too long error: unable to create file hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/HBaseTimelineWriterImpl.java: Filename too long fatal: cannot create directory at 'hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/application': Filename too long因文件名过长,不能创建文件,只需要修改配置git config --global core.longpaths true删除clone失败的文件,重新clone就可以了,然后切换自己想要打包的版本分支,我这里使用的是分支3.3.1: branch-3.3.1这里最好在虚拟机上clone代码,如果在Windows系统上克隆代码再上传到虚拟机上,则在编译打包时会出现脚本因行结尾字符不一致导致的问题,后面有说明解决方法环境BUILDING.txt源码BUILDING.txt里,对必要环境依赖做了说明:---------------------------------------------------------------------------------- Requirements: * Unix System * JDK 1.8 * Maven 3.3 or later * Protocol Buffers 3.7.1 (if compiling native code) * CMake 3.1 or newer (if compiling native code) * Zlib devel (if compiling native code) * Cyrus SASL devel (if compiling native code) * One of the compilers that support thread_local storage: GCC 4.8.1 or later, Visual Studio, Clang (community 
version), Clang (version for iOS 9 and later) (if compiling native code) * openssl devel (if compiling native hadoop-pipes and to get the best HDFS encryption performance) * Linux FUSE (Filesystem in Userspace) version 2.6 or above (if compiling fuse_dfs) * Doxygen ( if compiling libhdfspp and generating the documents ) * Internet connection for first build (to fetch all Maven and Hadoop dependencies) * python (for releasedocs) * bats (for shell code testing) * Node.js / bower / Ember-cli (for YARN UI v2 building) ----------------------------------------------------------------------------------已有环境依赖Unix System Centos7系统JDK1.8 开发常用 (1.8.0_45)Maven 3.3 or later 开发常用 (3.8.1)git (clone代码用)Python 3.8.0 (之前有使用需求,非必要)Node v12.16.3Native libraries可以看到如果要编译Hadoop Native libraries,需要安装很多依赖,如果选择不编译则会简单很多,这里选择编译,关于Hadoop Native libraries,大家如果有不懂的可以自己查资料了解Hadoop源码文档里提供了Ubuntu 14.04的安装命令,因为这里是centos系统,将apt-get改成yum试一下(其实也提供了centos的安装命令,不过是centos8,不知centos8和7的差别大不大,我没有再尝试,因为我已经安装完依赖并打包成功了)yum install -y build-essential autoconf automake libtool cmake zlib1g-dev pkg-config libssl-dev libsasl2-dev结果发现autoconf和automake已安装,build-essential zlib1g-dev pkg-config libssl-dev libsasl2-dev找不到相关的包,这样只安装了cmakeNo package build-essential available. Package autoconf-2.69-11.el7.noarch already installed and latest version Package automake-1.13.4-3.el7.noarch already installed and latest version No package zlib1g-dev available. No package pkg-config available. No package libssl-dev available. No package libsasl2-dev available Installed: cmake.x86_64 0:2.8.12.2-2.el7 Dependency Installed: libarchive.x86_64 0:3.1.2-14.el7_7 Updated: libtool.x86_64 0:2.4.2-22.el7_3但是cmake的版本为2.8:$ cmake -version cmake version 2.8.12.2而所需要的版本是>=3.1 版本不符合,需要升级版本,网上查资料,记录安装过程:升级CMAKE首先卸载原先2.8版本的cmake yum -y remove cmake 下载cmakge的安装包:https://cmake.org/files/v3.23/cmake-3.23.0-rc1.tar.gz解压tar -zxvf cmake-3.23.0-rc1.tar.gz 编译cd cmake-3.23.0-rc1 ./configure编译过程中报以下异常:-- Could NOT find OpenSSL, try to set the path to OpenSSL root folder in the system variable OPENSSL_ROOT_DIR (missing: OPENSSL_CRYPTO_LIBRARY OPENSSL_INCLUDE_DIR)没有OpenSSL,上面的Requirements里也需要安装OpenSSL,那么先安装OpenSSL yum install openssl openssl-devel -y 安装完后,重新编译cmake编译成功后安装make -j$(nproc) (比较慢) sudo make install安装完成后验证cmake版本:$ cmake -version cmake version 3.23.0-rc1 CMake suite maintained and supported by Kitware (kitware.com/cmake). 参考:https://blog.csdn.net/qq_22938603/article/details/122964218ZLIB因按照文档上的命令很多依赖没有安装成功,那么我们单独尝试安装,这里发现zlib已经安装过了$ yum list installed | grep zlib-devel zlib-devel.x86_64 1.2.7-18.el7 @base OPENSSLopenssl 在升级cmake时也安装过了$ yum list installed | grep openssl-devel openssl-devel.x86_64 1:1.0.2k-25.el7_9 @updates安装 Protocol Buffers 3.7.1curl -L -s -S https://github.com/protocolbuffers/protobuf/releases/download/v3.7.1/protobuf-java-3.7.1.tar.gz -o protobuf-3.7.1.tar.gz tar xzf protobuf-3.7.1.tar.gz --strip-components 1 -C protobuf-3.7-src && cd protobuf-3.7-src ./configure make -j$(nproc) (比较慢) sudo make install如果命令下载比较慢,也可以直接在浏览器上下载再上传,安装完成后验证一下protoc的版本$ protoc --version libprotoc 3.7.1安装fuse不清楚fuse_dfs是干啥的,这里也没用到,不过试着装了一下$ yum list installed | grep fuse 发现没有安装,再查找fuse $ yum list | grep fuse https://sbt.bintray.com/rpm/repodata/repomd.xml: [Errno 14] HTTPS Error 502 - Bad Gateway Trying other mirror. 
fuse.x86_64 2.9.2-11.el7 base fuse-devel.i686 2.9.2-11.el7 base fuse-devel.x86_64 2.9.2-11.el7 base fuse-libs.i686 2.9.2-11.el7 base fuse-libs.x86_64 2.9.2-11.el7 base fuse-overlayfs.x86_64 0.7.2-6.el7_8 extras fuse3.x86_64 3.6.1-4.el7 extras fuse3-devel.x86_64 3.6.1-4.el7 extras fuse3-libs.x86_64 3.6.1-4.el7 extras fuseiso.x86_64 20070708-15.el7 base fusesource-pom.noarch 1.9-7.el7 base glusterfs-fuse.x86_64 6.0-61.el7 updates gvfs-fuse.x86_64 1.36.2-5.el7_9 updates ostree-fuse.x86_64 2019.1-2.el7 extrasyum install -y fuse.x86_64 打包到这里感觉自己所需要的依赖都已经装好了,先试一下,看一下有没有问题,这里第一次编译打包时需要下载很多依赖,比较慢,具体还依赖自己的网络环境打包命令:拷贝源码里的文档:---------------------------------------------------------------------------------- Building distributions: Create binary distribution without native code and without documentation: $ mvn package -Pdist -DskipTests -Dtar -Dmaven.javadoc.skip=true Create binary distribution with native code and with documentation: $ mvn package -Pdist,native,docs -DskipTests -Dtar Create source distribution: $ mvn package -Psrc -DskipTests Create source and binary distributions with native code and documentation: $ mvn package -Pdist,native,docs,src -DskipTests -Dtar Create a local staging version of the website (in /tmp/hadoop-site) $ mvn clean site -Preleasedocs; mvn site:stage -DstagingDirectory=/tmp/hadoop-site Note that the site needs to be built in a second pass after other artifacts. ----------------------------------------------------------------------------------根据自己的需求这里选择了下面的参数:mvn clean package -Pdist,native -DskipTests -Dtar 异常1/root/workspace/hadoop/hadoop-project/../dev-support/bin/dist-copynativelibs: line 16: $'\r': command not found : invalid option namep/hadoop-project/../dev-support/bin/dist-copynativelibs: line 17: set: pipefail /root/workspace/hadoop/hadoop-project/../dev-support/bin/dist-copynativelibs: line 18: $'\r': command not found /root/workspace/hadoop/hadoop-project/../dev-support/bin/dist-copynativelibs: line 21: syntax error near unexpected token `$'\r'' 'root/workspace/hadoop/hadoop-project/../dev-support/bin/dist-copynativelibs: line 21: `function bundle_native_lib()参考文章:https://blog.csdn.net/heihaozi/article/details/113602205解决方法:yum install -y dos2unix dos2unix dev-support/bin/* 异常2[INFO] --- frontend-maven-plugin:1.11.2:yarn (yarn install) @ hadoop-yarn-applications-catalog-webapp --- [INFO] Running 'yarn ' in /root/workspace/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/target [INFO] yarn install v1.7.0 [INFO] info No lockfile found. [INFO] [1/4] Resolving packages... [INFO] warning angular-route@1.6.10: For the actively supported Angular, see https://www.npmjs.com/package/@angular/core. AngularJS support has officially ended. For extended AngularJS support options, see https://goo.gle/angularjs-path-forward. [INFO] warning angular@1.6.10: For the actively supported Angular, see https://www.npmjs.com/package/@angular/core. AngularJS support has officially ended. For extended AngularJS support options, see https://goo.gle/angularjs-path-forward. [INFO] info There appears to be trouble with your network connection. Retrying... [INFO] [2/4] Fetching packages... [INFO] error winston@3.7.2: The engine "node" is incompatible with this module. Expected version ">= 12.0.0". [INFO] error Found incompatible module [INFO] info Visit https://yarnpkg.com/en/docs/cli/install for documentation about this command. 
[INFO] ------------------------------------------------------------------------ ERROR] Failed to execute goal com.github.eirslett:frontend-maven-plugin:1.11.2:yarn (yarn install) on project hadoop-yarn-applications-catalog-webapp: Failed to run task: 'yarn ' failed. org.apache.commons.exec.ExecuteException: Process exited with an error: 1 (Exit value: 1) -> [Help 1] [ERROR] [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch. [ERROR] Re-run Maven using the -X switch to enable full debug logging. [ERROR] [ERROR] For more information about the errors and possible solutions, please read the following articles: [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException [ERROR] [ERROR] After correcting the problems, you can resume the build with the command [ERROR] mvn <args> -rf :hadoop-yarn-applications-catalog-webapp这里说是因为node版本不匹配,需要版本”>= 12.0.0”,但是我本地node版本是v12.16.3,而且我没有编译for YARN UI v2 building,应该用不到node,但是这里确实报了异常,通过查看目录hadoop-yarn-applications-catalog-webapp/target/node发现里面有yarn和node,所以文档上讲的有点问题,再打包hadoop-yarn-applications-catalog-webapp也用到了node,但是应该不是我本地的node应该是源码里依赖自带的,那么去对应项目里的pom查找,果然有yarn和node的依赖,我们将<nodeVersion>v8.11.3</nodeVersion>改为<nodeVersion>v12.16.3</nodeVersion>再打包,果然成功解决了上面的异常,并打包成功(我不确定有没有不改代码就可以解决这个异常并打包成功的方法)打包成功编译打包过程比较漫长,需要下载110个项目的依赖,中间下载依赖时可能还会卡住,这里我选择停掉命令重新打包,尝试几次后,编译打包成功[INFO] ------------------------------------------------------------------------ [INFO] Reactor Summary for Apache Hadoop Main 3.3.1: [INFO] [INFO] Apache Hadoop Main ................................. SUCCESS [ 6.283 s] [INFO] Apache Hadoop Build Tools .......................... SUCCESS [ 18.271 s] [INFO] Apache Hadoop Project POM .......................... SUCCESS [ 6.532 s] [INFO] Apache Hadoop Annotations .......................... SUCCESS [ 10.608 s] [INFO] Apache Hadoop Assemblies ........................... SUCCESS [ 0.749 s] [INFO] Apache Hadoop Project Dist POM ..................... SUCCESS [ 7.622 s] [INFO] Apache Hadoop Maven Plugins ........................ SUCCESS [ 19.506 s] [INFO] Apache Hadoop MiniKDC .............................. SUCCESS [ 7.987 s] [INFO] Apache Hadoop Auth ................................. SUCCESS [ 23.571 s] [INFO] Apache Hadoop Auth Examples ........................ SUCCESS [ 10.436 s] [INFO] Apache Hadoop Common ............................... SUCCESS [04:31 min] [INFO] Apache Hadoop NFS .................................. SUCCESS [ 15.377 s] [INFO] Apache Hadoop KMS .................................. SUCCESS [ 11.759 s] [INFO] Apache Hadoop Registry ............................. SUCCESS [ 14.969 s] [INFO] Apache Hadoop Common Project ....................... SUCCESS [ 0.206 s] [INFO] Apache Hadoop HDFS Client .......................... SUCCESS [01:35 min] [INFO] Apache Hadoop HDFS ................................. SUCCESS [03:40 min] [INFO] Apache Hadoop HDFS Native Client ................... SUCCESS [ 5.246 s] [INFO] Apache Hadoop HttpFS ............................... SUCCESS [ 16.972 s] [INFO] Apache Hadoop HDFS-NFS ............................. SUCCESS [ 9.375 s] [INFO] Apache Hadoop HDFS-RBF ............................. SUCCESS [01:13 min] [INFO] Apache Hadoop HDFS Project ......................... SUCCESS [ 0.184 s] [INFO] Apache Hadoop YARN ................................. SUCCESS [ 0.141 s] [INFO] Apache Hadoop YARN API ............................. SUCCESS [ 48.902 s] [INFO] Apache Hadoop YARN Common .......................... 
SUCCESS [01:51 min] [INFO] Apache Hadoop YARN Server .......................... SUCCESS [ 0.190 s] [INFO] Apache Hadoop YARN Server Common ................... SUCCESS [ 33.105 s] [INFO] Apache Hadoop YARN NodeManager ..................... SUCCESS [ 45.157 s] [INFO] Apache Hadoop YARN Web Proxy ....................... SUCCESS [ 7.523 s] [INFO] Apache Hadoop YARN ApplicationHistoryService ....... SUCCESS [ 12.931 s] [INFO] Apache Hadoop YARN Timeline Service ................ SUCCESS [ 11.880 s] [INFO] Apache Hadoop YARN ResourceManager ................. SUCCESS [01:07 min] [INFO] Apache Hadoop YARN Server Tests .................... SUCCESS [ 3.489 s] [INFO] Apache Hadoop YARN Client .......................... SUCCESS [ 16.235 s] [INFO] Apache Hadoop YARN SharedCacheManager .............. SUCCESS [ 8.134 s] [INFO] Apache Hadoop YARN Timeline Plugin Storage ......... SUCCESS [ 7.360 s] [INFO] Apache Hadoop YARN TimelineService HBase Backend ... SUCCESS [ 0.123 s] [INFO] Apache Hadoop YARN TimelineService HBase Common .... SUCCESS [ 12.865 s] [INFO] Apache Hadoop YARN TimelineService HBase Client .... SUCCESS [ 11.617 s] [INFO] Apache Hadoop YARN TimelineService HBase Servers ... SUCCESS [ 0.151 s] [INFO] Apache Hadoop YARN TimelineService HBase Server 1.2 SUCCESS [ 10.463 s] [INFO] Apache Hadoop YARN TimelineService HBase tests ..... SUCCESS [ 6.213 s] [INFO] Apache Hadoop YARN Router .......................... SUCCESS [ 10.472 s] [INFO] Apache Hadoop YARN TimelineService DocumentStore ... SUCCESS [ 7.838 s] [INFO] Apache Hadoop YARN Applications .................... SUCCESS [ 0.129 s] [INFO] Apache Hadoop YARN DistributedShell ................ SUCCESS [ 6.458 s] [INFO] Apache Hadoop YARN Unmanaged Am Launcher ........... SUCCESS [ 5.032 s] [INFO] Apache Hadoop MapReduce Client ..................... SUCCESS [ 0.474 s] [INFO] Apache Hadoop MapReduce Core ....................... SUCCESS [ 13.449 s] [INFO] Apache Hadoop MapReduce Common ..................... SUCCESS [ 18.252 s] [INFO] Apache Hadoop MapReduce Shuffle .................... SUCCESS [ 6.455 s] [INFO] Apache Hadoop MapReduce App ........................ SUCCESS [ 20.143 s] [INFO] Apache Hadoop MapReduce HistoryServer .............. SUCCESS [ 13.208 s] [INFO] Apache Hadoop MapReduce JobClient .................. SUCCESS [ 15.821 s] [INFO] Apache Hadoop Mini-Cluster ......................... SUCCESS [ 1.586 s] [INFO] Apache Hadoop YARN Services ........................ SUCCESS [ 0.095 s] [INFO] Apache Hadoop YARN Services Core ................... SUCCESS [ 8.248 s] [INFO] Apache Hadoop YARN Services API .................... SUCCESS [ 2.692 s] [INFO] Apache Hadoop YARN Application Catalog ............. SUCCESS [ 0.147 s] [INFO] Apache Hadoop YARN Application Catalog Webapp ...... SUCCESS [ 27.034 s] [INFO] Apache Hadoop YARN Application Catalog Docker Image SUCCESS [ 0.247 s] [INFO] Apache Hadoop YARN Application MaWo ................ SUCCESS [ 0.138 s] [INFO] Apache Hadoop YARN Application MaWo Core ........... SUCCESS [ 6.439 s] [INFO] Apache Hadoop YARN Site ............................ SUCCESS [ 0.164 s] [INFO] Apache Hadoop YARN Registry ........................ SUCCESS [ 0.849 s] [INFO] Apache Hadoop YARN UI .............................. SUCCESS [ 0.125 s] [INFO] Apache Hadoop YARN CSI ............................. SUCCESS [ 20.487 s] [INFO] Apache Hadoop YARN Project ......................... SUCCESS [ 24.698 s] [INFO] Apache Hadoop MapReduce HistoryServer Plugins ...... 
SUCCESS [ 5.055 s] [INFO] Apache Hadoop MapReduce NativeTask ................. SUCCESS [ 20.908 s] [INFO] Apache Hadoop MapReduce Uploader ................... SUCCESS [ 4.863 s] [INFO] Apache Hadoop MapReduce Examples ................... SUCCESS [ 11.329 s] [INFO] Apache Hadoop MapReduce ............................ SUCCESS [ 6.357 s] [INFO] Apache Hadoop MapReduce Streaming .................. SUCCESS [ 11.950 s] [INFO] Apache Hadoop Distributed Copy ..................... SUCCESS [ 12.106 s] [INFO] Apache Hadoop Client Aggregator .................... SUCCESS [ 6.302 s] [INFO] Apache Hadoop Dynamometer Workload Simulator ....... SUCCESS [ 9.199 s] [INFO] Apache Hadoop Dynamometer Cluster Simulator ........ SUCCESS [ 9.159 s] [INFO] Apache Hadoop Dynamometer Block Listing Generator .. SUCCESS [ 5.495 s] [INFO] Apache Hadoop Dynamometer Dist ..................... SUCCESS [ 11.781 s] [INFO] Apache Hadoop Dynamometer .......................... SUCCESS [ 0.093 s] [INFO] Apache Hadoop Archives ............................. SUCCESS [ 5.436 s] [INFO] Apache Hadoop Archive Logs ......................... SUCCESS [ 5.062 s] [INFO] Apache Hadoop Rumen ................................ SUCCESS [ 12.155 s] [INFO] Apache Hadoop Gridmix .............................. SUCCESS [ 9.009 s] [INFO] Apache Hadoop Data Join ............................ SUCCESS [ 5.023 s] [INFO] Apache Hadoop Extras ............................... SUCCESS [ 4.804 s] [INFO] Apache Hadoop Pipes ................................ SUCCESS [ 1.855 s] [INFO] Apache Hadoop OpenStack support .................... SUCCESS [ 8.414 s] [INFO] Apache Hadoop Amazon Web Services support .......... SUCCESS [ 35.861 s] [INFO] Apache Hadoop Kafka Library support ................ SUCCESS [ 4.439 s] [INFO] Apache Hadoop Azure support ........................ SUCCESS [ 22.346 s] [INFO] Apache Hadoop Aliyun OSS support ................... SUCCESS [ 6.827 s] [INFO] Apache Hadoop Scheduler Load Simulator ............. SUCCESS [ 12.319 s] [INFO] Apache Hadoop Resource Estimator Service ........... SUCCESS [ 9.552 s] [INFO] Apache Hadoop Azure Data Lake support .............. SUCCESS [ 6.276 s] [INFO] Apache Hadoop Image Generation Tool ................ SUCCESS [ 8.871 s] [INFO] Apache Hadoop Tools Dist ........................... SUCCESS [ 25.044 s] [INFO] Apache Hadoop Tools ................................ SUCCESS [ 0.075 s] [INFO] Apache Hadoop Client API ........................... SUCCESS [03:06 min] [INFO] Apache Hadoop Client Runtime ....................... SUCCESS [03:26 min] [INFO] Apache Hadoop Client Packaging Invariants .......... SUCCESS [ 1.225 s] [INFO] Apache Hadoop Client Test Minicluster .............. SUCCESS [05:37 min] [INFO] Apache Hadoop Client Packaging Invariants for Test . SUCCESS [ 0.509 s] [INFO] Apache Hadoop Client Packaging Integration Tests ... SUCCESS [ 27.535 s] [INFO] Apache Hadoop Distribution ......................... SUCCESS [01:52 min] [INFO] Apache Hadoop Client Modules ....................... SUCCESS [ 0.228 s] [INFO] Apache Hadoop Tencent COS Support .................. SUCCESS [01:03 min] [INFO] Apache Hadoop Cloud Storage ........................ SUCCESS [ 2.805 s] [INFO] Apache Hadoop Cloud Storage Project ................ 
SUCCESS [ 0.134 s] [INFO] ------------------------------------------------------------------------ [INFO] BUILD SUCCESS [INFO] ------------------------------------------------------------------------ [INFO] Total time: 45:20 min [INFO] Finished at: 2022-06-21T20:45:46+08:00 [INFO] ------------------------------------------------------------------------ 成功截图:
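为了方便复用,这里再给出一个简单的脚本草稿,把上文在centos7上编译打包的主要步骤串起来,仅作示意:脚本假设源码已经clone到当前目录的hadoop目录下,并且cmake、protobuf、openssl等依赖已按上文安装完成,分支名、路径请按自己环境调整。
#!/bin/bash
# 示意脚本:串联上文的编译打包步骤,路径、分支、版本请按自己环境调整
set -e
cd hadoop                      # 假设源码已clone到当前目录的hadoop目录下
git checkout branch-3.3.1      # 切换到要打包的版本分支
# 如果代码是在Windows上clone后再上传的,先转换脚本行结尾,避免$'\r'相关报错(见异常1)
dos2unix dev-support/bin/*
# 确认关键依赖版本:cmake>=3.1,protoc为3.7.1
cmake -version
protoc --version
mvn -version
# 编译打包:带native库、跳过测试、生成tar包(第一次会下载大量依赖,比较慢)
mvn clean package -Pdist,native -DskipTests -Dtar
其中dos2unix一步只有在Windows上clone再上传代码时才需要,pom里nodeVersion的修改参考上文异常2的说明。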
前言总结如何利用Hudi DeltaStreamer工具从外部数据源读取数据并写入新的Hudi表,HoodieDeltaStreamer是hudi-utilities-bundle的一部分,按照Apache Hudi 入门学习总结,将hudi-spark-bundle包拷贝至$SPARK_HOME/jars目录下即可。HoodieDeltaStreamer提供了从DFS或Kafka等不同来源进行摄取的方式,并具有以下功能。从Kafka单次摄取新事件,从Sqoop、HiveIncrementalPuller输出或DFS文件夹中的多个文件 增量导入支持json、avro或自定义记录类型的传入数据管理检查点,回滚和恢复利用DFS或Confluent schema注册表的Avro模式。支持自定义转换操作除了上述官网说的几项,也支持读取Hive表等(历史数据)转化Hudi表,源码里还有其他的工具类,可以自行查阅源码发掘命令行选项更详细地描述了这些功能:spark-submit --class org.apache.hudi.utilities.deltastreamer.HoodieDeltaStreamer --help Options: --master MASTER_URL spark://host:port, mesos://host:port, yarn, k8s://https://host:port, or local (Default: local[*]). --deploy-mode DEPLOY_MODE Whether to launch the driver program locally ("client") or on one of the worker machines inside the cluster ("cluster") (Default: client). --class CLASS_NAME Your application's main class (for Java / Scala apps). --name NAME A name of your application. --jars JARS Comma-separated list of jars to include on the driver and executor classpaths. --packages Comma-separated list of maven coordinates of jars to include on the driver and executor classpaths. Will search the local maven repo, then maven central and any additional remote repositories given by --repositories. The format for the coordinates should be groupId:artifactId:version. --exclude-packages Comma-separated list of groupId:artifactId, to exclude while resolving the dependencies provided in --packages to avoid dependency conflicts. --repositories Comma-separated list of additional remote repositories to search for the maven coordinates given with --packages. --py-files PY_FILES Comma-separated list of .zip, .egg, or .py files to place on the PYTHONPATH for Python apps. --files FILES Comma-separated list of files to be placed in the working directory of each executor. File paths of these files in executors can be accessed via SparkFiles.get(fileName). --conf PROP=VALUE Arbitrary Spark configuration property. --properties-file FILE Path to a file from which to load extra properties. If not specified, this will look for conf/spark-defaults.conf. --driver-memory MEM Memory for driver (e.g. 1000M, 2G) (Default: 1024M). --driver-java-options Extra Java options to pass to the driver. --driver-library-path Extra library path entries to pass to the driver. --driver-class-path Extra class path entries to pass to the driver. Note that jars added with --jars are automatically included in the classpath. --executor-memory MEM Memory per executor (e.g. 1000M, 2G) (Default: 1G). --proxy-user NAME User to impersonate when submitting the application. This argument does not work with --principal / --keytab. --help, -h Show this help message and exit. --verbose, -v Print additional debug output. --version, Print the version of current Spark. Cluster deploy mode only: --driver-cores NUM Number of cores used by the driver, only in cluster mode (Default: 1). Spark standalone or Mesos with cluster deploy mode only: --supervise If given, restarts the driver on failure. --kill SUBMISSION_ID If given, kills the driver specified. --status SUBMISSION_ID If given, requests the status of the driver specified. Spark standalone and Mesos only: --total-executor-cores NUM Total cores for all executors. Spark standalone and YARN only: --executor-cores NUM Number of cores per executor. (Default: 1 in YARN mode, or all available cores on the worker in standalone mode) YARN-only: --queue QUEUE_NAME The YARN queue to submit to (Default: "default"). --num-executors NUM Number of executors to launch (Default: 2). 
If dynamic allocation is enabled, the initial number of executors will be at least NUM. --archives ARCHIVES Comma separated list of archives to be extracted into the working directory of each executor. --principal PRINCIPAL Principal to be used to login to KDC, while running on secure HDFS. --keytab KEYTAB The full path to the file that contains the keytab for the principal specified above. This keytab will be copied to the node running the Application Master via the Secure Distributed Cache, for renewing the login tickets and the delegation tokens periodically.最新版本应该支持了更多参数,可以查阅官网:https://hudi.apache.org/cn/docs/hoodie_deltastreamerHive设置ambari设置:hive.resultset.use.unique.column.names=false,并重启SqlSource这里利用SqlSource 读取Hive历史表转化为Hudi表,先讲SqlSource的原因是其他几个类型的Source都需要提供表Schema相关的配置,比较麻烦,如JdbcbasedSchemaProvider需要配置jdbcUrl、user、password、table等或者FilebasedSchemaProvider需要提供一个Schema文件的地址如/path/source.avsc,无论是配置jdbc连接信息还是生成avsc文件都比较麻烦,所以想找一个不需要提供Schema的Source,通过搜索源码发现SqlSource可以满足这个需求,但是实际使用起来在0.9.0版本发现了bug,并不能直接使用,好在稍微修改一下对应的源码即可解决。当然还有其他不需要提供Schema的source,如ParquetDFSSource和CsvDFSSource,它们和SqlSource都是RowSource的子类,但是文件格式有限制,不如SqlSource通用,SqlSource只需要是Hive表即可,这也满足我们需要将Hive表转化为Hudi表的需求。创建Hive历史表create database test location '/test'; create table test.test_source ( id int, name string, price double, dt string, ts bigint insert into test.test_source values (105,'hudi', 10.0,'2021-05-05',100); Spark SQL创建Hudi目标表create database hudi location '/hudi'; create table hudi.test_hudi_target ( id int, name string, price double, ts long, dt string ) using hudi partitioned by (dt) options ( primaryKey = 'id', preCombineField = 'ts', type = 'cow' );这里事先用Spark SQL建表是因为虽然用HoodieDeltaStreamer时配置同步hive参数也可以自动建表,但是某些参数不生效,如hoodie.datasource.hive_sync.create_managed_table和hoodie.datasource.hive_sync.serde_properties,在properties配置和通过 --hoodie-conf 配置都不行,通过阅读源码,发现0.9.0版本不支持(应该属于bug),这样不满足我们要建内部表和主键表的需求,所以这里先用Spark SQL建表,再用HoodieDeltaStreamer转化数据。最新版本已支持这些参数,PR:https://github.com/apache/hudi/pull/4175配置文件COMMON.PROPERTIEShoodie.datasource.write.hive_style_partitioning=true hoodie.datasource.write.keygenerator.class=org.apache.hudi.keygen.ComplexKeyGenerator hoodie.datasource.hive_sync.use_jdbc=false hoodie.datasource.hive_sync.partition_extractor_class=org.apache.hudi.hive.MultiPartKeysValueExtractor SQL_SOURCE.PROPERTIESinclude=common.properties hoodie.datasource.write.recordkey.field=id hoodie.datasource.write.partitionpath.field=dt # 非分区表配置 hoodie.datasource.write.partitionpath.field= hoodie.deltastreamer.source.sql.sql.query = select * from test.test_source # 和同步Hive相关的配置 hoodie.datasource.hive_sync.table=test_hudi_target hoodie.datasource.hive_sync.database=hudi ## 非分区表可以不设置 hoodie.datasource.hive_sync.partition_fields=dt ## 内部表,默认外部表,0.9.0版本不支持 hoodie.datasource.hive_sync.create_managed_table = true ## 0.9.0版本不支持 hoodie.datasource.hive_sync.serde_properties = primaryKey=id命令spark-submit --conf "spark.sql.catalogImplementation=hive" \ --master yarn --deploy-mode client --executor-memory 2G --num-executors 3 --executor-cores 2 --driver-memory 4G --driver-cores 2 \ --principal spark/indata-10-110-105-163.indata.com@INDATA.COM --keytab /etc/security/keytabs/spark.service.keytab \ --class org.apache.hudi.utilities.deltastreamer.HoodieDeltaStreamer /usr/hdp/3.1.0.0-78/spark2/jars/hudi-utilities-bundle_2.11-0.9.0.jar \ --props file:///opt/dkl/sql_source.properties \ --target-base-path /hudi/test_hudi_target \ --target-table test_hudi_target \ --op BULK_INSERT \ --table-type COPY_ON_WRITE \ --source-ordering-field ts \ 
--source-class org.apache.hudi.utilities.sources.SqlSource \ --enable-sync \ --checkpoint earliest \ --hoodie-conf 'hoodie.datasource.hive_sync.create_managed_table = true' \ --hoodie-conf 'hoodie.datasource.hive_sync.serde_properties = primaryKey=id'enable-hive-sync和enable-sync都是开启同步Hive的,不过enable-hive-sync已弃用,建议用enable-sync这里需要加参数spark.sql.catalogImplementation=hive,因为源码里的Spark默认没有开启支持hive即enableHiveSupport,而enableHiveSupport的实现就是通过配置spark.sql.catalogImplementation=hive执行完后,查询目标表,可以发现数据已经从源表抽取到目标表了如果不加checkpoint(SqlSource从设计上不支持checkpoint,所以原则上不应该使用checkpoint参数),否则会有日志:No new data, source checkpoint has not changed. Nothing to commit. Old checkpoint=(Optional.empty). New Checkpoint=(null) 。这样不会抽取数据源码解读:if (Objects.equals(checkpointStr, resumeCheckpointStr.orElse(null))) { LOG.info("No new data, source checkpoint has not changed. Nothing to commit. Old checkpoint=(" + resumeCheckpointStr + "). New Checkpoint=(" + checkpointStr + ")"); return null; }当checkpointStr和resumeCheckpointStr相同时,则认为没有新的数据,checkpointStr是source里面的checkpoint,resumeCheckpointStr使我们根据配置从target里获取的checkpointStr = dataAndCheckpoint.getCheckpointForNextBatch() checkpointStr 在这里实际调用的是 SqlSource类里的 fetchNextBatch,而他的返回值写死为null return Pair.of(Option.of(source), null);resumeCheckpointStr的逻辑为当目标表为空时,返回cfg.checkpoint,具体的代码逻辑:(这里贴的为master最新代码,因为0.9.0版本的逻辑不如新版的清晰,大致逻辑是一样的)/** * Process previous commit metadata and checkpoint configs set by user to determine the checkpoint to resume from. * @param commitTimelineOpt commit timeline of interest. * @return the checkpoint to resume from if applicable. * @throws IOException private Option<String> getCheckpointToResume(Option<HoodieTimeline> commitTimelineOpt) throws IOException { Option<String> resumeCheckpointStr = Option.empty(); Option<HoodieInstant> lastCommit = commitTimelineOpt.get().lastInstant(); if (lastCommit.isPresent()) { // if previous commit metadata did not have the checkpoint key, try traversing previous commits until we find one. Option<HoodieCommitMetadata> commitMetadataOption = getLatestCommitMetadataWithValidCheckpointInfo(commitTimelineOpt.get()); if (commitMetadataOption.isPresent()) { HoodieCommitMetadata commitMetadata = commitMetadataOption.get(); LOG.debug("Checkpoint reset from metadata: " + commitMetadata.getMetadata(CHECKPOINT_RESET_KEY)); if (cfg.checkpoint != null && (StringUtils.isNullOrEmpty(commitMetadata.getMetadata(CHECKPOINT_RESET_KEY)) || !cfg.checkpoint.equals(commitMetadata.getMetadata(CHECKPOINT_RESET_KEY)))) { resumeCheckpointStr = Option.of(cfg.checkpoint); } else if (!StringUtils.isNullOrEmpty(commitMetadata.getMetadata(CHECKPOINT_KEY))) { //if previous checkpoint is an empty string, skip resume use Option.empty() resumeCheckpointStr = Option.of(commitMetadata.getMetadata(CHECKPOINT_KEY)); } else if (HoodieTimeline.compareTimestamps(HoodieTimeline.FULL_BOOTSTRAP_INSTANT_TS, HoodieTimeline.LESSER_THAN, lastCommit.get().getTimestamp())) { throw new HoodieDeltaStreamerException( "Unable to find previous checkpoint. Please double check if this table " + "was indeed built via delta streamer. Last Commit :" + lastCommit + ", Instants :" + commitTimelineOpt.get().getInstants().collect(Collectors.toList()) + ", CommitMetadata=" + commitMetadata.toJsonString()); // KAFKA_CHECKPOINT_TYPE will be honored only for first batch. 
if (!StringUtils.isNullOrEmpty(commitMetadata.getMetadata(CHECKPOINT_RESET_KEY))) { props.remove(KafkaOffsetGen.Config.KAFKA_CHECKPOINT_TYPE.key()); } else if (cfg.checkpoint != null) { // getLatestCommitMetadataWithValidCheckpointInfo(commitTimelineOpt.get()) will never return a commit metadata w/o any checkpoint key set. resumeCheckpointStr = Option.of(cfg.checkpoint); return resumeCheckpointStr; }所以我们加了--checkpoint earliest,但是这样的话SqlSource默认的只能抽取一次,如果多次抽取或用HoodieDeltaStreamer其他的增量抽取转化,则会抛异常:ERROR HoodieDeltaStreamer: Got error running delta sync once. Shutting down org.apache.hudi.utilities.exception.HoodieDeltaStreamerException: Unable to find previous checkpoint. Please double check if this table was indeed built via delta streamer. Last Commit :Option{val=[20220514205049__commit__COMPLETED]}, Instants :[[20220514205049__commit__COMPLETED]], CommitMetadata={ "partitionToWriteStats" : { "dt=2021-05-05" : [ { "fileId" : "487e265e-21f2-4830-9c54-e91bdae6e496-0", "path" : "dt=2021-05-05/487e265e-21f2-4830-9c54-e91bdae6e496-0_0-5-5_20220514205049.parquet", "prevCommit" : "null", "numWrites" : 1, "numDeletes" : 0, "numUpdateWrites" : 0, "numInserts" : 1, "totalWriteBytes" : 435208, "totalWriteErrors" : 0, "tempPath" : null, "partitionPath" : "dt=2021-05-05", "totalLogRecords" : 0, "totalLogFilesCompacted" : 0, "totalLogSizeCompacted" : 0, "totalUpdatedRecordsCompacted" : 0, "totalLogBlocks" : 0, "totalCorruptLogBlock" : 0, "totalRollbackBlocks" : 0, "fileSizeInBytes" : 435208, "minEventTime" : null, "maxEventTime" : null "compacted" : false, "extraMetadata" : { "schema" : "{\"type\":\"record\",\"name\":\"hoodie_source\",\"namespace\":\"hoodie.source\",\"fields\":[{\"name\":\"id\",\"type\":[\"null\",\"int\"],\"default\":null},{\"name\":\"name\",\"type\":[\"null\",\"string\"],\"default\":null},{\"name\":\"price\",\"type\":[\"null\",\"double\"],\"default\":null},{\"name\":\"dt\",\"type\":[\"null\",\"string\"],\"default\":null},{\"name\":\"ts\",\"type\":[\"null\",\"long\"],\"default\":null}]}", "deltastreamer.checkpoint.reset_key" : "earliest", "deltastreamer.checkpoint.key" : null "operationType" : "BULK_INSERT", "fileIdAndRelativePaths" : { "487e265e-21f2-4830-9c54-e91bdae6e496-0" : "dt=2021-05-05/487e265e-21f2-4830-9c54-e91bdae6e496-0_0-5-5_20220514205049.parquet" "totalRecordsDeleted" : 0, "totalLogRecordsCompacted" : 0, "totalLogFilesCompacted" : 0, "totalCompactedRecordsUpdated" : 0, "totalLogFilesSize" : 0, "totalScanTime" : 0, "totalCreateTime" : 0, "totalUpsertTime" : 0, "minAndMaxEventTime" : { "Optional.empty" : { "val" : null, "present" : false "writePartitionPaths" : [ "dt=2021-05-05" ] at org.apache.hudi.utilities.deltastreamer.DeltaSync.readFromSource(DeltaSync.java:347) at org.apache.hudi.utilities.deltastreamer.DeltaSync.syncOnce(DeltaSync.java:280) at org.apache.hudi.utilities.deltastreamer.HoodieDeltaStreamer.lambda$sync$2(HoodieDeltaStreamer.java:182) at org.apache.hudi.common.util.Option.ifPresent(Option.java:96) at org.apache.hudi.utilities.deltastreamer.HoodieDeltaStreamer.sync(HoodieDeltaStreamer.java:180) at org.apache.hudi.utilities.deltastreamer.HoodieDeltaStreamer.main(HoodieDeltaStreamer.java:509) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52) at 
org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:845) at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:161) at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:184) at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86) at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:920) at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:929) at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala) 这是因为虽然我们加了参数--checkpoint earliest,但是代码里将checkpoint的值写死为null,在异常信息里可以看到从commit获取到的commit元数据信息:"deltastreamer.checkpoint.reset_key" : "earliest", "deltastreamer.checkpoint.key" : null (最新版本如果为null,则不保存,即没有这个key),代码对应为类:org.apache.hudi.utilities.sources.SqlSourcereturn Pair.of(Option.of(source), null);checkpoint为null就不能再次使用HoodieDeltaStreamer增量写这个表了,要解决这个问题,只需要将代码改为: return Pair.of(Option.of(source), "0"); 代码我已经提交到https://gitee.com/dongkelun/hudi/commits/0.9.0,该分支也包含对0.9.0版本的其他修改,除了这个异常还有可能因Hive版本不一致抛出没有方法setQueryTimeout的异常,解决方法我也提交到该分支了,可以自己查看。最新版本(0.11)已经尝试修复这个问题,PR:https://github.com/apache/hudi/pull/3648,可以参考这个PR解决这个问题,基于这个PR,我们使用该PR新增的参数--allow-commit-on-no-checkpoint-change,就会跳过No new data, source checkpoint has not changed. Nothing to commit. Old checkpoint=(Optional.empty). New Checkpoint=(null),它是这样解释的: allow commits even if checkpoint has not changed before and after fetch data from source. This might be useful in sources like SqlSource where there is not checkpoint. And is not recommended to enable in continuous mode当从源获取数据时,即使checkpoint没有变化也允许commit,这对于像SqlSource这样没有checkpoint的很有用,但是不建议在continuous模式中使用,但是SqlSource不能使用--checkpoint,否则依旧会报上面的异常,所以我提了一个PR:https://github.com/apache/hudi/pull/5633 尝试解决这个问题,不知道社区会不会接受DFSSourceDistributed File System (DFS)历史数据DFS JSON转化,支持多种数据格式创建hive历史表,存储格式JSONcreate table test.test_source_json( id int, name string, price double, ts bigint, dt string row format serde 'org.apache.hive.hcatalog.data.JsonSerDe' STORED AS TEXTFILE;插入数据insert into test.test_source_json values (1,'hudi', 10.0,100,'2021-05-05'); 配置文件dfs_source.properties这里演示非分区表include=common.properties hoodie.datasource.write.recordkey.field=id hoodie.datasource.write.partitionpath.field= hoodie.deltastreamer.source.dfs.root=/test/test_source_json hoodie.deltastreamer.schemaprovider.source.schema.jdbc.connection.url=jdbc:hive2://10.110.105.163:10000/hudi;principal=hive/indata-10-110-105-163.indata.com@INDATA.COM hoodie.deltastreamer.schemaprovider.source.schema.jdbc.driver.type=org.apache.hive.jdbc.HiveDriver hoodie.deltastreamer.schemaprovider.source.schema.jdbc.username=user hoodie.deltastreamer.schemaprovider.source.schema.jdbc.password=password hoodie.deltastreamer.schemaprovider.source.schema.jdbc.dbtable=test.test_source_json hoodie.deltastreamer.schemaprovider.source.schema.jdbc.timeout=100 hoodie.datasource.hive_sync.table=test_hudi_target_json hoodie.datasource.hive_sync.database=hudi命令spark-submit --principal hive/indata-10-110-105-163.indata.com@INDATA.COM --keytab /etc/security/keytabs/hive.service.keytab \ --class org.apache.hudi.utilities.deltastreamer.HoodieDeltaStreamer /usr/hdp/3.1.0.0-78/spark2/jars/hudi-utilities-bundle_2.11-0.9.0.jar \ --props file:///opt/dkl/dfs_source.properties \ --target-base-path /hudi/test_hudi_target_json \ --target-table test_hudi_target_json \ --op BULK_INSERT \ --table-type COPY_ON_WRITE \ --schemaprovider-class org.apache.hudi.utilities.schema.JdbcbasedSchemaProvider \ 
--enable-sync这里没有指定--source-class的原因是,它的默认值就是JsonDFSSource用JdbcbasedSchemaProvider获取Schema的原因是因为我对于生成avsc文件没有经验,两者选其一,所以选择了通过配置jdbc的形式获取Schema当然这里在0.9.0版本如果需要内部表的话也需要和上面讲的一样事先用SparkSQL建表,0.11.0版本直接配置参数即可默认的代码读取Hive表Schema是有异常的,异常如下Exception in thread "main" org.apache.hudi.exception.HoodieException: Failed to get Schema through jdbc. at org.apache.hudi.utilities.schema.JdbcbasedSchemaProvider.getSourceSchema(JdbcbasedSchemaProvider.java:81) at org.apache.hudi.utilities.schema.SchemaProviderWithPostProcessor.getSourceSchema(SchemaProviderWithPostProcessor.java:42) at org.apache.hudi.utilities.deltastreamer.DeltaSync.registerAvroSchemas(DeltaSync.java:860) at org.apache.hudi.utilities.deltastreamer.DeltaSync.<init>(DeltaSync.java:235) at org.apache.hudi.utilities.deltastreamer.HoodieDeltaStreamer$DeltaSyncService.<init>(HoodieDeltaStreamer.java:654) at org.apache.hudi.utilities.deltastreamer.HoodieDeltaStreamer.<init>(HoodieDeltaStreamer.java:143) at org.apache.hudi.utilities.deltastreamer.HoodieDeltaStreamer.<init>(HoodieDeltaStreamer.java:116) at org.apache.hudi.utilities.deltastreamer.HoodieDeltaStreamer.main(HoodieDeltaStreamer.java:553) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52) at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:845) at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:161) at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:184) at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86) at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:920) at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:929) at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala) Caused by: org.apache.hudi.exception.HoodieException: test.test_source_json table does not exists! at org.apache.hudi.utilities.UtilHelpers.getJDBCSchema(UtilHelpers.java:439) at org.apache.hudi.utilities.schema.JdbcbasedSchemaProvider.getSourceSchema(JdbcbasedSchemaProvider.java:79) ... 
19 more
第一个异常是:table does not exists!,当然根本原因并不是表不存在,需要修改源码,代码我已经提交到https://gitee.com/dongkelun/hudi/commits/0.9.0,关于这个异常主要修改了两个地方,一个是因为Hive版本不一致抛出没有方法setQueryTimeout的异常,这里直接把调用setQueryTimeout方法的两个地方都删掉了;还有一个地方,是原来的tableExists如果遇到异常,直接返回false,后面的逻辑如果返回false,直接抛出异常table does not exists!,这样不能分析根本原因,因为还有其他原因造成的异常,比如kerberos权限问题,这里改成直接打印异常信息,方便分析原因,关于这一点我已经提交了PR:https://github.com/apache/hudi/pull/5827
原代码
private static Boolean tableExists(Connection conn, Map<String, String> options) {
  JdbcDialect dialect = JdbcDialects.get(options.get(JDBCOptions.JDBC_URL()));
  try (PreparedStatement statement = conn.prepareStatement(dialect.getTableExistsQuery(options.get(JDBCOptions.JDBC_TABLE_NAME())))) {
    statement.setQueryTimeout(Integer.parseInt(options.get(JDBCOptions.JDBC_QUERY_TIMEOUT())));
    statement.executeQuery();
  } catch (SQLException e) {
    return false;
  }
  return true;
}
修改后:
private static Boolean tableExists(Connection conn, Map<String, String> options) {
  JdbcDialect dialect = JdbcDialects.get(options.get(JDBCOptions.JDBC_URL()));
  try (PreparedStatement statement = conn.prepareStatement(dialect.getTableExistsQuery(options.get(JDBCOptions.JDBC_TABLE_NAME())))) {
    statement.setQueryTimeout(Integer.parseInt(options.get(JDBCOptions.JDBC_QUERY_TIMEOUT())));
    statement.executeQuery();
  } catch (SQLException e) {
    e.printStackTrace();
  }
  return true;
}
下面的这个异常,只需要修改Hive配置:hive.resultset.use.unique.column.names=false,关于这一点,我已经在Apache Hudi 入门学习总结提到过了
Exception in thread "main" org.apache.hudi.exception.HoodieException: Failed to get Schema through jdbc. at org.apache.hudi.utilities.schema.JdbcbasedSchemaProvider.getSourceSchema(JdbcbasedSchemaProvider.java:81) at org.apache.hudi.utilities.schema.SchemaProviderWithPostProcessor.getSourceSchema(SchemaProviderWithPostProcessor.java:42) at org.apache.hudi.utilities.deltastreamer.DeltaSync.registerAvroSchemas(DeltaSync.java:730) at org.apache.hudi.utilities.deltastreamer.DeltaSync.<init>(DeltaSync.java:220) at org.apache.hudi.utilities.deltastreamer.HoodieDeltaStreamer$DeltaSyncService.<init>(HoodieDeltaStreamer.java:606) at org.apache.hudi.utilities.deltastreamer.HoodieDeltaStreamer.<init>(HoodieDeltaStreamer.java:143) at org.apache.hudi.utilities.deltastreamer.HoodieDeltaStreamer.<init>(HoodieDeltaStreamer.java:107) at org.apache.hudi.utilities.deltastreamer.HoodieDeltaStreamer.main(HoodieDeltaStreamer.java:509) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52) at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:845) at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:161) at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:184) at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86) at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:920) at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:929) at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala) Caused by: org.apache.avro.SchemaParseException: Illegal character in: test_source_json.id at org.apache.avro.Schema.validateName(Schema.java:1151) at org.apache.avro.Schema.access$200(Schema.java:81) at org.apache.avro.Schema$Field.<init>(Schema.java:403) at
org.apache.avro.SchemaBuilder$FieldBuilder.completeField(SchemaBuilder.java:2124) at org.apache.avro.SchemaBuilder$FieldBuilder.completeField(SchemaBuilder.java:2120) at org.apache.avro.SchemaBuilder$FieldBuilder.access$5200(SchemaBuilder.java:2034) at org.apache.avro.SchemaBuilder$GenericDefault.noDefault(SchemaBuilder.java:2417) at org.apache.hudi.spark.org.apache.spark.sql.avro.SchemaConverters$$anonfun$5.apply(SchemaConverters.scala:177) at org.apache.hudi.spark.org.apache.spark.sql.avro.SchemaConverters$$anonfun$5.apply(SchemaConverters.scala:174) at scala.collection.Iterator$class.foreach(Iterator.scala:891) at scala.collection.AbstractIterator.foreach(Iterator.scala:1334) at scala.collection.IterableLike$class.foreach(IterableLike.scala:72) at org.apache.spark.sql.types.StructType.foreach(StructType.scala:99) at org.apache.hudi.spark.org.apache.spark.sql.avro.SchemaConverters$.toAvroType(SchemaConverters.scala:174) at org.apache.hudi.AvroConversionUtils$.convertStructTypeToAvroSchema(AvroConversionUtils.scala:63) at org.apache.hudi.AvroConversionUtils.convertStructTypeToAvroSchema(AvroConversionUtils.scala) at org.apache.hudi.utilities.UtilHelpers.getJDBCSchema(UtilHelpers.java:388) at org.apache.hudi.utilities.schema.JdbcbasedSchemaProvider.getSourceSchema(JdbcbasedSchemaProvider.java:79) ... 19 moreKafkaSourceHoodieDeltaStreamer支持两种Kafka格式的数据Avro和Json,分别对应AvroKafkaSource和JsonKafkaSource,这里为了方便造数,以JsonKafkaSource为例Kafka配置文件KAFKA_CLIENT_JAAS.CONFKafkaClient { com.sun.security.auth.module.Krb5LoginModule required useTicketCache=false useKeyTab=true keyTab="./kafka.service.keytab" principal="kafka/indata-10-110-105-163.indata.com@INDATA.COM" serviceName="kafka" storeKey=true renewTicket=true; };PRODUCER.PROPERTIES这个配置是为了往kafka里造数security.protocol=SASL_PLAINTEXT sasl.kerberos.service.name=kafka sasl.mechanism=GSSAPI造数先kinit认证kerberoskinit -kt /etc/security/keytabs/kafka.service.keytab kafka/indata-10-110-105-163.indata.com@INDATA.COM /usr/hdp/3.1.0.0-78/kafka/bin/kafka-console-producer.sh --broker-list indata-10-110-105-163.indata.com:6667 --topic test_hudi_target_topic --producer.config=producer.properties {"id":1,"name":"hudi","price":11.0,"ts":100,"dt":"2021-05-05"} {"id":2,"name":"hudi","price":12.0,"ts":100,"dt":"2021-05-05"} {"id":3,"name":"hudi","price":13.0,"ts":100,"dt":"2021-05-06"} 消费命令行消费topic验证数据是否成功写到对应的topic/usr/hdp/3.1.0.0-78/kafka/bin/kafka-console-consumer.sh --bootstrap-server indata-10-110-105-163.indata.com:6667 --from-beginning --topic test_hudi_target_topic --group dkl_hudi --consumer-property security.protocol=SASL_PLAINTEXT {"id":1,"name":"hudi","price":11.0,"ts":100,"dt":"2021-05-05"} {"id":2,"name":"hudi","price":12.0,"ts":100,"dt":"2021-05-05"} {"id":3,"name":"hudi","price":13.0,"ts":100,"dt":"2021-05-06"}Hudi配置文件KAFKA_SOURCE.PROPERTIESinclude=common.properties hoodie.datasource.write.recordkey.field=id hoodie.datasource.write.partitionpath.field=dt hoodie.deltastreamer.source.kafka.topic=test_hudi_target_topic bootstrap.servers=indata-10-110-105-162.indata.com:6667,indata-10-110-105-163.indata.com:6667,indata-10-110-105-164.indata.com:6667 auto.offset.reset=earliest group.id=dkl_hudi security.protocol=SASL_PLAINTEXT hoodie.deltastreamer.schemaprovider.source.schema.jdbc.connection.url=jdbc:hive2://10.110.105.163:10000/default;principal=hive/indata-10-110-105-163.indata.com@INDATA.COM hoodie.deltastreamer.schemaprovider.source.schema.jdbc.driver.type=org.apache.hive.jdbc.HiveDriver hoodie.deltastreamer.schemaprovider.source.schema.jdbc.username=user 
hoodie.deltastreamer.schemaprovider.source.schema.jdbc.password=password hoodie.deltastreamer.schemaprovider.source.schema.jdbc.dbtable=test.test_source_json hoodie.deltastreamer.schemaprovider.source.schema.jdbc.timeout=100 hoodie.datasource.hive_sync.table=test_hudi_target_kafka hoodie.datasource.hive_sync.database=hudi hoodie.datasource.hive_sync.partition_fields=dtkafkaSource和dfsSource一样也需要提供表Schema,由于这里读取kafka,而没有源表,这里从上面dfsSource建的表test.test_source_json读取schema,从哪个表读取Schema都行,只要表结构一致即可命令spark-submit --principal hive/indata-10-110-105-163.indata.com@INDATA.COM --keytab /etc/security/keytabs/hive.service.keytab \ --files ./kafka_client_jaas.conf,./kafka.service.keytab \ --conf "spark.executor.extraJavaOptions=-Djava.security.auth.login.config=./kafka_client_jaas.conf" \ --driver-java-options "-Djava.security.auth.login.config=./kafka_client_jaas.conf" \ --class org.apache.hudi.utilities.deltastreamer.HoodieDeltaStreamer /usr/hdp/3.1.0.0-78/spark2/jars/hudi-utilities-bundle_2.11-0.9.0.jar \ --props file:///opt/dkl/kafka_source.properties \ --target-base-path /hudi/test_hudi_target_kafka \ --target-table test_hudi_target_kafka \ --op UPSERT \ --table-type COPY_ON_WRITE \ --schemaprovider-class org.apache.hudi.utilities.schema.JdbcbasedSchemaProvider \ --source-class org.apache.hudi.utilities.sources.JsonKafkaSource \ --enable-sync上面的都是一次性读取转化,kafka也可以连续模式读取增量数据,通过参数--continuous,即:spark-submit --principal hive/indata-10-110-105-163.indata.com@INDATA.COM --keytab /etc/security/keytabs/hive.service.keytab \ --files ./kafka_client_jaas.conf,./kafka.service.keytab \ --conf "spark.executor.extraJavaOptions=-Djava.security.auth.login.config=./kafka_client_jaas.conf" \ --driver-java-options "-Djava.security.auth.login.config=./kafka_client_jaas.conf" \ --class org.apache.hudi.utilities.deltastreamer.HoodieDeltaStreamer /usr/hdp/3.1.0.0-78/spark2/jars/hudi-utilities-bundle_2.11-0.9.0.jar \ --props file:///opt/dkl/kafka_source.properties \ --target-base-path /hudi/test_hudi_target_kafka \ --target-table test_hudi_target_kafka \ --op UPSERT \ --table-type COPY_ON_WRITE \ --schemaprovider-class org.apache.hudi.utilities.schema.JdbcbasedSchemaProvider \ --source-class org.apache.hudi.utilities.sources.JsonKafkaSource \ --enable-sync \ --continuous连续模式默认间隔0s即没有间隔连续性的读取checkpoint判断kafka(和offset对比)里是否有增量,可以通过参数--min-sync-interval-seconds来修改间隔,比如 –min-sync-interval-seconds 60,设置60s读取一次我们可以往kafka topic里再造几条josn数据,进行验证,是否可以正常读取增量数据多表转化博客参考: https://hudi.apache.org/blog/2020/08/22/ingest-multiple-tables-using-hudi/以kafka json示例,首选创建两个用于获取schema的空表,test.test_source_json_1,test.test_source_json_2,然后创建两个kafka topic并往里造数test_hudi_target_topic_1,test_hudi_target_topic_2,最后通过HoodieMultiTableDeltaStreamer往两个Hudi表test_hudi_target_kafka_1,test_hudi_target_kafka_2写数据配置文件KAFKA_SOURCE_MULTI_TABLE.PROPERTIEShoodie.deltastreamer.ingestion.tablesToBeIngested=hudi.test_hudi_target_kafka_1,hudi.test_hudi_target_kafka_2 hoodie.deltastreamer.ingestion.hudi.test_hudi_target_kafka_1.configFile=file:///opt/dkl/multi/config_table_1.properties hoodie.deltastreamer.ingestion.hudi.test_hudi_target_kafka_2.configFile=file:///opt/dkl/multi/config_table_2.properties #Kafka props bootstrap.servers=indata-10-110-105-162.indata.com:6667,indata-10-110-105-163.indata.com:6667,indata-10-110-105-164.indata.com:6667 auto.offset.reset=earliest group.id=dkl_hudi security.protocol=SASL_PLAINTEXTCONFIG_TABLE_1.PROPERTIESinclude=common.properties hoodie.datasource.write.recordkey.field=id hoodie.datasource.write.partitionpath.field=dt 
hoodie.deltastreamer.source.kafka.topic=test_hudi_target_topic_1 hoodie.deltastreamer.schemaprovider.source.schema.jdbc.connection.url=jdbc:hive2://10.110.105.163:10000/default;principal=hive/indata-10-110-105-163.indata.com@INDATA.COM hoodie.deltastreamer.schemaprovider.source.schema.jdbc.driver.type=org.apache.hive.jdbc.HiveDriver hoodie.deltastreamer.schemaprovider.source.schema.jdbc.username=user hoodie.deltastreamer.schemaprovider.source.schema.jdbc.password=password hoodie.deltastreamer.schemaprovider.source.schema.jdbc.dbtable=test.test_source_json_1 hoodie.deltastreamer.schemaprovider.source.schema.jdbc.timeout=100 hoodie.datasource.hive_sync.table=test_hudi_target_kafka_1 hoodie.datasource.hive_sync.database=hudi hoodie.datasource.hive_sync.partition_fields=dtCONFIG_TABLE_2.PROPERTIESinclude=common.properties hoodie.datasource.write.recordkey.field=id hoodie.datasource.write.partitionpath.field=dt hoodie.deltastreamer.source.kafka.topic=test_hudi_target_topic_2 hoodie.deltastreamer.schemaprovider.source.schema.jdbc.connection.url=jdbc:hive2://10.110.105.163:10000/default;principal=hive/indata-10-110-105-163.indata.com@INDATA.COM hoodie.deltastreamer.schemaprovider.source.schema.jdbc.driver.type=org.apache.hive.jdbc.HiveDriver hoodie.deltastreamer.schemaprovider.source.schema.jdbc.username=user hoodie.deltastreamer.schemaprovider.source.schema.jdbc.password=password hoodie.deltastreamer.schemaprovider.source.schema.jdbc.dbtable=test.test_source_json_2 hoodie.deltastreamer.schemaprovider.source.schema.jdbc.timeout=100 hoodie.datasource.hive_sync.table=test_hudi_target_kafka_2 hoodie.datasource.hive_sync.database=hudi hoodie.datasource.hive_sync.partition_fields=dtspark-submit --principal hive/indata-10-110-105-163.indata.com@INDATA.COM --keytab /etc/security/keytabs/hive.service.keytab \ --files "./kafka_client_jaas.conf,./kafka.service.keytab" \ --conf "spark.executor.extraJavaOptions=-Djava.security.auth.login.config=./kafka_client_jaas.conf" \ --driver-java-options "-Djava.security.auth.login.config=./kafka_client_jaas.conf" \ --class org.apache.hudi.utilities.deltastreamer.HoodieMultiTableDeltaStreamer /usr/hdp/3.1.0.0-78/spark2/jars/hudi-utilities-bundle_2.11-0.9.0.jar \ --props file:///opt/dkl/multi/kafka_source_multi_table.properties \ --base-path-prefix / \ --config-folder file:///opt/dkl/multi \ --target-table test_hudi_target_kafka_1 \ --op UPSERT \ --table-type COPY_ON_WRITE \ --schemaprovider-class org.apache.hudi.utilities.schema.JdbcbasedSchemaProvider \ --source-class org.apache.hudi.utilities.sources.JsonKafkaSource \ --enable-hive-sync注意这里--target-table是必填项,我们填其中一个表名即可,不过我认为不应该设置为必填,因为两个表名已经在配置文件中了还有0.9.0版本同步hive只有--enable-hive-sync参数没有--enable-sync,在最新版本里是有这个参数的,但是新版中--enable-hive-sync并没有弃用,可见HoodieMultiTableDeltaStreamer和HoodieDeltaStreamer对于相同的参数并没有保持一致,可能用的人不多,贡献的也就不多。关于是否可以去掉--target-table参数的问题我已经提交了PR:https://github.com/apache/hudi/pull/5883这里我们并没有指定target-base-path,那么程序又是怎么知道表路径是什么呢,通过阅读源码发现,表路径为:String targetBasePath = basePathPrefix + "/" + database + "/" + tableName; 具体代码在方法resetTarget,因为我们创建的数据库路径为/hudi,所以这里--base-path-prefix的值为/执行上面的命令检查是否成功从每个topic里读取数据并写到对应的表中HiveSchemaProvider对应类:org.apache.hudi.utilities.schema.HiveSchemaProvider上面介绍到用SqlSource的原因主要是可以不用提供为了获取Schema的jdbc Url等信息,但是SqlSource本身存在这一些问题,而其他的则要提供jdbc Url等信息配置起来麻烦,比如DFSSource KafkaSource等,而读取Kafka中的增量也不能用SqlSource,SqlSource只能用来转换一次增量数据,在0.9.0版本读取增量只能配置jdbc相关的参数来获取Schema,而且默认的代码读取Hive 
Schema还有bug或者不通用(因Hive版本不一致抛出没有方法setQueryTimeout的异常),只能自己该代码编译后才能使用,而0.11.0版本新增了HiveSchemaProvider,应该可以只指定库名表名就可以获取Schema信息了,具体如何使用等有时间我尝试一下再更新HiveSchemaProvider提供了四个参数:private static final String SOURCE_SCHEMA_DATABASE_PROP = "hoodie.deltastreamer.schemaprovider.source.schema.hive.database"; private static final String SOURCE_SCHEMA_TABLE_PROP = "hoodie.deltastreamer.schemaprovider.source.schema.hive.table"; private static final String TARGET_SCHEMA_DATABASE_PROP = "hoodie.deltastreamer.schemaprovider.target.schema.hive.database"; private static final String TARGET_SCHEMA_TABLE_PROP = "hoodie.deltastreamer.schemaprovider.target.schema.hive.table";分别为sourceSchema数据库名称、sourceSchema表名、targetSchema数据库名称、targetSchema表名,其中targetSchema对应的配置是可选的,当没有配置targetSchema,默认targetSchema等于sourceSchema,这个逻辑也同样适用于其他的SchemaProvider,比如上面示例中的JdbcbasedSchemaProvider,只不过JdbcbasedSchemaProvider并没有targetSchema的配置参数,只有sourceSchema的参数配置参数我们以上面的JsonDFSSource为例json_dfs_source.properties这里演示非分区表include=common.properties hoodie.datasource.write.recordkey.field=id hoodie.datasource.write.partitionpath.field= hoodie.deltastreamer.source.dfs.root=/test/test_source_json # 通过HiveSchemaProvider获取Schema hoodie.deltastreamer.schemaprovider.source.schema.hive.database=test hoodie.deltastreamer.schemaprovider.source.schema.hive.table=test_source_json hoodie.datasource.hive_sync.table=test_hudi_target_json_2 hoodie.datasource.hive_sync.database=hudi命令spark-submit --conf "spark.sql.catalogImplementation=hive" \ --principal hive/indata-10-110-105-163.indata.com@INDATA.COM --keytab /etc/security/keytabs/hive.service.keytab \ --class org.apache.hudi.utilities.deltastreamer.HoodieDeltaStreamer /usr/hdp/3.1.0.0-78/spark2/jars/hudi-utilities-bundle_2.11-0.11.0.jar \ --props file:///opt/dkl/json_dfs_source.properties \ --target-base-path /hudi/test_hudi_target_json_2 \ --target-table test_hudi_target_json_2 \ --op BULK_INSERT \ --table-type COPY_ON_WRITE \ --schemaprovider-class org.apache.hudi.utilities.schema.HiveSchemaProvider \ --enable-sync我们先把jar升到0.11.0版本及以上,然后执行上面的命令,可能会报下面的异常,原因是因为HiveSchemaProvider,在获取json格式的表时需要用到hive-hcatalog-core.jar,我们去hive lib下面执行ls | grep hcatalog-core,找到该jar包,然后将jar拷贝至spark jars目录下再执行,就可以成功读取表schema并将json数据转为Hudi目标表。22/06/15 16:12:09 ERROR log: error in initSerDe: java.lang.ClassNotFoundException Class org.apache.hive.hcatalog.data.JsonSerDe not found java.lang.ClassNotFoundException: Class org.apache.hive.hcatalog.data.JsonSerDe not found at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2500) at org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:385) at org.apache.hadoop.hive.ql.metadata.Table.getDeserializerFromMetaStore(Table.java:276) at org.apache.hadoop.hive.ql.metadata.Table.getDeserializer(Table.java:258) at org.apache.hadoop.hive.ql.metadata.Table.getCols(Table.java:605) at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getTableOption$1$$anonfun$apply$7.apply(HiveClientImpl.scala:373) at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getTableOption$1$$anonfun$apply$7.apply(HiveClientImpl.scala:370) at scala.Option.map(Option.scala:146) at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getTableOption$1.apply(HiveClientImpl.scala:370) at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getTableOption$1.apply(HiveClientImpl.scala:368) at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$withHiveState$1.apply(HiveClientImpl.scala:277) at 
org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:215) at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:214) at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:260) at org.apache.spark.sql.hive.client.HiveClientImpl.getTableOption(HiveClientImpl.scala:368) at org.apache.spark.sql.hive.client.HiveClient$class.getTable(HiveClient.scala:81) at org.apache.spark.sql.hive.client.HiveClientImpl.getTable(HiveClientImpl.scala:84) at org.apache.spark.sql.hive.HiveExternalCatalog.getRawTable(HiveExternalCatalog.scala:118) at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$getTable$1.apply(HiveExternalCatalog.scala:700) at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$getTable$1.apply(HiveExternalCatalog.scala:700) at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97) at org.apache.spark.sql.hive.HiveExternalCatalog.getTable(HiveExternalCatalog.scala:699) at org.apache.spark.sql.catalyst.catalog.ExternalCatalogWithListener.getTable(ExternalCatalogWithListener.scala:138) at org.apache.spark.sql.catalyst.catalog.SessionCatalog.getTableMetadata(SessionCatalog.scala:434) at org.apache.hudi.utilities.schema.HiveSchemaProvider.<init>(HiveSchemaProvider.java:65) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hudi.common.util.ReflectionUtils.loadClass(ReflectionUtils.java:89) at org.apache.hudi.common.util.ReflectionUtils.loadClass(ReflectionUtils.java:118) at org.apache.hudi.utilities.UtilHelpers.createSchemaProvider(UtilHelpers.java:155) at org.apache.hudi.utilities.deltastreamer.HoodieDeltaStreamer$DeltaSyncService.<init>(HoodieDeltaStreamer.java:656) at org.apache.hudi.utilities.deltastreamer.HoodieDeltaStreamer.<init>(HoodieDeltaStreamer.java:143) at org.apache.hudi.utilities.deltastreamer.HoodieDeltaStreamer.<init>(HoodieDeltaStreamer.java:116) at org.apache.hudi.utilities.deltastreamer.HoodieDeltaStreamer.main(HoodieDeltaStreamer.java:557) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52) at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:845) at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:161) at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:184) at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86) at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:920) at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:929) at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala) 22/06/15 16:12:09 ERROR Table: Unable to get field from serde: org.apache.hive.hcatalog.data.JsonSerDe java.lang.RuntimeException: MetaException(message:java.lang.ClassNotFoundException Class org.apache.hive.hcatalog.data.JsonSerDe not 
found)通过示例中的配置可以看到利用HiveSchemaProvider获取schema时的配置比较简单,方便使用,如果有targetSchema和sourceShcema不一致的需求,大家可以通过配置targetSchema的库名表名自己尝试。对于其他类型的Source,大家觉得HiveSchemaProvider比较方便的话,也可以自行修改配置参数等。总结本文主要总结了Hudi DeltaStreamer的使用,以及遇到的各种问题,给出了解决方法,主要是使用该工具类读取历史表并转化为Hudi表以及读取增量数据写入Hudi表,当然也支持从关系型数据库读取表数据同步到Hudi表中,本文没有作出示例,由于问题较多,写的稍微乱一点,后面应该还会再写一篇整理一下,并且会从原理、源码层面进行总结,不过示例可能涉及会比较少。更新2022-09-02:hudi 0.12.0版本,同步hive时异常:2/09/02 17:40:33 ERROR HoodieDeltaStreamer: Got error running delta sync once. Shutting down org.apache.hudi.exception.HoodieException: Could not sync using the meta sync class org.apache.hudi.hive.HiveSyncTool at org.apache.hudi.sync.common.util.SyncUtilHelpers.runHoodieMetaSync(SyncUtilHelpers.java:58) at org.apache.hudi.utilities.deltastreamer.DeltaSync.runMetaSync(DeltaSync.java:716) at org.apache.hudi.utilities.deltastreamer.DeltaSync.writeToSink(DeltaSync.java:634) at org.apache.hudi.utilities.deltastreamer.DeltaSync.syncOnce(DeltaSync.java:335) at org.apache.hudi.utilities.deltastreamer.HoodieDeltaStreamer.lambda$sync$2(HoodieDeltaStreamer.java:201) at org.apache.hudi.common.util.Option.ifPresent(Option.java:97) at org.apache.hudi.utilities.deltastreamer.HoodieDeltaStreamer.sync(HoodieDeltaStreamer.java:199) at org.apache.hudi.utilities.deltastreamer.HoodieDeltaStreamer.main(HoodieDeltaStreamer.java:557) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52) at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:951) at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180) at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203) at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90) at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1039) at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1048) at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala) Caused by: java.lang.NoClassDefFoundError: org/apache/calcite/rel/type/RelDataTypeSystem at org.apache.hadoop.hive.ql.parse.SemanticAnalyzerFactory.get(SemanticAnalyzerFactory.java:318) at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:484) at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1317) at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1457) at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1237) at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1227) at org.apache.hudi.hive.ddl.HiveQueryDDLExecutor.updateHiveSQLs(HiveQueryDDLExecutor.java:95) at org.apache.hudi.hive.ddl.HiveQueryDDLExecutor.runSQL(HiveQueryDDLExecutor.java:86) at org.apache.hudi.hive.ddl.QueryBasedDDLExecutor.createTable(QueryBasedDDLExecutor.java:92) at org.apache.hudi.hive.HoodieHiveSyncClient.createTable(HoodieHiveSyncClient.java:152) at org.apache.hudi.hive.HiveSyncTool.syncSchema(HiveSyncTool.java:279) at org.apache.hudi.hive.HiveSyncTool.syncHoodieTable(HiveSyncTool.java:219) at org.apache.hudi.hive.HiveSyncTool.doSync(HiveSyncTool.java:153) at org.apache.hudi.hive.HiveSyncTool.syncHoodieTable(HiveSyncTool.java:141) at org.apache.hudi.sync.common.util.SyncUtilHelpers.runHoodieMetaSync(SyncUtilHelpers.java:56) ... 
19 more Caused by: java.lang.ClassNotFoundException: org.apache.calcite.rel.type.RelDataTypeSystem at java.net.URLClassLoader.findClass(URLClassLoader.java:381) at java.lang.ClassLoader.loadClass(ClassLoader.java:424) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349) at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
Fix: the Calcite jars already ship with Hive, so it is enough to copy calcite-core from Hive's lib directory into Spark's jars directory:
ls hive/lib | grep cal
calcite-core-1.16.0.3.1.0.0-78.jar
calcite-druid-1.16.0.3.1.0.0-78.jar
calcite-linq4j-1.16.0.3.1.0.0-78.jar
cp calcite-core-1.16.0.3.1.0.0-78.jar /usr/hdp/3.1.0.0-78/spark2/jars/
前言总结Hudi Spark SQL的使用,本人仍然以Hudi0.9.0版本为例,也会稍微提及最新版的一些改动。Hudi 从0.9.0版本开始支持Spark SQL,是由阿里的pengzhiwei同学贡献的,pengzhiwei目前已不负责Hudi,改由同事YannByron负责,现在又有ForwardXu贡献了很多功能特性,目前好像主要由ForwardXu负责。三位都是大佬,都是Apache Hudi Committer,膜拜大佬,向大佬学习!!!大佬的github:彭志伟(阿里) pengzhiwei https://github.com/pengzhiwei2018毕岩(阿里) YannByron https://github.com/YannByron徐前进(腾讯) ForwardXu https://github.com/XuQianJin-Stars当然还有很多其他大佬,如Apache member/Hudi PMC Raymond Xu/许世彦 https://github.com/xushiyan,负责整个Spark模块配置参数核心参数:--conf 'spark.serializer=org.apache.spark.serializer.KryoSerializer' --conf 'spark.sql.extensions=org.apache.spark.sql.hudi.HoodieSparkSessionExtension'使用三种方式使用Hudi Spark SQLSpark Thrift Server启动hudi-spark-thrift-server spark-submit --master yarn --deploy-mode client --executor-memory 2G --num-executors 3 --executor-cores 2 --driver-memory 2G --driver-cores 2 --class org.apache.spark.sql.hive.thriftserver.HiveThriftServer2 --name hudi-spark-thrift-server --conf 'spark.serializer=org.apache.spark.serializer.KryoSerializer' --conf 'spark.sql.extensions=org.apache.spark.sql.hudi.HoodieSparkSessionExtension' --principal spark/indata-192-168-44-128.indata.com@INDATA.COM --keytab /etc/security/keytabs/spark.service.keytab --hiveconf hive.server2.thrift.http.port=20003连接hudi-spark-thrift-server/usr/hdp/3.1.0.0-78/spark2/bin/beeline -u "jdbc:hive2://192.168.44.128:20003/default;principal=HTTP/indata-192-168-44-128.indata.com@INDATA.COM?hive.server2.transport.mode=http;hive.server2.thrift.http.path=cliservice"spark-sql脚本 spark-sql --master yarn --deploy-mode client --executor-memory 2G --num-executors 3 --executor-cores 2 --driver-memory 2G --driver-cores 2 --conf 'spark.serializer=org.apache.spark.serializer.KryoSerializer' --conf 'spark.sql.extensions=org.apache.spark.sql.hudi.HoodieSparkSessionExtension' --principal spark/indata-192-168-44-128.indata.com@INDATA.COM --keytab /etc/security/keytabs/spark.service.keytabSpark 程序配置好参数后,直接使用spark.sql(sql)即可建表create table test_hudi_table ( id int, name string, price double, ts long, dt string ) using hudi partitioned by (dt) options ( primaryKey = 'id', preCombineField = 'ts', type = 'cow' location '/tmp/test_hudi_table' using hudi 表示我们要建的表是Hudi表primaryKey 主键,不设置的话,则表示该表没有主键,0.9.0版本以后必须设置preCombineField 预合并字段type 表类型也支持其他hudi参数:hoodie开头的一些配置参数,表参数优先级较高,可以覆盖其他SQL默认参数,慎用,因为有些参数可能有bug,比如 hoodie.table.name和hoodie.datasource.write.operation,详情参考PR:https://github.com/apache/hudi/pull/5495location 指定了外部路径,那么表默认为外部表,如果不指定则使用数据库路径,为内部表0.9.0版本以后 options 建议用 tblproperties, options可以继续使用执行完建表语句,会在对应的表路径下初始化Hudi表,生成.hoodie元数据目录,并且会将Hudi表元数据信息同步到Hive表中,可以自行在Hive中验证内部表外部表的逻辑,Spark SQL目前不能验证,即使为外部表也不显示,不知道是否为buginsertinsert into test_hudi_table values (1,'hudi',10,100,'2021-05-05'),(2,'hudi',10,100,'2021-05-05')或insert into test_hudi_table select 1 as id, 'hudi' as name, 10 as price, 100 as ts, '2021-05-05' as dt union select 2 as id, 'hudi' as name, 10 as price, 100 as ts, '2021-05-05' as dtinsert完查询验证一下,数据是否成功插入select * from test_hudi_table +----------------------+-----------------------+---------------------+-------------------------+--------------------------------------------------------------------------+-----+-------+--------+------+-------------+--+ | _hoodie_commit_time | _hoodie_commit_seqno | _hoodie_record_key | _hoodie_partition_path | _hoodie_file_name | id | name | price | ts | dt | 
+----------------------+-----------------------+---------------------+-------------------------+--------------------------------------------------------------------------+-----+-------+--------+------+-------------+--+ | 20220513110302 | 20220513110302_0_1 | id:2 | dt=2021-05-05 | 083f7be7-ecbd-4c60-a5c2-51b04bb5a3d5-0_0-27-2009_20220513110302.parquet | 2 | hudi | 10.0 | 100 | 2021-05-05 | | 20220513110302 | 20220513110302_0_2 | id:1 | dt=2021-05-05 | 083f7be7-ecbd-4c60-a5c2-51b04bb5a3d5-0_0-27-2009_20220513110302.parquet | 1 | hudi | 10.0 | 100 | 2021-05-05 | +----------------------+-----------------------+---------------------+-------------------------+--------------------------------------------------------------------------+-----+-------+--------+------+-------------+--+另外备注一下:insert默认是会随机更新的,随机指某些情况下,这和Hudi合并小文件有关,原理这里不详细解释,可以自行查看源码(以后可能会单独总结一篇相关的文章,和Hudi写文件、合并文件有关)。要想是insert操作不更新,可以使用以下配置: hoodie.merge.allow.duplicate.on.inserts = true 相关PR:https://github.com/apache/hudi/pull/3644,这个PR是在Java客户端支持这个参数的,Spark客户端本身(在这之前)就支持这个参数update update test_hudi_table set price = 20.0 where id = 1 price字段已经成功更新+----------------------+-----------------------+---------------------+-------------------------+--------------------------------------------------------------------------+-----+-------+--------+------+-------------+--+ | _hoodie_commit_time | _hoodie_commit_seqno | _hoodie_record_key | _hoodie_partition_path | _hoodie_file_name | id | name | price | ts | dt | +----------------------+-----------------------+---------------------+-------------------------+--------------------------------------------------------------------------+-----+-------+--------+------+-------------+--+ | 20220513110302 | 20220513110302_0_1 | id:2 | dt=2021-05-05 | 083f7be7-ecbd-4c60-a5c2-51b04bb5a3d5-0_0-27-2009_20220513110302.parquet | 2 | hudi | 10.0 | 100 | 2021-05-05 | | 20220513143459 | 20220513143459_0_1 | id:1 | dt=2021-05-05 | 083f7be7-ecbd-4c60-a5c2-51b04bb5a3d5-0_0-57-3422_20220513143459.parquet | 1 | hudi | 20.0 | 100 | 2021-05-05 | +----------------------+-----------------------+---------------------+-------------------------+--------------------------------------------------------------------------+-----+-------+--------+------+-------------+--+deletedelete from test_hudi_table where id = 1 id为1的记录被成功删除+----------------------+-----------------------+---------------------+-------------------------+--------------------------------------------------------------------------+-----+-------+--------+------+-------------+--+ | _hoodie_commit_time | _hoodie_commit_seqno | _hoodie_record_key | _hoodie_partition_path | _hoodie_file_name | id | name | price | ts | dt | +----------------------+-----------------------+---------------------+-------------------------+--------------------------------------------------------------------------+-----+-------+--------+------+-------------+--+ | 20220513110302 | 20220513110302_0_1 | id:2 | dt=2021-05-05 | 083f7be7-ecbd-4c60-a5c2-51b04bb5a3d5-0_0-27-2009_20220513110302.parquet | 2 | hudi | 10.0 | 100 | 2021-05-05 | +----------------------+-----------------------+---------------------+-------------------------+--------------------------------------------------------------------------+-----+-------+--------+------+-------------+--+mergeHUDI 支持MERGE语句,有merge into 、merge update和merge delte。可以把增删改统一为:merge into test_hudi_table as t0 using ( select 1 as id, 'hudi' as name, 112 as price, 98 as ts, '2021-05-05' as dt,'INSERT' as opt_type union select 2 as id, 'hudi_2' as name, 10 as 
price, 100 as ts, '2021-05-05' as dt,'UPDATE' as opt_type union select 3 as id, 'hudi' as name, 10 as price, 100 as ts, '2021-05-05' as dt ,'DELETE' as opt_type ) as s0 on t0.id = s0.id when matched and opt_type!='DELETE' then update set * when matched and opt_type='DELETE' then delete when not matched and opt_type!='DELETE' then insert *+----------------------+-----------------------+---------------------+-------------------------+---------------------------------------------------------------------------+-----+---------+--------+------+-------------+--+ | _hoodie_commit_time | _hoodie_commit_seqno | _hoodie_record_key | _hoodie_partition_path | _hoodie_file_name | id | name | price | ts | dt | +----------------------+-----------------------+---------------------+-------------------------+---------------------------------------------------------------------------+-----+---------+--------+------+-------------+--+ | 20220513143914 | 20220513143914_0_1 | id:2 | dt=2021-05-05 | 083f7be7-ecbd-4c60-a5c2-51b04bb5a3d5-0_0-137-7255_20220513143914.parquet | 2 | hudi_2 | 10.0 | 100 | 2021-05-05 | | 20220513143914 | 20220513143914_0_2 | id:1 | dt=2021-05-05 | 083f7be7-ecbd-4c60-a5c2-51b04bb5a3d5-0_0-137-7255_20220513143914.parquet | 1 | hudi | 112.0 | 98 | 2021-05-05 | +----------------------+-----------------------+---------------------+-------------------------+---------------------------------------------------------------------------+-----+---------+--------+------+-------------+--+这样的好处有:1、把所有的类型统一为一个SQL,避免产生过多的job2、避免产生异常,对于insert,如果新insert的主键已经存在会产生异常3、减少程序复杂度,对于update和delete,不用判断where条件和要修改哪些字段,再去拼接sql(从binlog里可能不能获取这些内容)4、提升性能,对于批量DELETE, merge的性能要比使用in 好。但是需要注意,当没有设置preCombineField = ‘ts’时,新来的数据会直接覆盖掉历史数据,这种情况存在于新数据的到达时间早于旧数据的到达时间的情况。merge values在merge的基础上对源码进行优化(代码目前没有提交到社区,想使用的话可以查看:https://gitee.com/dongkelun/hudi/commits/0.9.0),使Hudi SQL 支持 merge values形式,示例如下:merge into test_hudi_table as t0 using (1, 'hudi', 112, 98, '2021-05-05','INSERT'), (2, 'hudi_2', 1, 100, '2021-05-05','UPDATE'), (3, 'hudi', 10, 100, '2021-05-05','DELETE') as s0 (id,name,price,ts,dt,opt_type) on t0.id = s0.id when matched and opt_type!='DELETE' then update set * when matched and opt_type='DELETE' then delete when not matched and opt_type!='DELETE' then insert *对于想要SQL实现数据同步:这样修改的原因是merge是merge subQuery的形式,当拼接SQL很长时,如7000条记录,这样等于7000个select的语句,程序用递归的形式解析SQL很慢,仅解析subQuery的时间就要10分钟,这样不能满足我们的分钟级事务的需求。而通过修改源码支持 merge values的形式,通过values传值 ,这样解析时间从10分钟降低到几秒,后面在用程序将values转成表,直接upsert,大大提升了每个批次的事务时间。经过测试,千万级历史数据,千万级日增量,即平均每分钟7000条,可以实现分钟级事务的需求。测评记录用merge values语句测试的性能结果Spark Thrift Server配置参数--executor-memory 4G --num-executors 15 --executor-cores 2 --driver-memory 6G --driver-cores 2历史数据本次测评数据采用TPC-DS的web_sales表,历史数据一千万,模拟日增量一千万,需要注意的是,源数据表的小数类型同样为double类型,不能是decimal,否则在后面增量数据同步时会有异常(Hudi Spark SQL 对于decimal类型有bug)增量数据程序读取增量数据拼接SQL,jdbc连接Spark Thrift Server实现增量同步,拼接SQL性能:一万条记录1秒之内完成测评结果Spark ServerStreaming批次批次数据量时间(s)批次批次数据量时间(s)16000119168985926000792321127360007035999364600068459993355902326599535Spark Server为Java程序通过JDBC连接Spark Thrift Server,在第一次没有缓存的情况下时间为120秒,在有缓存的情况下为70秒。Streaming 为用Structured Streaming在每个批次中拼接Merge SQL,然后调用spark.sql()实现,从结果上看Streaming要比Spark Server快30秒,主要原因是Spark Server 的延迟调度时间比Streaming 的时间长,目前还没有找到解决方案使Spark Server的时间缩减到和Streaming的时间相当。本次测评模拟的增量数据每分钟包含所有分区,没有起到分区过滤的效果。实际生产数据只包含部分少量分区,可以起到分区过滤的效果,增量同步的性能优于本次测评。总结本文主要总结了Hudi0.9.0版本Spark SQL常用的SQL命令的使用以及一些注意事项,其实还支持其他SQL语义,并且新版本支持的更多,大家可以自己学习测试。
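To tie the pieces above together, here is a minimal sketch of the third usage option (calling spark.sql() from a Spark program). It is an illustration rather than code from the original article: it assumes Spark 2.4 with the matching hudi-spark-bundle on the classpath, runs locally against /tmp/test_hudi_table, and reuses the table name and columns from the examples above; the DDL clause order (options before partitioned by and location) follows Spark's CREATE TABLE grammar.

import org.apache.spark.sql.SparkSession

object HudiSparkSqlDemo {
  def main(args: Array[String]): Unit = {
    // The two core configs shown earlier are set directly on the SparkSession builder
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("HudiSparkSqlDemo")
      .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
      .config("spark.sql.extensions", "org.apache.spark.sql.hudi.HoodieSparkSessionExtension")
      .getOrCreate()

    // COW table with primary key id and precombine field ts, as in the create table example above
    spark.sql(
      """create table if not exists test_hudi_table (
        |  id int, name string, price double, ts long, dt string
        |) using hudi
        |options (primaryKey = 'id', preCombineField = 'ts', type = 'cow')
        |partitioned by (dt)
        |location '/tmp/test_hudi_table'""".stripMargin)

    // Same insert / update / delete statements as shown above
    spark.sql("insert into test_hudi_table values (1,'hudi',10,100,'2021-05-05'),(2,'hudi',10,100,'2021-05-05')")
    spark.sql("update test_hudi_table set price = 20.0 where id = 1")
    spark.sql("delete from test_hudi_table where id = 1")
    spark.sql("select * from test_hudi_table").show(false)

    spark.stop()
  }
}

When the SQL is run from a program like this, the two --conf options used for the Thrift Server and spark-sql scripts are no longer needed, because they are already set on the SparkSession builder.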
前言学习和使用Hudi近一年了,由于之前忙于工作和学习,没时间总结,现在从头开始总结一下,先从入门开始Hudi 概念Apache Hudi 是一个支持插入、更新、删除的增量数据湖处理框架,有两种表类型:COW和MOR,可以自动合并小文件,Hudi自己管理元数据,元数据目录为.hoodie,具体的概念可以查看官网https://hudi.apache.org/cn/docs/0.9.0/overviewHudi 学习Hudi 官网 https://hudi.apache.org/cn/docs/0.9.0/overview/(因本人最开始学习时Hudi的版本为0.9.0版本,所以这里列的也是0.9.0的连接)Hudi 官方公众号号:ApacheHudi (Hudi PMC leesf 运营的),自己搜索即可,这里不贴二维码了Github https://github.com/leesf/hudi-resources 这个是Hudi PMC leesf整理的公众号上的文章,PC 浏览器上看比较方便GitHub 源码 https://github.com/apache/hudi 想要深入学习,还是得看源码并多和社区交流Hudi 安装只需要将Hudi的jar包放到Spark和Hive对应的路径下,再修改几个配置SparkHudi支持Spark程序读写Hudi表,同时也支持Spark SQL insert/update/delete/merge等包名:hudi-spark-bundle_2.11-0.9.0.jar下载地址:https://repo1.maven.org/maven2/org/apache/hudi/hudi-spark-bundle_2.11/0.9.0/hudi-spark-bundle_2.11-0.9.0.jar包名:hudi-utilities-bundle_2.11-0.9.0.jar下载地址:https://repo1.maven.org/maven2/org/apache/hudi/hudi-utilities-bundle_2.11/0.9.0/hudi-utilities-bundle_2.11-0.9.0.jar将hudi-spark-bundle_2.11-0.9.0.jar 和 hudi-utilities-bundle_2.11-0.9.0.jar拷贝到$SPARK_HOME/jars,当前版本目录为/usr/hdp/3.1.0.0-78/spark2/jars/版本说明:0.9.0为hudi发行版本,2.11为HDP中Spark对应的scala版本这里提供的是Maven的下载地址,对于其他版本,Maven上可以下载到,当然也可以自己打包另外可能还需要json包,只需要将json包也放到$SPARK_HOME/jars目录下即可,如json-20200518.jar,json包从一样从maven仓库下载即可,地址为https://repo1.maven.org/maven2/org/json/json/20200518/json-20200518.jar,不放的话可能会报缺少json包异常,如在同步hive元数据时Exception in thread "main" java.lang.NoClassDefFoundError: org/json/JSONException at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeCreateTable(SemanticAnalyzer.java:10847) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genResolvedParseTree(SemanticAnalyzer.java:10047) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:10128) at org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:209) at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:227) at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:424) at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:308) at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1122) at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1170) at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1059) at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1049) at org.apache.hudi.hive.ddl.HiveQueryDDLExecutor.updateHiveSQLs(HiveQueryDDLExecutor.java:94) at org.apache.hudi.hive.ddl.HiveQueryDDLExecutor.runSQL(HiveQueryDDLExecutor.java:85) at org.apache.hudi.hive.ddl.QueryBasedDDLExecutor.createTable(QueryBasedDDLExecutor.java:82) at org.apache.hudi.hive.HoodieHiveClient.createTable(HoodieHiveClient.java:191) at org.apache.hudi.hive.HiveSyncTool.syncSchema(HiveSyncTool.java:237) at org.apache.hudi.hive.HiveSyncTool.syncHoodieTable(HiveSyncTool.java:182) at org.apache.hudi.hive.HiveSyncTool.doSync(HiveSyncTool.java:131) at org.apache.hudi.hive.HiveSyncTool.syncHoodieTable(HiveSyncTool.java:117) at org.apache.hudi.utilities.deltastreamer.DeltaSync.syncHive(DeltaSync.java:625) at org.apache.hudi.utilities.deltastreamer.DeltaSync.syncMeta(DeltaSync.java:601) at org.apache.hudi.utilities.deltastreamer.DeltaSync.writeToSink(DeltaSync.java:526) at org.apache.hudi.utilities.deltastreamer.DeltaSync.syncOnce(DeltaSync.java:303) at org.apache.hudi.utilities.deltastreamer.HoodieDeltaStreamer.lambda$sync$2(HoodieDeltaStreamer.java:182) at org.apache.hudi.common.util.Option.ifPresent(Option.java:96) at org.apache.hudi.utilities.deltastreamer.HoodieDeltaStreamer.sync(HoodieDeltaStreamer.java:180) at 
org.apache.hudi.utilities.deltastreamer.HoodieDeltaStreamer.main(HoodieDeltaStreamer.java:509) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52) at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:845) at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:161) at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:184) at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86) at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:920) at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:929) at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala) Caused by: java.lang.ClassNotFoundException: org.json.JSONException at java.net.URLClassLoader.findClass(URLClassLoader.java:381) at java.lang.ClassLoader.loadClass(ClassLoader.java:424) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349) at java.lang.ClassLoader.loadClass(ClassLoader.java:357) ... 39 moreHiveHudi可以将元数据同步到Hive表中,Hive只能用来查询,不能insert/update/delete包名:hudi-hadoop-mr-bundle-0.9.0.jar下载地址:https://repo1.maven.org/maven2/org/apache/hudi/hudi-hadoop-mr-bundle/0.9.0/hudi-hadoop-mr-bundle-0.9.0.jar1、将hudi-hadoop-mr-bundle-0.9.0.jar 拷贝至$HIVE_HOME/lib,当前版本目录为:/usr/hdp/3.1.0.0-78/hive/lib/2、修改hive配置(在hive-site.xml) hive.input.format=org.apache.hudi.hadoop.HoodieParquetInputFormathive.resultset.use.unique.column.names=false (修改这里的配置是因为如果我们用hudi-utilities-bundle中的工具类HoodieDeltaStreamer,其中的JdbcbasedSchemaProvider解析Hive表Schema时需要设置这个属性,否则解析异常,关于HoodieDeltaStreamer的使用我会单独在另一篇文章中总结)3、重启hiveTez1、将上述hudi-hadoop-mr-bundle-0.9.0.jar 打到/hdp/apps/${hdp.version}/tez/tez2.tar.gz中注意:这里的路径是指HDFS路径2、修改hive配置(在hive-site.xml) hive.tez.input.format=org.apache.hadoop.hive.ql.io.HiveInputFormat3、重启Tez、Hive关于第一个打包到tez2.tar.gz,我自己写了一个脚本,如下:jar=$1 sudo rm -r tez_temp mkdir tez_temp cd tez_temp hadoop fs -get /hdp/apps/3.1.0.0-78/tez/tez.tar.gz mkdir tez tar -zxvf tez.tar.gz -C tez mkdir gz sudo rm -r tez/lib/hudi-hadoop-mr* cp $jar tez/lib/ cd tez tar -zcvf ../gz/tez.tar.gz ./* hadoop fs -rm -r /hdp/apps/3.1.0.0-78/tez/tez.tar.gz.back hadoop fs -mv /hdp/apps/3.1.0.0-78/tez/tez.tar.gz /hdp/apps/3.1.0.0-78/tez/tez.tar.gz.back cd ../gz/ hadoop fs -put tez.tar.gz /hdp/apps/3.1.0.0-78/tez/ su - hdfs <<EOF kinit -kt /etc/security/keytabs/hdfs.headless.keytab hdfs-cluster1@INDATA.COM hadoop fs -chown hdfs:haoop /hdp/apps/3.1.0.0-78/tez/tez.tar.gz hadoop fs -chmod 444 /hdp/apps/3.1.0.0-78/tez/tez.tar.gz hadoop fs -ls /hdp/apps/3.1.0.0-78/tez/ EOF这个脚本在我自己的环境上是可以正常运行使用的,当然可能因本人水平有限,写的还不够好,不能适用所有环境,可以自行修改,仅做参考FlinkHudi也支持Flink,本人目前还不会Flink~,可以参考官网https://hudi.apache.org/cn/docs/0.9.0/flink-quick-start-guideHudi 写入Hudi支持Spark、Flink、Java等多种客户端,本人常用Spark、Java客户端,这俩相比较而言,大家用Spark较多,这里就以Spark代码进行简单的示例总结Spark 配置参数import org.apache.hudi.DataSourceWriteOptions import org.apache.hudi.DataSourceWriteOptions._ import org.apache.hudi.config.HoodieWriteConfig import org.apache.hudi.config.HoodieWriteConfig.TBL_NAME import org.apache.hudi.hive.MultiPartKeysValueExtractor import org.apache.hudi.keygen.ComplexKeyGenerator import org.apache.spark.sql.SaveMode.{Append, Overwrite} import org.apache.spark.sql.hudi.command.UuidKeyGenerator import 
org.apache.spark.sql.{DataFrame, SaveMode, SparkSession} val spark = SparkSession.builder(). master("local[*]"). appName("SparkHudiDemo"). config("spark.serializer", "org.apache.spark.serializer.KryoSerializer"). // 扩展Spark SQL,使Spark SQL支持Hudi config("spark.sql.extensions", "org.apache.spark.sql.hudi.HoodieSparkSessionExtension"). // 支持Hive,本地测试时,注释掉 // enableHiveSupport(). getOrCreate()写Hudi并同步到Hive表代码示例:val spark = SparkSession.builder(). master("local[*]"). appName("SparkHudiDemo"). config("spark.serializer", "org.apache.spark.serializer.KryoSerializer"). // 扩展Spark SQL,使Spark SQL支持Hudi config("spark.sql.extensions", "org.apache.spark.sql.hudi.HoodieSparkSessionExtension"). // 支持Hive,本地测试时,注释掉 // enableHiveSupport(). getOrCreate() import spark.implicits._ val df = Seq((1, "a1", 10, 1000, "2022-05-12")).toDF("id", "name", "value", "ts", "dt") val databaseName = "default" val tableName1 = "test_hudi_table_1" val primaryKey = "id" val preCombineField = "ts" val partitionField = "dt" val tablePath1 = "/tmp/test_hudi_table_1" save2HudiSyncHiveWithPrimaryKey(df, databaseName, tableName1, primaryKey, preCombineField, partitionField, UPSERT_OPERATION_OPT_VAL, tablePath1, Overwrite) spark.read.format("hudi").load(tablePath1).show(false) // 删除表 save2HudiSyncHiveWithPrimaryKey(df, databaseName, tableName1, primaryKey, preCombineField, partitionField, DELETE_OPERATION_OPT_VAL, tablePath1, Append) spark.read.format("hudi").load(tablePath1).show(false) * 写hudi并同步到hive,有主键,分区字段dt def save2HudiSyncHiveWithPrimaryKey(df: DataFrame, databaseName: String, tableName: String, primaryKey: String, preCombineField: String, partitionField: String, operation: String, tablePath: String, mode: SaveMode): Unit = { write.format("hudi"). option(RECORDKEY_FIELD.key, primaryKey). // 主键字段 option(PRECOMBINE_FIELD.key, preCombineField). // 预合并字段 option(PARTITIONPATH_FIELD.key, partitionField). option(TBL_NAME.key, tableName). option(KEYGENERATOR_CLASS_NAME.key(), classOf[ComplexKeyGenerator].getName). option(OPERATION.key(), operation). // 下面的参数和同步hive元数据,查询hive有关 option(META_SYNC_ENABLED.key, true). option(HIVE_USE_JDBC.key, false). option(HIVE_DATABASE.key, databaseName). option(HIVE_AUTO_CREATE_DATABASE.key, true). // 内部表,这里非必须,但是在用saveAsTable时则必须,因为0.9.0有bug,默认外部表 option(HIVE_CREATE_MANAGED_TABLE.key, true). option(HIVE_TABLE.key, tableName). option(HIVE_CREATE_MANAGED_TABLE.key, true). option(HIVE_STYLE_PARTITIONING.key, true). option(HIVE_PARTITION_FIELDS.key, partitionField). option(HIVE_PARTITION_EXTRACTOR_CLASS.key, classOf[MultiPartKeysValueExtractor].getName). // 为了SparkSQL更新用,0.9.0版本有bug,需要设置这个参数,最新版本已经修复,可以不设置这个参数 // 详情查看PR:https://github.com/apache/hudi/pull/3745 option(DataSourceWriteOptions.HIVE_TABLE_SERDE_PROPERTIES.key, s"primaryKey=$primaryKey"). 
mode(mode) .save(tablePath) }代码说明:本地测试需要把同步Hive的代码部分注释掉,因为同步Hive需要连接Hive metaStore服务器spark-shell里可以跑完整的代码,可以成功同步Hive,0.9.0版本同步Hive时会抛出一个关闭Hive的异常,这个可以忽略,这是该版本的一个bug,虽然有异常但是已同步成功,最新版本已经修复该bug,具体可以查看PR:https://github.com/apache/hudi/pull/3364我已经将该PR合到0.9.0版本,如果想使用的话,可以查看:https://gitee.com/dongkelun/hudi/commits/0.9.0,该分支也包含其他基于0.9.0版本的bug修复和特性支持。读HudiSpark 读取如上述代码示例: spark.read.format("hudi").load(tablePath1).show(false) 结果:+-------------------+--------------------+------------------+----------------------+------------------------------------------------------------------------+---+----+-----+----+----------+ |_hoodie_commit_time|_hoodie_commit_seqno|_hoodie_record_key|_hoodie_partition_path|_hoodie_file_name |id |name|value|ts |dt | +-------------------+--------------------+------------------+----------------------+------------------------------------------------------------------------+---+----+-----+----+----------+ |20220512101542 |20220512101542_0_1 |id:1 |2022-05-12 |38c1ff87-8bc9-404c-8d2c-84f720e8133c-0_0-20-12004_20220512101542.parquet|1 |a1 |10 |1000|2022-05-12| +-------------------+--------------------+------------------+----------------------+------------------------------------------------------------------------+---+----+-----+----+----------+可以看到多了几个Hudi元数据字段其中_hoodie_record_key为Hudi主键,如果设置了RECORDKEY_FIELD,比如这里的ID,那么_hoodie_record_key是根据我们设置字段生成的,默认不是复合主键,这里代码示例改为了复合主键,具体配置为option(KEYGENERATOR_CLASS_NAME.key(), classOf[ComplexKeyGenerator].getCanonicalName). 这里主要为了和SparkSQL保持一致,因为SparkSQL默认的为复合主键,如果不一致,那么upsert/delete时会有问题默认情况RECORDKEY_FIELD是必须设置的,RECORDKEY_FIELD的默认值为uuid,如果不设置,则会去找uuid,因为schema里没有uuid,那么会报错Hive在服务器上运行示例代码是可以成功同步到Hive表的,我们看一下Hive表情况:show create table test_hudi_table_1;下面是Hive Hudi表的建表语句,和普通的Hive表的建表语句的区别可以自己比较,其中SERDEPROPERTIES里的内容是为了SparkSQL用的,可以看到这里包含了’primaryKey’=’id’,在0.9.0版本,Spark SQL获取Hudi的主键字段是根据Hive表里这里的’primaryKey’获取的,如果没有这个属性,那么Spark SQL认为该表不是主键表,则不能进行update等操作+----------------------------------------------------+ | createtab_stmt | +----------------------------------------------------+ | CREATE TABLE `test_hudi_table_1`( | | `_hoodie_commit_time` string, | | `_hoodie_commit_seqno` string, | | `_hoodie_record_key` string, | | `_hoodie_partition_path` string, | | `_hoodie_file_name` string, | | `id` int, | | `name` string, | | `value` int, | | `ts` int) | | PARTITIONED BY ( | | `dt` string) | | ROW FORMAT SERDE | | 'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe' | | WITH SERDEPROPERTIES ( | | 'hoodie.query.as.ro.table'='false', | | 'path'='/tmp/test_hudi_table_1', | | 'primaryKey'='id') | | STORED AS INPUTFORMAT | | 'org.apache.hudi.hadoop.HoodieParquetInputFormat' | | OUTPUTFORMAT | | 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat' | | LOCATION | | 'hdfs://cluster1/tmp/test_hudi_table_1' | | TBLPROPERTIES ( | | 'last_commit_time_sync'='20220512101500', | | 'spark.sql.sources.provider'='hudi', | | 'spark.sql.sources.schema.numPartCols'='1', | | 'spark.sql.sources.schema.numParts'='1', | | 
'spark.sql.sources.schema.part.0'='{"type":"struct","fields":[{"name":"_hoodie_commit_time","type":"string","nullable":true,"metadata":{}},{"name":"_hoodie_commit_seqno","type":"string","nullable":true,"metadata":{}},{"name":"_hoodie_record_key","type":"string","nullable":true,"metadata":{}},{"name":"_hoodie_partition_path","type":"string","nullable":true,"metadata":{}},{"name":"_hoodie_file_name","type":"string","nullable":true,"metadata":{}},{"name":"id","type":"integer","nullable":false,"metadata":{}},{"name":"name","type":"string","nullable":true,"metadata":{}},{"name":"value","type":"integer","nullable":false,"metadata":{}},{"name":"ts","type":"integer","nullable":false,"metadata":{}},{"name":"dt","type":"string","nullable":true,"metadata":{}}]}', | | 'spark.sql.sources.schema.partCol.0'='dt', | | 'transient_lastDdlTime'='1652320902') | +----------------------------------------------------+Hive查询Hudi表:select * from test_hudi_table_1; +----------------------+-----------------------+---------------------+-------------------------+---------------------------------------------------------------------------+-----+-------+--------+-------+-------------+--+ | _hoodie_commit_time | _hoodie_commit_seqno | _hoodie_record_key | _hoodie_partition_path | _hoodie_file_name | id | name | value | ts | dt | +----------------------+-----------------------+---------------------+-------------------------+---------------------------------------------------------------------------+-----+-------+--------+-------+-------------+--+ | 20220513150854 | 20220513150854_0_1 | id:1 | dt=2022-05-12 | dd4ef080-97b6-4046-a337-abb01e26943e-0_0-21-12005_20220513150854.parquet | 1 | a1 | 10 | 1000 | 2022-05-12 | +----------------------+-----------------------+---------------------+-------------------------+---------------------------------------------------------------------------+-----+-------+--------+-------+-------------+--+Hive是可以查询Hudi表的,但是不能update/delete,要想使用update/delete等语句,只能使用Spark SQL,另外Hive可以增量查询。关于如何使用Hudi Spark SQL和Hive的增量查询,这里不展开描述,以后会单独写配置项说明这里只说明几个比较重要的配置,其他相关的配置可以看官网和源码RECORDKEY_FIELD:默认情况RECORDKEY_FIELD是必须设置的,RECORDKEY_FIELD的默认值为uuid,如果不设置,则会去找uuid,因为schema里没有uuid,那么会报错。另外Hudi0.9.0支持非主键Hudi表,只需要配置option(KEYGENERATOR_CLASS_NAME.key, classOf[UuidKeyGenerator].getName).即可,但是在后面的版本已经不支持了KEYGENERATOR_CLASS_NAME:默认值为SimpleKeyGenerator,默认不支持复合主键,默认情况下上述_hoodie_record_key的内容为1,而不是id:1,而SparkSQL的默认值为SqlKeyGenerator,该类是ComplexKeyGenerator的子类:class SqlKeyGenerator(props: TypedProperties) extends ComplexKeyGenerator(props) 也就是本示例所使用的的复合主键类,当使用SimpleKeyGenerator和ComplexKeyGenerator同时upsert一个表时,那么会生成两条记录,因为_hoodie_record_key的内容不一样,所以一张表的 KEYGENERATOR_CLASS_NAME必须保证一致(父类和子类也是一致的)PRECOMBINE_FIELD: 预合并字段,默认值:ts,想详细了解预合并可以参考我的另外两篇博客https://dongkelun.com/2021/07/10/hudiPreCombinedField/和https://dongkelun.com/2021/11/30/hudiPreCombineField2/ upsert时,预合并是必须的,如果我们的表里没有预合并字段,或者不想使用预合并,不设置的话是会抛异常的,因为默认去找ts字段,找不到则跑异常,那么我们可以将预合并字段设置为主键字段PARTITIONPATH_FIELD: Hudi的分区字段,默认值partitionpath,对于没有分区的表,我们需要将该字段设置为空字符串option(PARTITIONPATH_FIELD.key, ""),否则可能会因找不到默认值partitionpath而抛异常。最新版本已经去掉分区字段默认值,详情可见:https://github.com/apache/hudi/pull/4195OPERATION: Hudi的写操作类型,默认值为UPSERT_OPERATION_OPT_VAL即upsert,Hudi支持多种操作类型 如:upsert、insert、bulk_insert、delete等,具体每个版本支持哪些操作类型,可以查看官网或源码,可以根据自己的需求选择选择操作类型。本代码展示了upsert成功后,又删除成功。下面的参数和同步hive元数据,查询hive有关META_SYNC_ENABLED: 默认为false,不同步Hive,要想同步Hive可以将该值设为true,另外也可以设置HIVE_SYNC_ENABLED进行同步Hive,作用差不多,至于区别,这里不详细解说HIVE_USE_JDBC: 
是否使用jdbc同步hive,默认为true,如果使用jdbc,那么需要设置HIVE_URL、HIVE_USER、HIVE_PASS等配置,因为url和ip有关,每个环境不一样,用起来比较麻烦,所以这里不采用,另外因为实际使用是和Hive绑定的,可以直接使用HMS进行同步,使用起来比较方便,改为false后默认使用HMS同步Hive,具体逻辑可以看Hudi Hive 同步模块的源码,这里不展开HIVE_STYLE_PARTITIONING: 是否使用Hive格式的分区路径,默认为false,如果设置为true,那么分区路径格式为=,在这里为dt=2022-05-12,默认情况下只有即2022-05-12,因为我们常用Hive表查询Hudi所以,这里设置为trueHIVE_CREATE_MANAGED_TABLE: 同步Hive建表时是否为内部表,默认为false,使用saveAsTable(实际调用的Hudi Spark SQL CTAS)建表时0.9.0版本有,本应该为内部表,但还是为外部表,可以通过设置这个参数修正,最新版本已修复,详情可见PR:https://github.com/apache/hudi/pull/3146HIVE_TABLE_SERDE_PROPERTIES: 同步到Hive表SERDEPROPERTIES,为了Hudi Spark SQL 使用,在0.9.0版本,Spark SQL获取Hudi的主键字段是根据Hive表里这里的’primaryKey’获取的,如果没有这个属性,那么Spark SQL认为该表不是主键表,则不能进行update等操作,而默认情况同步Hive时没有将主键字段同步过去,最新版本已经不需要设置该属性了。相关PR:https://github.com/apache/hudi/pull/3745 这个PR添加了支持HIVE_CREATE_MANAGED_TABLE配置,但是CTAS依旧有bug,代码里的虽然判断表类型是否为内部表,并添加到options中,但是最后并没有将options用到最终写Hudi的参数中。另一个PR:https://github.com/apache/hudi/pull/3998 该PR的主要目的不是为了解决这个bug,但是附带解决了这个问题,因为options最终被正确传到写Hudi的参数中了其他Hive相关的配置参数不一一解释,可自行查看官网hoodie.properties.hoodie目录下有表属性文件.hoodie.properties,内容为:hoodie.table.precombine.field=ts hoodie.table.partition.fields=dt hoodie.table.type=COPY_ON_WRITE hoodie.archivelog.folder=archived hoodie.populate.meta.fields=true hoodie.timeline.layout.version=1 hoodie.table.version=2 hoodie.table.recordkey.fields=id hoodie.table.base.file.format=PARQUET hoodie.table.keygenerator.class=org.apache.hudi.keygen.ComplexKeyGenerator hoodie.table.name=test_hudi_table_1新版本在该属性文件里增加了很多属性,如HIVE_STYLE_PARTITIONING即hoodie.datasource.write.hive_style_partitioning,增加属性便于使表的属性前后保持统一非主键表如上面配置项说明所示Hudi0.9.0版本支持非主键表,对于纯insert的表有用,这里进行简单的代码示例 val tableName2 = "test_hudi_table_2" val tablePath2 = "/tmp/test_hudi_table_2" save2HudiWithNoPrimaryKey(df, tableName2, tablePath2) spark.read.format("hudi").load(tablePath2).show(false) * 非主键表,非分区表 def save2HudiWithNoPrimaryKey(df: DataFrame, tableName: String, tablePath: String): Unit = { write.format("hudi"). option(KEYGENERATOR_CLASS_NAME.key, classOf[UuidKeyGenerator].getName). option(RECORDKEY_FIELD.key, ""). option(PARTITIONPATH_FIELD.key, ""). option(TBL_NAME.key, tableName). option(OPERATION.key(), INSERT_OPERATION_OPT_VAL). mode(Overwrite). 
save(tablePath) }结果:+-------------------+--------------------+------------------------------------+----------------------+------------------------------------------------------------------------+---+----+-----+----+----------+ |_hoodie_commit_time|_hoodie_commit_seqno|_hoodie_record_key |_hoodie_partition_path|_hoodie_file_name |id |name|value|ts |dt | +-------------------+--------------------+------------------------------------+----------------------+------------------------------------------------------------------------+---+----+-----+----+----------+ |20220512145525 |20220512145525_0_1 |7263eac1-51f6-42eb-834d-bb5dfe13708e| |4fe619f1-58b1-4f58-94e6-002f9f5f5155-0_0-20-12004_20220512145525.parquet|1 |a1 |10 |1000|2022-05-12| +-------------------+--------------------+------------------------------------+----------------------+------------------------------------------------------------------------+---+----+-----+----+----------+可以看到Hudi的主键为uuid,_hoodie_partition_path为空,即非主键非分区表备注:insert默认是会随机更新的(如果是主键表,大家可以将程序改为主键表,自行测试),随机指某些情况下,这和Hudi合并小文件有关,原理这里不详细解释,可以自行查看源码(以后可能会单独总结一篇相关的文章,和Hudi写文件、合并文件有关)。要想是insert操作不更新,可以使用以下配置:hoodie.merge.allow.duplicate.on.inserts = true 相关PR:https://github.com/apache/hudi/pull/3644,这个PR是在Java客户端支持这个参数的,Spark客户端本身(在这之前)就支持这个参数saveAsTable利用saveAsTable写Hudi并同步Hive,实际最终调用的是Spark SQL CTAS(CreateHoodieTableAsSelectCommand)CTAS 先用的insert into(InsertIntoHoodieTableCommand),再建表,默认insert,这里展示怎么配置参数使用bulk_insert,并且不使用预合并,这对于转化没有重复数据的历史表时很有用。insert into SQL 默认是insert,配置一些参数就可以使用upsert/bulk_insert,具体可以看InsertIntoHoodieTableCommand源码 val tableName3 = "test_hudi_table_3" save2HudiWithSaveAsTable(df, databaseName, tableName3, primaryKey) spark.table(tableName3).show() def save2HudiWithSaveAsTable(df: DataFrame, databaseName: String, tableName: String, primaryKey: String): Unit = { write.format("hudi"). option(RECORDKEY_FIELD.key(), primaryKey). // 不需要预合并,所以设置为primaryKey // 当insert/bulk_insert等操作,并且关闭了相关参数,则不需要设置 // SparkSQL中如果没有显示配置预合并字段,则默认将预合并字段设置为schema的最后一个字段 // 如果为默认值的话,则可能会报null异常,所以设置为主键 // `PRECOMBINE_FIELD.key -> tableSchema.fields.last.name` // 相关issue:https://github.com/apache/hudi/issues/4131 option(PRECOMBINE_FIELD.key(), primaryKey). option(DataSourceWriteOptions.HIVE_TABLE_SERDE_PROPERTIES.key, s"primaryKey=$primaryKey"). option(TBL_NAME.key(), tableName). option(HIVE_CREATE_MANAGED_TABLE.key, true). // 关闭预合并,虽然默认值为false,但是0.9.0版本SparkSQL,当有主键时,设置为了true option(HoodieWriteConfig.COMBINE_BEFORE_INSERT.key, false). // 使用bulk_insert option(DataSourceWriteOptions.SQL_ENABLE_BULK_INSERT.key, true). // 这里虽然为Overwrite,但是Hudi CTAS要求目录必须为空,否则会报验证错误 mode(Overwrite). 
saveAsTable(s"$databaseName.$tableName") }这段代码本地是可以直接跑通的,结果为:+-------------------+--------------------+------------------+----------------------+--------------------+---+----+-----+----+----------+ |_hoodie_commit_time|_hoodie_commit_seqno|_hoodie_record_key|_hoodie_partition_path| _hoodie_file_name| id|name|value| ts| dt| +-------------------+--------------------+------------------+----------------------+--------------------+---+----+-----+----+----------+ | 20220512154039| 20220512154039_0_1| id:1| |de3c99a2-3858-462...| 1| a1| 10|1000|2022-05-12| +-------------------+--------------------+------------------+----------------------+--------------------+---+----+-----+----+----------+本地测试并没有同步到Hive表,因为并没有开启enableHiveSupport()(本地验证时,注释掉这个配置),当在服务器上运行时,则可以成功同步到Hive表,可以自己试试,用saveAsTable的好处是,很多配置比如同步Hive都在Hudi Spark SQL的源码里配置了,所以配置较少。CTAS也有一些限制,比如不能覆盖写,不如save(path)灵活代码完整代码地址:https://gitee.com/dongkelun/spark-hudi/blob/master/src/main/scala/com/dkl/blog/hudi/SparkHudiDemo.scala备注:以后可能因重构地址有所变动总结本文对Hudi安装、读写进行了简单的总结,因为精力原因写的可能没有很全面,希望对刚入门Hudi的同学有所帮助,后面会继续总结Hudi Spark SQL 等其他方面的知识。
前言源码层面总结分析Hudi Clean是如何实现的,不了解Hudi Clean的可以先看这篇:一文彻底理解Apache Hudi的清理服务。Hudi Clean主要是清理删除不需要的历史文件,可以根据实际业务需要配置参数,不能影响查询,比如某个查询语句正在用某个文件,Clean如果删除了这个文件,查询就会报错。这里只是删除历史文件,Hudi的文件是有多个版本的,不管配置什么参数,使用什么策略,都不会删除当前最新版本的文件。Hudi 0.9.0版本有两种清理策略KEEP_LATEST_COMMITS和KEEP_LATEST_FILE_VERSIONS,默认为KEEP_LATEST_COMMITSKEEP_LATEST_COMMITS:简单讲就是根据commit的次数,默认保留最新的10个commit的所有文件,对于10个之前的文件只保留最新版本的文件,历史文件全部删除KEEP_LATEST_FILE_VERSIONS:简单讲就是保留文件的版本数,默认保留三个版本具体的可以看上面的那篇公众号文章目前最新版本0.11.0 添加了一个新的策略KEEP_LATEST_BY_HOURS:根据小时数清理,默认保留最近24小时的文件,具体实现请查看PR:[HUDI-349] Added new cleaning policy based on number of hours本文以Hudi 0.9.0 Java Client COW表 进行分析InsertHoodieJavaWriteClient->postWrite->postCommit->autoCleanOnCommit以Insert为入口进行代码跟踪,Hudi源码里有java客户端的代码示例,这里只贴部分主要代码writeClient = new HoodieJavaWriteClient<>(new HoodieJavaEngineContext(hadoopConf), cfg) String newCommitTime = writeClient.startCommit(); writeClient.insert(records, newCommitTime);HoodieJavaWriteClient.insertpublic List<WriteStatus> insert(List<HoodieRecord<T>> records, String instantTime) { HoodieTable<T, List<HoodieRecord<T>>, List<HoodieKey>, List<WriteStatus>> table = getTableAndInitCtx(WriteOperationType.INSERT, instantTime); table.validateUpsertSchema(); preWrite(instantTime, WriteOperationType.INSERT, table.getMetaClient()); HoodieWriteMetadata<List<WriteStatus>> result = table.insert(context, instantTime, records); if (result.getIndexLookupDuration().isPresent()) { metrics.updateIndexMetrics(LOOKUP_STR, result.getIndexLookupDuration().get().toMillis()); return postWrite(result, instantTime, table); }在执行完table.insert写完数据后会执行postWrite方法@Override protected List<WriteStatus> postWrite(HoodieWriteMetadata<List<WriteStatus>> result, String instantTime, HoodieTable<T, List<HoodieRecord<T>>, List<HoodieKey>, List<WriteStatus>> hoodieTable) { if (result.getIndexLookupDuration().isPresent()) { metrics.updateIndexMetrics(getOperationType().name(), result.getIndexUpdateDuration().get().toMillis()); if (result.isCommitted()) { // Perform post commit operations. if (result.getFinalizeDuration().isPresent()) { metrics.updateFinalizeWriteMetrics(result.getFinalizeDuration().get().toMillis(), result.getWriteStats().get().size()); postCommit(hoodieTable, result.getCommitMetadata().get(), instantTime, Option.empty()); emitCommitMetrics(instantTime, result.getCommitMetadata().get(), hoodieTable.getMetaClient().getCommitActionType()); return result.getWriteStatuses(); }postWrite方法里又会执行父类 AbstractHoodieWriteClient.postCommit方法protected void postCommit(HoodieTable<T, I, K, O> table, HoodieCommitMetadata metadata, String instantTime, Option<Map<String, String>> extraMetadata) { try { // Delete the marker directory for the instant. WriteMarkersFactory.get(config.getMarkersType(), table, instantTime) .quietDeleteMarkerDir(context, config.getMarkersDeleteParallelism()); // We cannot have unbounded commit files. 
Archive commits if we have to archive HoodieTimelineArchiveLog archiveLog = new HoodieTimelineArchiveLog(config, table); archiveLog.archiveIfRequired(context); // commit期间执行自动清理 autoCleanOnCommit(); if (operationType != null && operationType != WriteOperationType.CLUSTER && operationType != WriteOperationType.COMPACT) { syncTableMetadata(); } catch (IOException ioe) { throw new HoodieIOException(ioe.getMessage(), ioe); } finally { this.heartbeatClient.stop(instantTime); }postCommit方法里会调用autoCleanOnCommit()执行清理文件AbstractHoodieWriteClient.autoCleanOnCommitautoCleanOnCommit->clean->scheduleTableServiceInternal->HoodieJavaCopyOnWriteTable.clean首先调用scheduleTableServiceInternal,该方法会根据清理策略配置参数获取最早的需要保留的instant(earliestInstant),然后获取需要清理的分区路径列表(partitionsToClean),再根据分区路径获取需要删除的文件列表,最后将这些信息封装成HoodieCleanerPlan序列化到新创建的.clean.requested文件中再执行HoodieJavaCopyOnWriteTable.clean,该方法首先获取刚才创建的.clean.requested文件和其他的之前失败的(如果有的话).clean.inflight,然后反序列化刚才保存的.clean.requested的文件内容为HoodieCleanerPlan,然后通过deleteFilesFunc方法依次删除HoodieCleanerPlan里的要删除的文件列表并返回HoodieCleanStat,最后将HoodieCleanStat作为参数构建HoodieCleanMetadata,然后将HoodieCleanMetadata序列化保存到新创建的.clean文件中,这样整个clean操作就基本完成了。如何根据清理策略获取要被清理的文件列表,请看后面的部分:获取要删除的文件列表/** * Handle auto clean during commit. protected void autoCleanOnCommit() { if (config.isAutoClean()) { // 默认true // Call clean to cleanup if there is anything to cleanup after the commit, if (config.isAsyncClean()) { // 默认false LOG.info("Cleaner has been spawned already. Waiting for it to finish"); AsyncCleanerService.waitForCompletion(asyncCleanerService); LOG.info("Cleaner has finished"); } else { // Do not reuse instantTime for clean as metadata table requires all changes to have unique instant timestamps. LOG.info("Auto cleaning is enabled. Running cleaner now"); // 执行clean操作 clean(); public HoodieCleanMetadata clean() { return clean(HoodieActiveTimeline.createNewInstantTime()); public HoodieCleanMetadata clean(String cleanInstantTime) throws HoodieIOException { return clean(cleanInstantTime, true); }前面的只是调用链,下面才到了真正的逻辑/** * Clean up any stale/old files/data lying around (either on file storage or index storage) based on the * configurations and CleaningPolicy used. (typically files that no longer can be used by a running query can be * cleaned). This API provides the flexibility to schedule clean instant asynchronously via * {@link AbstractHoodieWriteClient#scheduleTableService(String, Option, TableServiceType)} and disable inline scheduling * of clean. 
public HoodieCleanMetadata clean(String cleanInstantTime, boolean scheduleInline) throws HoodieIOException { if (scheduleInline) { // 主要逻辑为:创建.clean.requested // .clean.requested内容为序列化后的(包含了要删除的文件列表等信息的)cleanerPlan scheduleTableServiceInternal(cleanInstantTime, Option.empty(), TableServiceType.CLEAN); LOG.info("Cleaner started"); final Timer.Context timerContext = metrics.getCleanCtx(); LOG.info("Cleaned failed attempts if any"); // 判断是否执行rollback,默认策略为EAGER,clean期间不执行rollback CleanerUtils.rollbackFailedWrites(config.getFailedWritesCleanPolicy(), HoodieTimeline.CLEAN_ACTION, () -> rollbackFailedWrites()); // 执行table.clean,删除需要删除的文件,转化.clean.requested=>.clean.inflight=>.clean,返回HoodieCleanMetadata // 这里为HoodieJavaCopyOnWriteTable.clean HoodieCleanMetadata metadata = createTable(config, hadoopConf).clean(context, cleanInstantTime); if (timerContext != null && metadata != null) { long durationMs = metrics.getDurationInMs(timerContext.stop()); metrics.updateCleanMetrics(durationMs, metadata.getTotalFilesDeleted()); LOG.info("Cleaned " + metadata.getTotalFilesDeleted() + " files" + " Earliest Retained Instant :" + metadata.getEarliestCommitToRetain() + " cleanerElapsedMs" + durationMs); // 返回metadata return metadata; }scheduleTableServiceInternalprivate Option<String> scheduleTableServiceInternal(String instantTime, Option<Map<String, String>> extraMetadata, TableServiceType tableServiceType) { switch (tableServiceType) { case CLUSTER: LOG.info("Scheduling clustering at instant time :" + instantTime); Option<HoodieClusteringPlan> clusteringPlan = createTable(config, hadoopConf) .scheduleClustering(context, instantTime, extraMetadata); return clusteringPlan.isPresent() ? Option.of(instantTime) : Option.empty(); case COMPACT: LOG.info("Scheduling compaction at instant time :" + instantTime); Option<HoodieCompactionPlan> compactionPlan = createTable(config, hadoopConf) .scheduleCompaction(context, instantTime, extraMetadata); return compactionPlan.isPresent() ? Option.of(instantTime) : Option.empty(); case CLEAN: LOG.info("Scheduling cleaning at instant time :" + instantTime); // 这里在子类`HoodieJavaCopyOnWriteTable.scheduleCleaning`实现 Option<HoodieCleanerPlan> cleanerPlan = createTable(config, hadoopConf) .scheduleCleaning(context, instantTime, extraMetadata); return cleanerPlan.isPresent() ? Option.of(instantTime) : Option.empty(); default: throw new IllegalArgumentException("Invalid TableService " + tableServiceType); }HoodieJavaCopyOnWriteTable.scheduleCleaningpublic Option<HoodieCleanerPlan> scheduleCleaning(HoodieEngineContext context, String instantTime, Option<Map<String, String>> extraMetadata) { return new JavaScheduleCleanActionExecutor<>(context, config, this, instantTime, extraMetadata).execute(); public Option<HoodieCleanerPlan> execute() { // Plan a new clean action return requestClean(instantTime); * Creates a Cleaner plan if there are files to be cleaned and stores them in instant file. * // 如果有需要被清理的文件,创建一个cleanerPlan,并且将它们保存到instant文件中 * Cleaner Plan contains absolute file paths. 
* cleanerPlan 包含文件的绝对路径 * @param startCleanTime Cleaner Instant Time * @return Cleaner Plan if generated protected Option<HoodieCleanerPlan> requestClean(String startCleanTime) { // cleanerPlan包含需要被清理的文件列表 final HoodieCleanerPlan cleanerPlan = requestClean(context); if ((cleanerPlan.getFilePathsToBeDeletedPerPartition() != null) && !cleanerPlan.getFilePathsToBeDeletedPerPartition().isEmpty() && cleanerPlan.getFilePathsToBeDeletedPerPartition().values().stream().mapToInt(List::size).sum() > 0) { // 如果要删除的文件列表不为空 // Only create cleaner plan which does some work // 创建.clean.requested final HoodieInstant cleanInstant = new HoodieInstant(HoodieInstant.State.REQUESTED, HoodieTimeline.CLEAN_ACTION, startCleanTime); // Save to both aux and timeline folder try { // 保存.clean.requested,.clean.requested文件里包含了序列化的cleanerPlan,也就包含了文件列表等信息 // 后面删除文件时会用到 table.getActiveTimeline().saveToCleanRequested(cleanInstant, TimelineMetadataUtils.serializeCleanerPlan(cleanerPlan)); LOG.info("Requesting Cleaning with instant time " + cleanInstant); } catch (IOException e) { LOG.error("Got exception when saving cleaner requested file", e); throw new HoodieIOException(e.getMessage(), e); // 返回cleanerPlan return Option.of(cleanerPlan); // 返回空 return Option.empty(); * Generates List of files to be cleaned. * 生成需要被清理的文件列表 * @param context HoodieEngineContext * @return Cleaner Plan HoodieCleanerPlan requestClean(HoodieEngineContext context) { try { CleanPlanner<T, I, K, O> planner = new CleanPlanner<>(context, table, config); // 获取最早需要保留的HoodieInstant Option<HoodieInstant> earliestInstant = planner.getEarliestCommitToRetain(); // 获取需要被清理的分区路径 List<String> partitionsToClean = planner.getPartitionPathsToClean(earliestInstant); if (partitionsToClean.isEmpty()) { // 如果分区路径为空,直接返回一个空的HoodieCleanerPlan LOG.info("Nothing to clean here. 
It is already clean"); return HoodieCleanerPlan.newBuilder().setPolicy(HoodieCleaningPolicy.KEEP_LATEST_COMMITS.name()).build(); LOG.info("Total Partitions to clean : " + partitionsToClean.size() + ", with policy " + config.getCleanerPolicy()); // 清理的并发度 int cleanerParallelism = Math.min(partitionsToClean.size(), config.getCleanerParallelism()); LOG.info("Using cleanerParallelism: " + cleanerParallelism); context.setJobStatus(this.getClass().getSimpleName(), "Generates list of file slices to be cleaned"); // Map<分区路径,要删除的文件列表>, 真正的实现是在planner.getDeletePaths Map<String, List<HoodieCleanFileInfo>> cleanOps = context .map(partitionsToClean, partitionPathToClean -> Pair.of(partitionPathToClean, planner.getDeletePaths(partitionPathToClean)), cleanerParallelism) .stream() .collect(Collectors.toMap(Pair::getKey, y -> CleanerUtils.convertToHoodieCleanFileInfoList(y.getValue()))); // 构造HoodieCleanerPlan并返回,参数分别: // earliestInstantToRetain = 根据earliestInstant生成的HoodieActionInstant // policy = config.getCleanerPolicy().name() // filesToBeDeletedPerPartition = CollectionUtils.createImmutableMap() 一个空的只读的map // version = 2 // filePathsToBeDeletedPerPartition = cleanOps,即上面我们获取的要删除的文件列表 return new HoodieCleanerPlan(earliestInstant .map(x -> new HoodieActionInstant(x.getTimestamp(), x.getAction(), x.getState().name())).orElse(null), config.getCleanerPolicy().name(), CollectionUtils.createImmutableMap(), CleanPlanner.LATEST_CLEAN_PLAN_VERSION, cleanOps); } catch (IOException e) { throw new HoodieIOException("Failed to schedule clean operation", e); }HoodieJavaCopyOnWriteTable.cleanpublic HoodieCleanMetadata clean(HoodieEngineContext context, String cleanInstantTime) { return new JavaCleanActionExecutor(context, config, this, cleanInstantTime).execute(); }BaseCleanActionExecutor.executepublic HoodieCleanMetadata execute() { List<HoodieCleanMetadata> cleanMetadataList = new ArrayList<>(); // If there are inflight(failed) or previously requested clean operation, first perform them // 获取状态为inflight或者requested的clean instant // 因为我们前面创建了.clean.requested所以首先包含前面创建的.requested // 如果还有其他的.clean.inflight文件,这表明是之前失败的操作,也需要执行clean List<HoodieInstant> pendingCleanInstants = table.getCleanTimeline() .filterInflightsAndRequested().getInstants().collect(Collectors.toList()); if (pendingCleanInstants.size() > 0) { pendingCleanInstants.forEach(hoodieInstant -> { LOG.info("Finishing previously unfinished cleaner instant=" + hoodieInstant); try { cleanMetadataList.add(runPendingClean(table, hoodieInstant)); } catch (Exception e) { LOG.warn("Failed to perform previous clean operation, instant: " + hoodieInstant, e); table.getMetaClient().reloadActiveTimeline(); // return the last clean metadata for now // 返回最后一个cleanMetadata // TODO (NA) : Clean only the earliest pending clean just like how we do for other table services // This requires the BaseCleanActionExecutor to be refactored as BaseCommitActionExecutor return cleanMetadataList.size() > 0 ? cleanMetadataList.get(cleanMetadataList.size() - 1) : null; * Executes the Cleaner plan stored in the instant metadata. 
HoodieCleanMetadata runPendingClean(HoodieTable<T, I, K, O> table, HoodieInstant cleanInstant) { try { // 将.clean.requested或者.clean.inflight反序列为cleanerPlan HoodieCleanerPlan cleanerPlan = CleanerUtils.getCleanerPlan(table.getMetaClient(), cleanInstant); return runClean(table, cleanInstant, cleanerPlan); } catch (IOException e) { throw new HoodieIOException(e.getMessage(), e); private HoodieCleanMetadata runClean(HoodieTable<T, I, K, O> table, HoodieInstant cleanInstant, HoodieCleanerPlan cleanerPlan) { ValidationUtils.checkArgument(cleanInstant.getState().equals(HoodieInstant.State.REQUESTED) || cleanInstant.getState().equals(HoodieInstant.State.INFLIGHT)); try { final HoodieInstant inflightInstant; final HoodieTimer timer = new HoodieTimer(); timer.startTimer(); if (cleanInstant.isRequested()) { // 如果是.clean.requested,转化为.clean.inflight inflightInstant = table.getActiveTimeline().transitionCleanRequestedToInflight(cleanInstant, TimelineMetadataUtils.serializeCleanerPlan(cleanerPlan)); } else { inflightInstant = cleanInstant; // 执行clean方法,主要是删除文件,返回HoodieCleanStat列表 // 具体在实现类,这里是 JavaCleanActionExecutor List<HoodieCleanStat> cleanStats = clean(context, cleanerPlan); if (cleanStats.isEmpty()) { return HoodieCleanMetadata.newBuilder().build(); table.getMetaClient().reloadActiveTimeline(); // 构建HoodieCleanMetadata HoodieCleanMetadata metadata = CleanerUtils.convertCleanMetadata( inflightInstant.getTimestamp(), Option.of(timer.endTimer()), cleanStats // 生成.clean,并将metadata序列化到.clean table.getActiveTimeline().transitionCleanInflightToComplete(inflightInstant, TimelineMetadataUtils.serializeCleanMetadata(metadata)); LOG.info("Marked clean started on " + inflightInstant.getTimestamp() + " as complete"); return metadata; } catch (IOException e) { throw new HoodieIOException("Failed to clean up after commit", e); }JavaCleanActionExecutor.cleanList<HoodieCleanStat> clean(HoodieEngineContext context, HoodieCleanerPlan cleanerPlan) { // 需要被删除的文件列表 Iterator<ImmutablePair<String, CleanFileInfo>> filesToBeDeletedPerPartition = cleanerPlan.getFilePathsToBeDeletedPerPartition().entrySet().stream() .flatMap(x -> x.getValue().stream().map(y -> new ImmutablePair<>(x.getKey(), new CleanFileInfo(y.getFilePath(), y.getIsBootstrapBaseFile())))).iterator(); // 通过deleteFilesFunc函数执行删除文件操作,返回PartitionCleanStat Stream<Pair<String, PartitionCleanStat>> partitionCleanStats = deleteFilesFunc(filesToBeDeletedPerPartition, table) .collect(Collectors.groupingBy(Pair::getLeft)) .entrySet().stream() .map(x -> new ImmutablePair(x.getKey(), x.getValue().stream().map(y -> y.getRight()).reduce(PartitionCleanStat::merge).get())); Map<String, PartitionCleanStat> partitionCleanStatsMap = partitionCleanStats .collect(Collectors.toMap(Pair::getLeft, Pair::getRight)); // Return PartitionCleanStat for each partition passed. return cleanerPlan.getFilePathsToBeDeletedPerPartition().keySet().stream().map(partitionPath -> { PartitionCleanStat partitionCleanStat = partitionCleanStatsMap.containsKey(partitionPath) ? partitionCleanStatsMap.get(partitionPath) : new PartitionCleanStat(partitionPath); HoodieActionInstant actionInstant = cleanerPlan.getEarliestInstantToRetain(); return HoodieCleanStat.newBuilder().withPolicy(config.getCleanerPolicy()).withPartitionPath(partitionPath) .withEarliestCommitRetained(Option.ofNullable( actionInstant != null ? 
new HoodieInstant(HoodieInstant.State.valueOf(actionInstant.getState()), actionInstant.getAction(), actionInstant.getTimestamp()) : null)) .withDeletePathPattern(partitionCleanStat.deletePathPatterns()) .withSuccessfulDeletes(partitionCleanStat.successDeleteFiles()) .withFailedDeletes(partitionCleanStat.failedDeleteFiles()) .withDeleteBootstrapBasePathPatterns(partitionCleanStat.getDeleteBootstrapBasePathPatterns()) .withSuccessfulDeleteBootstrapBaseFiles(partitionCleanStat.getSuccessfulDeleteBootstrapBaseFiles()) .withFailedDeleteBootstrapBaseFiles(partitionCleanStat.getFailedDeleteBootstrapBaseFiles()) .build(); }).collect(Collectors.toList()); private static Stream<Pair<String, PartitionCleanStat>> deleteFilesFunc(Iterator<ImmutablePair<String, CleanFileInfo>> iter, HoodieTable table) { Map<String, PartitionCleanStat> partitionCleanStatMap = new HashMap<>(); FileSystem fs = table.getMetaClient().getFs(); while (iter.hasNext()) { Pair<String, CleanFileInfo> partitionDelFileTuple = iter.next(); String partitionPath = partitionDelFileTuple.getLeft(); Path deletePath = new Path(partitionDelFileTuple.getRight().getFilePath()); String deletePathStr = deletePath.toString(); Boolean deletedFileResult = null; try { // 删除文件返回是否删除成功 deletedFileResult = deleteFileAndGetResult(fs, deletePathStr); } catch (IOException e) { LOG.error("Delete file failed"); if (!partitionCleanStatMap.containsKey(partitionPath)) { partitionCleanStatMap.put(partitionPath, new PartitionCleanStat(partitionPath)); boolean isBootstrapBasePathFile = partitionDelFileTuple.getRight().isBootstrapBaseFile(); PartitionCleanStat partitionCleanStat = partitionCleanStatMap.get(partitionPath); if (isBootstrapBasePathFile) { // For Bootstrap Base file deletions, store the full file path. partitionCleanStat.addDeleteFilePatterns(deletePath.toString(), true); partitionCleanStat.addDeletedFileResult(deletePath.toString(), deletedFileResult, true); } else { partitionCleanStat.addDeleteFilePatterns(deletePath.getName(), false); partitionCleanStat.addDeletedFileResult(deletePath.getName(), deletedFileResult, false); return partitionCleanStatMap.entrySet().stream().map(e -> Pair.of(e.getKey(), e.getValue())); protected static Boolean deleteFileAndGetResult(FileSystem fs, String deletePathStr) throws IOException { Path deletePath = new Path(deletePathStr); LOG.debug("Working on delete path :" + deletePath); try { boolean deleteResult = fs.delete(deletePath, false); if (deleteResult) { LOG.debug("Cleaned file at path :" + deletePath); return deleteResult; } catch (FileNotFoundException fio) { // With cleanPlan being used for retried cleaning operations, its possible to clean a file twice return false; }获取要删除的文件列表这里和策略配置参数有关,并且逻辑相对复杂一点,就先贴一下入口的代码,先不深入,以后单独总结返回AbstractHoodieWriteClient.autoCleanOnCommitOption<HoodieInstant> earliestInstant = planner.getEarliestCommitToRetain(); List<String> partitionsToClean = planner.getPartitionPathsToClean(earliestInstant); planner.getDeletePaths(partitionPathToClean) * Returns earliest commit to retain based on cleaning policy. 
* 根据清理策略返回最早的需要保留的commit public Option<HoodieInstant> getEarliestCommitToRetain() { Option<HoodieInstant> earliestCommitToRetain = Option.empty(); int commitsRetained = config.getCleanerCommitsRetained(); if (config.getCleanerPolicy() == HoodieCleaningPolicy.KEEP_LATEST_COMMITS && commitTimeline.countInstants() > commitsRetained) { earliestCommitToRetain = commitTimeline.nthInstant(commitTimeline.countInstants() - commitsRetained); return earliestCommitToRetain; * Returns list of partitions where clean operations needs to be performed. * 返回清理操作需要执行的分区列表 * @param earliestRetainedInstant New instant to be retained after this cleanup operation * @return list of partitions to scan for cleaning * @throws IOException when underlying file-system throws this exception public List<String> getPartitionPathsToClean(Option<HoodieInstant> earliestRetainedInstant) throws IOException { switch (config.getCleanerPolicy()) { case KEEP_LATEST_COMMITS: return getPartitionPathsForCleanByCommits(earliestRetainedInstant); case KEEP_LATEST_FILE_VERSIONS: return getPartitionPathsForFullCleaning(); default: throw new IllegalStateException("Unknown Cleaner Policy"); * Returns files to be cleaned for the given partitionPath based on cleaning policy. * 基于清理策略根据给出的分区路径返回需要被清理的文件列表 public List<CleanFileInfo> getDeletePaths(String partitionPath) { HoodieCleaningPolicy policy = config.getCleanerPolicy(); List<CleanFileInfo> deletePaths; if (policy == HoodieCleaningPolicy.KEEP_LATEST_COMMITS) { deletePaths = getFilesToCleanKeepingLatestCommits(partitionPath); } else if (policy == HoodieCleaningPolicy.KEEP_LATEST_FILE_VERSIONS) { deletePaths = getFilesToCleanKeepingLatestVersions(partitionPath); } else { throw new IllegalArgumentException("Unknown cleaning policy : " + policy.name()); LOG.info(deletePaths.size() + " patterns used to delete in partition path:" + partitionPath); return deletePaths; }注释代码githubgitee
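For reference, the sketch below shows where the knobs analysed in this article sit when building the write config for the client in Hudi 0.9.0. It is an illustration under stated assumptions, not code from the Hudi examples: in 0.9.0 the cleaner options live in HoodieCompactionConfig, the base path, table name and schema are hypothetical placeholders, and the values shown are the defaults discussed above (KEEP_LATEST_COMMITS, 10 retained commits, auto clean on, async clean off).

import org.apache.hudi.common.model.HoodieCleaningPolicy
import org.apache.hudi.config.{HoodieCompactionConfig, HoodieWriteConfig}

object CleanConfigSketch {
  def main(args: Array[String]): Unit = {
    // Minimal Avro schema just so the builder has something to hold; replace with the real table schema
    val avroSchema =
      """{"type":"record","name":"sample_record","fields":[{"name":"id","type":"string"},{"name":"ts","type":"long"}]}"""

    val writeConfig = HoodieWriteConfig.newBuilder()
      .withPath("/tmp/hoodie/sample_table")   // hypothetical base path
      .withSchema(avroSchema)
      .forTable("sample_table")               // hypothetical table name
      .withCompactionConfig(HoodieCompactionConfig.newBuilder()
        .withAutoClean(true)                  // config.isAutoClean(), checked in autoCleanOnCommit()
        .withAsyncClean(false)                // config.isAsyncClean(): false means clean runs synchronously in postCommit
        .withCleanerPolicy(HoodieCleaningPolicy.KEEP_LATEST_COMMITS)
        .retainCommits(10)                    // KEEP_LATEST_COMMITS: keep all files of the latest 10 commits
        // For the other 0.9.0 policy, use instead:
        // .withCleanerPolicy(HoodieCleaningPolicy.KEEP_LATEST_FILE_VERSIONS)
        // .retainFileVersions(3)             // keep 3 file versions per file group
        .build())
      .build()

    // These getters are the ones read by CleanPlanner.getEarliestCommitToRetain() above
    println(writeConfig.getCleanerPolicy + ", retained commits = " + writeConfig.getCleanerCommitsRetained)
  }
}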
前言前段时间研究了一下Kyuubi,主要是安装配置,适配Spark版本,验证Spark Server HA 的功能,基本都验证通过,但是后续没有实际使用,现在回忆总结一下,避免遗忘。主要适配Spark2.4.5 以及 Spark3.1.2版本,同时验证是否支持Hudi。版本说明目前Kyuubi最新版本为1.4,Kyuubi 1.x 默认不支持Spark2,1.4版本默认Spark版本3.1.2,并且默认支持Hudi,但是因hudi0.9版本不支持Spark3.1.2,所以需要hudi0.10.1hudi0.10.1 已经发布,可以通过 Mvn 下载:https://repo1.maven.org/maven2/org/apache/hudi/hudi-spark3.1.2-bundle_2.12/0.10.1/hudi-spark3.1.2-bundle_2.12-0.10.1.jar也可以自己打包要想支持Spark2可以选择Kyuubi0.7,两个Kyuubi 版本都支持HA,但是0.7版本默认不支持hudi0.7版本打包: git 切换到branch-0.7修改pom,添加<profile> <id>spark-2.4.5</id> <properties> <spark.version>2.4.5</spark.version> <scalatest.version>3.0.3</scalatest.version> </properties> </profile>然后执行打包命令./build/dist --tgz -P spark-2.4.5 打包完成后,生成kyuubi-0.7.0-SNAPSHOT-bin-spark-2.4.5.tar.gzKyuubi1.4下载下载apache-kyuubi-1.4.0-incubating-bin.tgz解压tar -zxvf apache-kyuubi-1.4.0-incubating-bin.tgz -C /opt/这里的路径为 /opt/apache-kyuubi-1.4.0-incubating-bin验证Spark首先确保安装的Spark版本为 spark3.1.2Welcome to ____ __ / __/__ ___ _____/ /__ _\ \/ _ \/ _ `/ __/ '_/ /___/ .__/\_,_/_/ /_/\_\ version 3.1.2 测试Hadoop/Spark环境/usr/hdp/3.1.0.0-78/spark3/bin/spark-submit \ --master yarn \ --class org.apache.spark.examples.SparkPi \ /usr/hdp/3.1.0.0-78/spark3/examples/jars/spark-examples_2.12-3.1.2.jar \ 10输出结果pi is roughly 3.138211138211138 修改Kyuubi配置cd /opt/apache-kyuubi-1.4.0-incubating-bin/conf/ KYUUBI-ENV.SHcp kyuubi-env.sh.template kyuubi-env.sh vi kyuubi-env.sh export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.181-7.b13.el7.x86_64 export SPARK_HOME=/usr/hdp/3.1.0.0-78/spark3 export HADOOP_CONF_DIR=/usr/hdp/3.1.0.0-78/hadoop/etc/hadoop export KYUUBI_JAVA_OPTS="-Xmx10g -XX:+UnlockDiagnosticVMOptions -XX:ParGCCardsPerStrideChunk=4096 -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSConcurrentMTEnabled -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -XX:+CMSClassUnloadingEnabled -XX:+CMSParallelRemarkEnabled -XX:+UseCondCardMark -XX:MaxDirectMemorySize=1024m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=./logs -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintTenuringDistribution -Xloggc:./logs/kyuubi-server-gc-%t.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=5M -XX:NewRatio=3 -XX:MetaspaceSize=512m"KYUUBI-DEFAULTS.CONFcp kyuubi-defaults.conf.template kyuubi-defaults.conf vi kyuubi-defaults.conf kyuubi.frontend.bind.host indata-192-168-44-128.indata.com kyuubi.frontend.bind.port 10009 #kerberos kyuubi.authentication KERBEROS kyuubi.kinit.principal hive/indata-192-168-44-128.indata.com@INDATA.COM kyuubi.kinit.keytab /etc/security/keytabs/hive.service.keytab配置环境变量vi ~/.bashrc export KYUUBI_HOME=/opt/kyuubi/apache-kyuubi-1.4.0-incubating-bin export PATH=$KYUUBI_HOME/bin:$PATH source ~/.bashrc启停Kyuubibin/kyuubi start bin/kyuubi stop端口冲突引用:放弃Spark Thrift Server吧,你需要的是Apache Kyuubi!cat logs/kyuubi-root-org.apache.kyuubi.server.KyuubiServer-indata-*.indata.com.out 22/03/21 09:56:37 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... 
using builtin-java classes where applicable Exception in thread "main" java.net.BindException: Address already in use at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:433) at sun.nio.ch.Net.bind(Net.java:425) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:67) at org.apache.zookeeper.server.NIOServerCnxnFactory.configure(NIOServerCnxnFactory.java:90) at org.apache.kyuubi.zookeeper.EmbeddedZookeeper.initialize(EmbeddedZookeeper.scala:53) at org.apache.kyuubi.server.KyuubiServer$.startServer(KyuubiServer.scala:48) at org.apache.kyuubi.server.KyuubiServer$.main(KyuubiServer.scala:121) at org.apache.kyuubi.server.KyuubiServer.main(KyuubiServer.scala)发现KyuubiServer报错端口占用,但不知道占用的是哪个端口。看下源码,private val zkServer = new EmbeddedZookeeper() def startServer(conf: KyuubiConf): KyuubiServer = { if (!ServiceDiscovery.supportServiceDiscovery(conf)) { zkServer.initialize(conf) zkServer.start() conf.set(HA_ZK_QUORUM, zkServer.getConnectString) conf.set(HA_ZK_ACL_ENABLED, false) val server = new KyuubiServer() server.initialize(conf) server.start() Utils.addShutdownHook(new Runnable { override def run(): Unit = server.stop() }, Utils.SERVER_SHUTDOWN_PRIORITY) server }看下源码,如果检测到没有配置服务发现,就会默认使用的是内嵌的ZooKeeper。当前,我们并没有开启HA模式。所以,会启动一个本地的ZK,而我们当前测试环境已经部署了ZK。所以,基于此,我们还是配置好HA。这样,也可以让我们Kyuubi服务更加可靠。当然其实我们本来就是要配置HA的配置Kyuubi HAkyuubi.ha.enabled true kyuubi.ha.zookeeper.quorum indata-192-168-44-128.indata.com,indata-192-168-44-129.indata.com,indata-192-168-44-130.indata.com kyuubi.ha.zookeeper.client.port 2181 kyuubi.ha.zookeeper.session.timeout 600000beeline连接非HA首先以非HA,即IP:Port的形式测试连接这里用spark3 beeline,hive beeline不兼容/usr/hdp/3.1.0.0-78/spark3/bin/beeline Beeline version 2.3.7 by Apache Hive beeline> !connect jdbc:hive2://indata-192-168-44-130.indata.com:10009/;principal=hive/indata-192-168-44-130.indata.com@INDATA.COM;hive.server2.proxy.user=spark Connecting to jdbc:hive2://indata-192-168-44-130.indata.com:10009/;principal=hive/indata-192-168-44-130.indata.com@INDATA.COM;hive.server2.proxy.user=spark 22/03/22 19:53:39 INFO Utils: Supplied authorities: indata-192-168-44-130.indata.com:10009 22/03/22 19:53:39 INFO Utils: Resolved authority: indata-192-168-44-130.indata.com:10009 22/03/22 19:53:39 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... 
using builtin-java classes where applicable Connected to: Apache Kyuubi (Incubating) (version 1.4.0-incubating) Driver: Hive JDBC (version 2.3.7) Transaction isolation: TRANSACTION_REPEATABLE_READ 0: jdbc:hive2://indata-192-168-44-130.indata.> !connect jdbc:hive2://indata-192-168-44-130.indata.com:10009/;principal=hive/indata-192-168-44-130.indata.com@INDATA.COM;hive.server2.proxy.user=hive Connecting to jdbc:hive2://indata-192-168-44-130.indata.com:10009/;principal=hive/indata-192-168-44-130.indata.com@INDATA.COM;hive.server2.proxy.user=hive 22/03/22 19:54:03 INFO Utils: Supplied authorities: indata-192-168-44-130.indata.com:10009 22/03/22 19:54:03 INFO Utils: Resolved authority: indata-192-168-44-130.indata.com:10009 Connected to: Apache Kyuubi (Incubating) (version 1.4.0-incubating) Driver: Hive JDBC (version 2.3.7) Transaction isolation: TRANSACTION_REPEATABLE_READ这里分别以spark用户和hive用户起了两个程序HAHA连接方式是指,通过Zookeeper地址发现的方式,Kyuubi 1.4 zooKeeperNamespace默认值为kyuubi/usr/hdp/3.1.0.0-78/spark3/bin/beeline !connect jdbc:hive2://indata-192-168-44-128.indata.com,indata-192-168-44-129.indata.com,indata-192-168-44-130.indata.com/default;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=kyuubi;hive.server2.proxy.user=spark Connecting to jdbc:hive2://indata-192-168-44-128.indata.com,indata-192-168-44-129.indata.com,indata-192-168-44-130.indata.com/default;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=kyuubi;hive.server2.proxy.user=spark Enter username for jdbc:hive2://indata-192-168-44-128.indata.com,indata-192-168-44-129.indata.com,indata-192-168-44-130.indata.com/default: Enter password for jdbc:hive2://indata-192-168-44-128.indata.com,indata-192-168-44-129.indata.com,indata-192-168-44-130.indata.com/default: 22/03/23 09:56:30 INFO Utils: Supplied authorities: indata-192-168-44-128.indata.com,indata-192-168-44-129.indata.com,indata-192-168-44-130.indata.com 22/03/23 09:56:30 INFO CuratorFrameworkImpl: Starting 22/03/23 09:56:30 INFO ZooKeeper: Initiating client connection, connectString=indata-192-168-44-128.indata.com,indata-192-168-44-129.indata.com,indata-192-168-44-130.indata.com sessionTimeout=60000 watcher=org.apache.curator.ConnectionState@1224144a 22/03/23 09:56:30 INFO ClientCnxn: Opening socket connection to server indata-192-168-44-130.indata.com/192.168.44.130:2181. 
Will not attempt to authenticate using SASL (unknown error) 22/03/23 09:56:30 INFO ClientCnxn: Socket connection established, initiating session, client: /192.168.44.130:59044, server: indata-192-168-44-130.indata.com/192.168.44.130:2181 22/03/23 09:56:30 INFO ClientCnxn: Session establishment complete on server indata-192-168-44-130.indata.com/192.168.44.130:2181, sessionid = 0x37f2e7f62d60ba1, negotiated timeout = 60000 22/03/23 09:56:30 INFO ConnectionStateManager: State change: CONNECTED 22/03/23 09:56:30 INFO ZooKeeper: Session: 0x37f2e7f62d60ba1 closed 22/03/23 09:56:30 INFO ClientCnxn: EventThread shut down 22/03/23 09:56:30 INFO Utils: Resolved authority: indata-192-168-44-130.indata.com:10009 22/03/23 09:56:30 INFO HiveConnection: Connected to indata-192-168-44-130.indata.com:10009 Connected to: Apache Kyuubi (Incubating) (version 1.4.0-incubating) Driver: Hive JDBC (version 2.3.7) Transaction isolation: TRANSACTION_REPEATABLE_READ根据日志,可以看到,我们的jdbc链接地址为zookeeper,通过zookeeper解析zooKeeperNamespace获取到kyuubi server真正的地址为indata-192-168-44-130.indata.com:10009实际上,当我们配置了HA参数,启动Kyuubi Server时会在zookeeper创建一个地址/kyuubi,让我们来看一下,在zookeeper上存的是什么信息/usr/hdp/3.1.0.0-78/zookeeper/bin/zkCli.sh -server indata-192-168-44-130.indata.com:2181 ls /kyuubi [serviceUri=indata-192-168-44-130.indata.com:10009;version=1.4.0-incubating;sequence=0000000007] get /kyuubi/serviceUri=indata-192-168-44-130.indata.com:10009;version=1.4.0-incubating;sequence=0000000007 hive.server2.thrift.sasl.qop=auth;hive.server2.thrift.bind.host=indata-192-168-44-130.indata.com;hive.server2.transport.mode=binary;hive.server2.authentication=KERBEROS;hive.server2.thrift.port=10009;hive.server2.authentication.kerberos.principal=hive/indata-192-168-44-130.indata.com@INDATA.COM cZxid = 0x300029a08 ctime = Tue Mar 22 19:57:36 CST 2022 mZxid = 0x300029a08 mtime = Tue Mar 22 19:57:36 CST 2022 pZxid = 0x300029a08 cversion = 0 dataVersion = 0 aclVersion = 0 ephemeralOwner = 0x37f2e7f62d60b64 dataLength = 295 numChildren = 0可以看到,不仅保存了kyuubi server的ip、端口,还保存了kerberos票据等信息。zooKeeperNamespace默认值为kyuubi,我们可以通过修改配置参数更改它vi conf/kyuubi-defaults.conf kyuubi.ha.zookeeper.namespace=kyuubi_cluster001 kyuubi.session.engine.initialize.timeout=180000 重启kyuubi server,在通过zk查看,发现原来的/kyuubi内容为空,新建了一个/kyuubi_cluster001,内容和之前一样。再另一台机器再起一个kyuubi server,这样才能达到真正的HA效果,再在zk看一下,我们发现,内容已经变成两个kyuubi server了ls /kyuubi_cluster001 [serviceUri=indata-192-168-44-130.indata.com:10009;version=1.4.0-incubating;sequence=0000000006, serviceUri=indata-192-168-44-129.indata.com:10009;version=1.4.0-incubating;sequence=0000000009]查看具体内容,需要以逗号分隔,分开查看,这里就不贴详细信息了get /kyuubi_cluster001/serviceUri=indata-192-168-44-130.indata.com:10009;version=1.4.0-incubating;sequence=0000000006 get /kyuubi_cluster001/serviceUri=indata-192-168-44-129.indata.com:10009;version=1.4.0-incubating;sequence=0000000009beeline 多登陆几次,就会发现,会随机选择两个kyuubi server中的一个,这样就做到了HA,主要这里的zooKeeperNamespac改为了kyuubi_cluster001!connect jdbc:hive2://indata-192-168-44-128.indata.com,indata-192-168-44-129.indata.com,indata-192-168-44-130.indata.com/default;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=kyuubi_cluster001;hive.server2.proxy.user=spark 22/03/23 11:51:43 INFO Utils: Resolved authority: indata-192-168-44-130.indata.com:10009 22/03/23 11:51:43 INFO HiveConnection: Connected to indata-192-168-44-130.indata.com:10009 22/03/23 13:53:19 INFO Utils: Resolved authority: indata-192-168-44-129.indata.com:10009 22/03/23 13:53:19 INFO HiveConnection: Connected to indata-192-168-44-129.indata.com:10009支持HUDI注意首先配好 spark 
hudi环境,这里的hudi jar包需要适配spark3.1.2,开头有说明!connect jdbc:hive2://indata-192-168-44-128.indata.com,indata-192-168-44-129.indata.com,indata-192-168-44-130.indata.com/default;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=kyuubi_cluster001;hive.server2.proxy.user=spark#spark.yarn.queue=default;spark.sql.extensions=org.apache.spark.sql.hudi.HoodieSparkSessionExtension;spark.serializer=org.apache.spark.serializer.KryoSerializer;spark.executor.instances=1;spark.executor.memory=1g;kyuubi.engine.share.level=CONNECTION 通过jdbc_url中指定spark的参数spark.yarn.queue 队列spark.executor.instances executor的数量 默认2spark.executor.memory executor的内存大小kyuubi.engine.share.level kyuubi的级别,默认user支持hudi是通过这俩参数:spark.sql.extensions=org.apache.spark.sql.hudi.HoodieSparkSessionExtension;spark.serializer=org.apache.spark.serializer.KryoSerializer通过上面的参数连接kyuubi server并启动spark server程序后,验证hudi sql功能create table test_hudi_table ( id int, name string, price double, ts long, dt string ) using hudi partitioned by (dt) options ( primaryKey = 'id', preCombineField = 'ts', type = 'cow' insert into test_hudi_table values (1,'hudi',10,100,'2021-05-05'),(2,'hudi',10,100,'2021-05-05'); update test_hudi_table set price = 20.0 where id = 1; select * from test_hudi_table;简单的验证一下hudi的sql没有问题就行kyuubi 0.7tar -zxvf kyuubi-0.7.0-SNAPSHOT-bin-spark-2.4.5.tar.gz -C /opt/ cd /opt/kyuubi-0.7.0-SNAPSHOT-bin-spark-2.4.5/ vi bin/kyuubi-env.sh export SPARK_HOME=/usr/hdp/3.1.0.0-78/spark2 ## 注意需要放在# Find the spark-submit之前 这里主要是为了适配spark2, 本人的spark2版本为spark2.4.5,前面有说明如何打包kyuubi 0.7,这里打包完后的包名为:kyuubi-0.7.0-SNAPSHOT-bin-spark-2.4.5.tar.gz配置解决: No such file or directory因为tar包是在windows上打的,脚本和Linux不兼容,需要进行修改,解决方法用vim打开该sh文件,输入: :set ff //回车,显示fileformat=dos :set ff=unix //重新设置下文件格式 :wq //保存退出 再执行,就不会再提示No such file or directory这个问题了。bin 目录下面的脚本都执行一遍启动Kyuubi Serverbin/start-kyuubi.sh --master yarn --deploy-mode client --driver-memory 2g --conf spark.kyuubi.frontend.bind.port=10010 --conf spark.kyuubi.authentication=KERBEROS --conf spark.kyuubi.ha.enabled=true \ --conf spark.kyuubi.ha.zk.quorum=indata-192-168-44-128.indata.com,indata-192-168-44-129.indata.com,indata-192-168-44-130.indata.com --conf spark.kyuubi.ha.zk.namespace=kyuubi_cluster002 --conf spark.kyuubi.ha.mode=load-balance \ --conf spark.kyuubi.frontend.bind.host=indata-192-168-44-130.indata.com --conf spark.yarn.keytab=/etc/security/keytabs/hive.service.keytab --conf spark.yarn.principal=hive/indata-192-168-44-130.indata.com@INDATA.COMbeeline连接IP:PORT!connect jdbc:hive2://indata-192-168-44-130.indata.com:10010/;principal=hive/indata-192-168-44-130.indata.com@INDATA.COM;hive.server2.proxy.user=spark Connecting to jdbc:hive2://indata-192-168-44-130.indata.com:10010/;principal=hive/indata-192-168-44-130.indata.com@INDATA.COM;hive.server2.proxy.user=spark Enter username for jdbc:hive2://indata-192-168-44-130.indata.com:10010/;principal=hive/indata-192-168-44-130.indata.com@INDATA.COM;hive.server2.proxy.user=spark: Enter password for jdbc:hive2://indata-192-168-44-130.indata.com:10010/;principal=hive/indata-192-168-44-130.indata.com@INDATA.COM;hive.server2.proxy.user=spark: 22/03/25 14:37:25 INFO Utils: Supplied authorities: indata-192-168-44-130.indata.com:10010 22/03/25 14:37:25 INFO Utils: Resolved authority: indata-192-168-44-130.indata.com:10010 22/03/25 14:37:25 INFO HiveConnection: Will try to open client transport with JDBC Uri: jdbc:hive2://indata-192-168-44-130.indata.com:10010/;principal=hive/indata-192-168-44-130.indata.com@INDATA.COM;hive.server2.proxy.user=spark Connected to: Spark SQL 
(version 2.4.5) Driver: Hive JDBC (version 1.2.1) Transaction isolation: TRANSACTION_REPEATABLE_READZK/usr/hdp/3.1.0.0-78/spark2/bin/beeline !connect jdbc:hive2://indata-192-168-44-128.indata.com,indata-192-168-44-129.indata.com,indata-192-168-44-130.indata.com/default;principal=hive/indata-192-168-44-130.indata.com@INDATA.COM;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=kyuubi_cluster002 22/03/23 17:01:00 INFO Utils: Resolved authority: null:0 22/03/23 17:01:00 INFO ClientCnxn: EventThread shut down for session: 0x37f2e7f62d60bdb 22/03/23 17:01:00 INFO HiveConnection: Will try to open client transport with JDBC Uri: jdbc:hive2://null:0/default;principal=hive/indata-192-168-44-130.indata.com@INDATA.COM;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=kyuubi_cluster002 然后我们发现没有连接成功,解析zk地址为null:0,我们在zk客户端,发现zk地址里的内容是正确的,那么就是beeline客户端解析有问题,然后我用Java 连接 Kereros认证下的Spark Thrift Server/Hive Server总结连接发现是成功的,只需要将程序中的SPARK_JDBC_URL 改为 jdbc:hive2://indata-192-168-44-128.indata.com,indata-192-168-44-129.indata.com,indata-192-168-44-130.indata.com/default;principal=hive/indata-192-168-44-130.indata.com@INDATA.COM;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=kyuubi_cluster002,前提pom依赖版本要对应好这样进一步验证我们的server没有问题,然后我试着将maven仓库中的hive-jdbc-1.2.1.jar放到 $SPARK_HOME/jars下,然后将原来的hive-jdbc-1.21.2.3.1.0.0-78.jar做一个备份并删掉mv hive-jdbc-1.21.2.3.1.0.0-78.jar hive-jdbc-1.21.2.3.1.0.0-78.jar.bak /usr/hdp/3.1.0.0-78/spark2/bin/beeline !connect jdbc:hive2://indata-192-168-44-128.indata.com,indata-192-168-44-129.indata.com,indata-192-168-44-130.indata.com/default;principal=hive/indata-192-168-44-130.indata.com@INDATA.COM;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=kyuubi_cluster002;hive.server2.proxy.user=hive Utils: Resolved authority: indata-192-168-44-130.indata.com:10010 22/03/25 14:41:02 INFO HiveConnection: Will try to open client transport with JDBC Uri: jdbc:hive2://indata-192-168-44-130.indata.com:10010/default;principal=hive/indata-192-168-44-130.indata.com@INDATA.COM;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=kyuubi_cluster002;hive.server2.proxy.user=hive Connected to: Spark SQL (version 2.4.5) Driver: Hive JDBC (version 1.2.1)然后发现正确解析了zk里面的内容,连接成功!!当然这里碰巧替换了hive-jdbc-1.2.1.jar就成功了,如果不行的话,可以自己下一个开源的spark2.4.5,然后考一个之前的spark备份,试着将备份中所有的jar包都替换为开源版本,再试着用备份路径下的beeline命令spark 2.4.5 下载地址:http://archive.apache.org/dist/spark/spark-2.4.5/程序连接异常上面的连接串程序运行是正常的,当加了hive.server2.proxy.user=spark时,就会抛出下面的异常org.apache.hive.service.cli.HiveSQLException: Failed to validate proxy privilege of hive for spark at org.apache.hive.jdbc.Utils.verifySuccess(Utils.java:256) at org.apache.hive.jdbc.Utils.verifySuccess(Utils.java:247) at org.apache.hive.jdbc.HiveConnection.openSession(HiveConnection.java:586) at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:192) at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:105) at java.sql.DriverManager.getConnection(DriverManager.java:664) at java.sql.DriverManager.getConnection(DriverManager.java:270) at com.dkl.blog.SparkThriftServerDemoWithKerberos_2.jdbcDemo(SparkThriftServerDemoWithKerberos_2.java:52) at com.dkl.blog.SparkThriftServerDemoWithKerberos_2.main(SparkThriftServerDemoWithKerberos_2.java:46) Caused by: java.lang.RuntimeException: yaooqinn.kyuubi.KyuubiSQLException:Failed to validate proxy privilege of hive for spark at yaooqinn.kyuubi.auth.KyuubiAuthFactory$.verifyProxyAccess(KyuubiAuthFactory.scala:190) at yaooqinn.kyuubi.server.FrontendService.getProxyUser(FrontendService.scala:210) at 
yaooqinn.kyuubi.server.FrontendService.getUserName(FrontendService.scala:188) at yaooqinn.kyuubi.server.FrontendService.getSessionHandle(FrontendService.scala:229) at yaooqinn.kyuubi.server.FrontendService.OpenSession(FrontendService.scala:248) at org.apache.hive.service.cli.thrift.TCLIService$Processor$OpenSession.getResult(TCLIService.java:1253) at org.apache.hive.service.cli.thrift.TCLIService$Processor$OpenSession.getResult(TCLIService.java:1238) at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor.process(HadoopThriftAuthBridge.java:692) at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Caused by: org.apache.hadoop.security.authorize.AuthorizationException: Unauthorized connection for super-user: hive from IP 10.201.30.73这是因为我本地机器没有权限修改core-site.xmlhadoop.proxyuser.hive.groups=* hadoop.proxyuser.hive.hosts=* 然后重启HDFS服务,再重新运行程序就可以了当然我这里启动Kyuubi使用的票据为hive用户,就需要修改hive的权限;还有在生产环境不要设置权限为*HA在另一台服务器上,配置相同的kyuubi,并启动, bin/start-kyuubi.sh --master yarn --deploy-mode client --driver-memory 2g --conf spark.kyuubi.frontend.bind.port=10010 --conf spark.kyuubi.authentication=KERBEROS --conf spark.kyuubi.ha.enabled=true --conf spark.kyuubi.ha.zk.quorum=indata-192-168-44-128.indata.com,indata-192-168-44-129.indata.com,indata-192-168-44-130.indata.com --conf spark.kyuubi.ha.zk.namespace=kyuubi_cluster002 --conf spark.kyuubi.ha.mode=load-balance --conf spark.kyuubi.frontend.bind.host=indata-192-168-44-129.indata.com --conf spark.yarn.keytab=/etc/security/keytabs/hive.service.keytab --conf spark.yarn.principal=hive/indata-192-168-44-129.indata.com@INDATA.COM然后beeline重新多次连接后,会发现当用这台服务器的principal连接连接另外一台服务器的kyuubi server时,会报kerberos认证的错误,我们只需要将jdbc连接串中的principal改为hive/_HOST@INDATA.COM,就可以成功随机连接其中的一台Kyuubi server了,但是在Kyuubi 1.4 Spark 3.1.2没有这个问题 !connect jdbc:hive2://indata-192-168-44-128.indata.com,indata-192-168-44-129.indata.com,indata-192-168-44-130.indata.com/default;principal=hive/_HOST@INDATA.COM;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=kyuubi_cluster002;hive.server2.proxy.user=hive bin/stop-kyuubi.sh
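For completeness, a minimal Scala sketch of opening the same HA connection from code rather than beeline. It assumes the Hive JDBC 2.3.x driver is on the classpath, that a valid Kerberos ticket already exists (for example via kinit with the hive keytab), and that the driver resolves the server principal from the configs published in ZooKeeper, as in the beeline sessions above; the hosts, the kyuubi_cluster001 namespace and the proxy user are the ones configured in this article.

import java.sql.DriverManager

object KyuubiHaJdbcDemo {
  def main(args: Array[String]): Unit = {
    Class.forName("org.apache.hive.jdbc.HiveDriver")
    // ZooKeeper discovery URL, mirroring the beeline example above
    val url = "jdbc:hive2://indata-192-168-44-128.indata.com,indata-192-168-44-129.indata.com," +
      "indata-192-168-44-130.indata.com/default;serviceDiscoveryMode=zooKeeper;" +
      "zooKeeperNamespace=kyuubi_cluster001;hive.server2.proxy.user=spark"
    val conn = DriverManager.getConnection(url) // Kerberos ticket must already be in the ticket cache
    val stmt = conn.createStatement()
    val rs = stmt.executeQuery("show databases")
    while (rs.next()) println(rs.getString(1))
    rs.close(); stmt.close(); conn.close()
  }
}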
前言本文总结如何只用SQL迁移关系型数据库中的表转化为Hudi表,这样的意义在于方便项目上对Spark、Hudi等不熟悉的人员运维。Spark Thrift Server首先将Orace的驱动拷贝至Spark jars目录下启动Spark Thrift Server 扩展支持Hudi/usr/hdp/3.1.0.0-78/spark2/bin/spark-submit --master yarn --deploy-mode client --executor-memory 2G --num-executors 6 --executor-cores 2 --driver-memory 4G --driver-cores 2 --class org.apache.spark.sql.hive.thriftserver.HiveThriftServer2 --name Thrift-20003 --conf 'spark.serializer=org.apache.spark.serializer.KryoSerializer' --conf 'spark.sql.extensions=org.apache.spark.sql.hudi.HoodieSparkSessionExtension' --principal spark/indata-10-110-105-163.indata.com@INDATA.COM --keytab /etc/security/keytabs/spark.service.keytab --hiveconf hive.server2.thrift.http.port=20003 >~/server.log 2>&1 &Beeline连接/usr/hdp/3.1.0.0-78/spark2/bin/beeline -u "jdbc:hive2://10.110.105.163:20003/hudi;principal=HTTP/indata-10-110-105-163.indata.com@INDATA.COM?hive.server2.transport.mode=http;hive.server2.thrift.http.path=cliservice"SQL 转化首先建好Hive数据库Oracle 表结构及数据将Oracle表映射为临时表CREATE TEMPORARY VIEW temp_test_clob USING org.apache.spark.sql.jdbc OPTIONS ( url "jdbc:oracle:thin:@ip:1521:orcl", dbtable "TEST_CLOB", user 'userName', password 'password' );字段类型字段匹配Hudi Spark SQL 支持 CTAS语法create table test_hudi.test_clob_oracle_sync using hudi options(primaryKey = 'ID',preCombineField = 'ID') as select * from temp_test_clob;注意这里的ID为大写,是因为在Oracle表中的字段名为大写show create table test_clob_oracle_sync; +----------------------------------------------------+ | createtab_stmt | +----------------------------------------------------+ | CREATE EXTERNAL TABLE `test_clob_oracle_sync`( | | `_hoodie_commit_time` string COMMENT '', | | `_hoodie_commit_seqno` string COMMENT '', | | `_hoodie_record_key` string COMMENT '', | | `_hoodie_partition_path` string COMMENT '', | | `_hoodie_file_name` string COMMENT '', | | `id` decimal(38,0) COMMENT '', | | `name` string COMMENT '', | | `temp_clob` string COMMENT '', | | `temp_blob` binary COMMENT '') | | ROW FORMAT SERDE | | 'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe' | | WITH SERDEPROPERTIES ( | | 'hoodie.query.as.ro.table'='false', | | 'path'='hdfs://cluster1/warehouse/tablespace/managed/hive/test_hudi.db/test_clob_oracle_sync') | | STORED AS INPUTFORMAT | | 'org.apache.hudi.hadoop.HoodieParquetInputFormat' | | OUTPUTFORMAT | | 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat' | | LOCATION | | 'hdfs://cluster1/warehouse/tablespace/managed/hive/test_hudi.db/test_clob_oracle_sync' | | TBLPROPERTIES ( | | 'last_commit_time_sync'='20220215112046', | | 'spark.sql.sources.provider'='hudi', | | 'spark.sql.sources.schema.numParts'='1', | | 'spark.sql.sources.schema.part.0'='{"type":"struct","fields":[{"name":"_hoodie_commit_time","type":"string","nullable":true,"metadata":{}},{"name":"_hoodie_commit_seqno","type":"string","nullable":true,"metadata":{}},{"name":"_hoodie_record_key","type":"string","nullable":true,"metadata":{}},{"name":"_hoodie_partition_path","type":"string","nullable":true,"metadata":{}},{"name":"_hoodie_file_name","type":"string","nullable":true,"metadata":{}},{"name":"ID","type":"decimal(38,0)","nullable":true,"metadata":{}},{"name":"NAME","type":"string","nullable":true,"metadata":{}},{"name":"TEMP_CLOB","type":"string","nullable":true,"metadata":{}},{"name":"TEMP_BLOB","type":"binary","nullable":true,"metadata":{}}]}', | | 'transient_lastDdlTime'='1644895263') | +----------------------------------------------------+ select * from test_clob_oracle_sync; 
+--------------------------------------------+---------------------------------------------+-------------------------------------------+-----------------------------------------------+----------------------------------------------------+---------------------------+-----------------------------+----------------------------------+----------------------------------+ | test_clob_oracle_sync._hoodie_commit_time | test_clob_oracle_sync._hoodie_commit_seqno | test_clob_oracle_sync._hoodie_record_key | test_clob_oracle_sync._hoodie_partition_path | test_clob_oracle_sync._hoodie_file_name | test_clob_oracle_sync.id | test_clob_oracle_sync.name | test_clob_oracle_sync.temp_clob | test_clob_oracle_sync.temp_blob | +--------------------------------------------+---------------------------------------------+-------------------------------------------+-----------------------------------------------+----------------------------------------------------+---------------------------+-----------------------------+----------------------------------+----------------------------------+ | 20220215112046 | 20220215112046_0_1 | ID:1 | | 1a2d42c4-5d1e-404a-a09f-cf7705552634-0_0-2027-0_20220215112046.parquet | 1 | update_result | inspurname | | | 20220215112046 | 20220215112046_1_1 | ID:2 | | b3e7ef23-01ee-4558-988b-a946f6947294-0_1-2028-0_20220215112046.parquet | 2 | test_name | test_cob_content | | +--------------------------------------------+---------------------------------------------+-------------------------------------------+-----------------------------------------------+----------------------------------------------------+---------------------------+-----------------------------+----------------------------------+----------------------------------+ 2 rows selected (0.531 seconds)可以看到这里Spark为字段类型做了适配提前建表提前建表的好处是,字段类型可以自己掌握create table test_hudi.test_clob_oracle_sync1( id string, name string, temp_clob string, temp_blob string ) using hudi options ( primaryKey = 'id', preCombineField = 'id', type = 'cow' insert into test_hudi.test_clob_oracle_sync1 select * from temp_test_clob; show create table test_clob_oracle_sync1; +----------------------------------------------------+ | createtab_stmt | +----------------------------------------------------+ | CREATE TABLE `test_clob_oracle_sync1`( | | `_hoodie_commit_time` string, | | `_hoodie_commit_seqno` string, | | `_hoodie_record_key` string, | | `_hoodie_partition_path` string, | | `_hoodie_file_name` string, | | `id` string, | | `name` string, | | `temp_clob` string, | | `temp_blob` string) | | ROW FORMAT SERDE | | 'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe' | | WITH SERDEPROPERTIES ( | | 'path'='hdfs://cluster1/warehouse/tablespace/managed/hive/test_hudi.db/test_clob_oracle_sync1', | | 'preCombineField'='id', | | 'primaryKey'='id', | | 'type'='cow') | | STORED AS INPUTFORMAT | | 'org.apache.hudi.hadoop.HoodieParquetInputFormat' | | OUTPUTFORMAT | | 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat' | | LOCATION | | 'hdfs://cluster1/warehouse/tablespace/managed/hive/test_hudi.db/test_clob_oracle_sync1' | | TBLPROPERTIES ( | | 'last_commit_time_sync'='20220215113952', | | 'spark.sql.create.version'='2.4.5', | | 'spark.sql.sources.provider'='hudi', | | 'spark.sql.sources.schema.numParts'='1', | | 
'spark.sql.sources.schema.part.0'='{"type":"struct","fields":[{"name":"_hoodie_commit_time","type":"string","nullable":true,"metadata":{}},{"name":"_hoodie_commit_seqno","type":"string","nullable":true,"metadata":{}},{"name":"_hoodie_record_key","type":"string","nullable":true,"metadata":{}},{"name":"_hoodie_partition_path","type":"string","nullable":true,"metadata":{}},{"name":"_hoodie_file_name","type":"string","nullable":true,"metadata":{}},{"name":"id","type":"string","nullable":true,"metadata":{}},{"name":"name","type":"string","nullable":true,"metadata":{}},{"name":"temp_clob","type":"string","nullable":true,"metadata":{}},{"name":"temp_blob","type":"string","nullable":true,"metadata":{}}]}', | | 'transient_lastDdlTime'='1644896342') | +----------------------------------------------------+ 30 rows selected (0.178 seconds) select * from test_clob_oracle_sync1; +---------------------------------------------+----------------------------------------------+--------------------------------------------+------------------------------------------------+----------------------------------------------------+----------------------------+------------------------------+-----------------------------------+-----------------------------------+ | test_clob_oracle_sync1._hoodie_commit_time | test_clob_oracle_sync1._hoodie_commit_seqno | test_clob_oracle_sync1._hoodie_record_key | test_clob_oracle_sync1._hoodie_partition_path | test_clob_oracle_sync1._hoodie_file_name | test_clob_oracle_sync1.id | test_clob_oracle_sync1.name | test_clob_oracle_sync1.temp_clob | test_clob_oracle_sync1.temp_blob | +---------------------------------------------+----------------------------------------------+--------------------------------------------+------------------------------------------------+----------------------------------------------------+----------------------------+------------------------------+-----------------------------------+-----------------------------------+ | 20220215113952 | 20220215113952_0_1 | id:1 | | a63e5288-7396-4303-8cd9-8a6ef6423932-0_0-73-3636_20220215113952.parquet | 1 | update_result | test_cob_content | | | 20220215113952 | 20220215113952_0_2 | id:2 | | a63e5288-7396-4303-8cd9-8a6ef6423932-0_0-73-3636_20220215113952.parquet | 2 | test_name | test_cob_content | | +---------------------------------------------+----------------------------------------------+--------------------------------------------+------------------------------------------------+----------------------------------------------------+----------------------------+------------------------------+-----------------------------------+-----------------------------------+ 2 rows selected (0.385 seconds)可以根据需求选择是否配置主键、分区字段等;另hudi0.9.0版本支持非主键表,当前0.10版本主键字段必填,未来的版本也许会有所变化MySQLCREATE TEMPORARY VIEW temp_mysql_table USING org.apache.spark.sql.jdbc OPTIONS ( url "jdbc:mysql://ip:3306/default?useUnicode=true&characterEncoding=utf-8", dbtable "taleName", user 'userName', password 'password'
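The same migration can also be driven from a SparkSession instead of beeline. The sketch below is only an outline: it reuses the CREATE TEMPORARY VIEW and CTAS statements shown above, and the JDBC URL, credentials and table names are placeholders to be replaced with real values.

import org.apache.spark.sql.SparkSession

object RdbmsToHudi {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("rdbms-to-hudi")
      .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
      .config("spark.sql.extensions", "org.apache.spark.sql.hudi.HoodieSparkSessionExtension")
      .enableHiveSupport()
      .getOrCreate()

    // Map the source Oracle table as a JDBC temporary view (placeholders below)
    spark.sql(
      """CREATE TEMPORARY VIEW temp_test_clob
        |USING org.apache.spark.sql.jdbc
        |OPTIONS (
        |  url "jdbc:oracle:thin:@ip:1521:orcl",
        |  dbtable "TEST_CLOB",
        |  user 'userName',
        |  password 'password'
        |)""".stripMargin)

    // CTAS into a Hudi table, as in the beeline example above
    spark.sql(
      """CREATE TABLE test_hudi.test_clob_oracle_sync
        |USING hudi
        |OPTIONS (primaryKey = 'ID', preCombineField = 'ID')
        |AS SELECT * FROM temp_test_clob""".stripMargin)

    spark.stop()
  }
}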
前言本文为组内同事整理,这里稍作改动记录。本文主要修改Spark源码,实现Spark Spark Thrift Server 注册到到ZK,通过ZK连接实现负载均衡;另外可以通过使用Kyuubi实现HA,这里不做详细描述背景Spark ThriftServer 不支持Zookeeper连接,不能实现负载均衡。解决方案:修改Spark ThriftServer源码,使其支持Zookeeper连接版本信息:Spark3.1.2 ;Hive2.3.7Spark ThriftServer 启动命令nohup spark-submit --master yarn --deploy-mode client --num-executors 2 --executor-cores 1 --executor-memory 1G --class org.apache.spark.sql.hive.thriftserver.HiveThriftServer2 --name SparkThriftServer_1 spark-internal --hiveconf hive.server2.thrift.http.port=20003 >> /var/log/spark2/20003.log 2>&1 < /dev/null &Spark 源码debug./bin/spark-submit --master yarn --deploy-mode client --num-executors 2 --executor-cores 1 --executor-memory 1G --class org.apache.spark.sql.hive.thriftserver.HiveThriftServer2 --name SparkThriftServer_1 spark-internal --hiveconf hive.server2.thrift.http.port=20003 --driver-java-options "-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=5005"Spark 连接单节点!connect jdbc:hive2://indata-1192-168-44-128:20003/default;principal=HTTP/indata-1192-168-44-128@INDATA.COM?hive.server2.transport.mode=http;hive.server2.thrift.http.path=cliservice;Zookeeper!connect jdbc:hive2://192-168-44-129.indata.com:2181,indata-1192-168-44-128:2181,indata-192-168-44-130.indata.com:2181/default;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2_zk Zookeeper 使用连接: /usr/hdp/3.1.0.0-78/zookeeper/bin/zkCli.sh -server indata-1192-168-44-128:2181 使用: [zk: indata-1192-168-44-128:2181(CONNECTED) 2] ls /hiveserver2_zk [serverUri=indata-1192-168-44-128:20004;version=2.3.7;sequence=0000000022] [zk: indata-1192-168-44-128:2181(CONNECTED) 3] get /hiveserver2_zk/serverUri=indata-1192-168-44-128:20004;version=2.3.7;sequence=0000000022 hive.server2.authentication=KERBEROS;hive.server2.transport.mode=http;hive.server2.thrift.http.path=cliservice;hive.server2.thrift.http.port=20004;hive.server2.thrift.bind.host=indata-1192-168-44-128;hive.server2.use.SSL=false;hive.server2.authentication.kerberos.principal=spark/_HOST@INDATA.COM cZxid = 0x200001d9d ctime = Fri Feb 11 08:37:57 CST 2022 mZxid = 0x200001d9d mtime = Fri Feb 11 08:37:57 CST 2022 pZxid = 0x200001d9d cversion = 0 dataVersion = 0 aclVersion = 0 ephemeralOwner = 0x37ed86adc050206 dataLength = 306 numChildren = 0Spark 源码修改pom添加 sql/hive-thriftserver 模块,使其编译,便于修改代码 <module>sql/hive-thriftserver</module> HiveThriftServer2修改包路径:sql\hive-thriftserver\src\main\scala\org\apache\spark\sql\hive\thriftserver\HiveThriftServer2.scala 源码添加zookeeper支持,增加hiveConf,如果开启zookeeper,通过反射调用 addServerInstanceToZooKeeper、removeServerInstanceFromZooKeeper 方法private[hive] class HiveThriftServer2(sqlContext: SQLContext) extends HiveServer2 with ReflectedCompositeService { // state is tracked internally so that the server only attempts to shut down if it successfully // started, and then once only. 
private val started = new AtomicBoolean(false) var hiveConf: HiveConf = _ override def init(hiveConf: HiveConf): Unit = { this.hiveConf = hiveConf val sparkSqlCliService = new SparkSQLCLIService(this, sqlContext) setSuperField(this, "cliService", sparkSqlCliService) addService(sparkSqlCliService) val thriftCliService = if (isHTTPTransportMode(hiveConf)) { new ThriftHttpCLIService(sparkSqlCliService) } else { new ThriftBinaryCLIService(sparkSqlCliService) setSuperField(this, "thriftCLIService", thriftCliService) addService(thriftCliService) initCompositeService(hiveConf) private def isHTTPTransportMode(hiveConf: HiveConf): Boolean = { val transportMode = hiveConf.getVar(ConfVars.HIVE_SERVER2_TRANSPORT_MODE) transportMode.toLowerCase(Locale.ROOT).equals("http") override def start(): Unit = { super.start() started.set(true) hiveConf.set(HiveConf.ConfVars.HIVE_SERVER2_THRIFT_BIND_HOST.varname, getServerHost) if (this.hiveConf.getBoolVar( ConfVars.HIVE_SERVER2_SUPPORT_DYNAMIC_SERVICE_DISCOVERY)) { invoke(classOf[HiveServer2], this, "addServerInstanceToZooKeeper", classOf[HiveConf] -> this.hiveConf) override def stop(): Unit = { if (started.getAndSet(false)) { if (this.hiveConf.getBoolVar( ConfVars.HIVE_SERVER2_SUPPORT_DYNAMIC_SERVICE_DISCOVERY)) { invoke(classOf[HiveServer2], this, "removeServerInstanceFromZooKeeper") super.stop() }HiveServer2包路径:spark\sql\hive-thriftserver\src\main\java\org\apache\hive\service\server\HiveServer2.java由于spark不支持zookeeper,需要添加 addServerInstanceToZooKeeper、removeServerInstanceFromZooKeeper方法/** * StartOptionExecutor: starts HiveServer2. * This is the default executor, when no option is specified. static class StartOptionExecutor implements ServerOptionsExecutor { @Override public void execute() { try { startHiveServer2(); } catch (Throwable t) { LOG.error("Error starting HiveServer2", t); System.exit(-1); private String getServerInstanceURI() throws Exception { if ((thriftCLIService == null) || (thriftCLIService.getServerIPAddress() == null)) { throw new Exception("Unable to get the server address; it hasn't been initialized yet."); return thriftCLIService.getServerIPAddress().getHostName() + ":" + thriftCLIService.getPortNumber(); * For a kerberized cluster, we dynamically set up the client's JAAS conf. 
* @param hiveConf * @return * @throws Exception private void setUpZooKeeperAuth(HiveConf hiveConf) throws Exception { if (UserGroupInformation.isSecurityEnabled()) { String principal = hiveConf.getVar(ConfVars.HIVE_SERVER2_KERBEROS_PRINCIPAL); if (principal.isEmpty()) { throw new IOException("HiveServer2 Kerberos principal is empty"); String keyTabFile = hiveConf.getVar(ConfVars.HIVE_SERVER2_KERBEROS_KEYTAB); if (keyTabFile.isEmpty()) { throw new IOException("HiveServer2 Kerberos keytab is empty"); // Install the JAAS Configuration for the runtime Utils.setZookeeperClientKerberosJaasConfig(principal, keyTabFile); * ACLProvider for providing appropriate ACLs to CuratorFrameworkFactory private final ACLProvider zooKeeperAclProvider = new ACLProvider() { @Override public List<ACL> getDefaultAcl() { List<ACL> nodeAcls = new ArrayList<ACL>(); if (UserGroupInformation.isSecurityEnabled()) { // Read all to the world nodeAcls.addAll(Ids.READ_ACL_UNSAFE); // Create/Delete/Write/Admin to the authenticated user nodeAcls.add(new ACL(Perms.ALL, Ids.AUTH_IDS)); } else { // ACLs for znodes on a non-kerberized cluster // Create/Read/Delete/Write/Admin to the world nodeAcls.addAll(Ids.OPEN_ACL_UNSAFE); return nodeAcls; @Override public List<ACL> getAclForPath(String path) { return getDefaultAcl(); * Adds a server instance to ZooKeeper as a znode. * @param hiveConf * @throws Exception private void addServerInstanceToZooKeeper(HiveConf hiveConf) throws Exception { String zooKeeperEnsemble = ZooKeeperHiveHelper.getQuorumServers(hiveConf); String rootNamespace = hiveConf.getVar(HiveConf.ConfVars.HIVE_SERVER2_ZOOKEEPER_NAMESPACE); String instanceURI = getServerInstanceURI(); setUpZooKeeperAuth(hiveConf); int sessionTimeout = (int) hiveConf.getTimeVar(HiveConf.ConfVars.HIVE_ZOOKEEPER_SESSION_TIMEOUT, TimeUnit.MILLISECONDS); int baseSleepTime = (int) hiveConf.getTimeVar(HiveConf.ConfVars.HIVE_ZOOKEEPER_CONNECTION_BASESLEEPTIME, TimeUnit.MILLISECONDS); int maxRetries = hiveConf.getIntVar(HiveConf.ConfVars.HIVE_ZOOKEEPER_CONNECTION_MAX_RETRIES); // Create a CuratorFramework instance to be used as the ZooKeeper client // Use the zooKeeperAclProvider to create appropriate ACLs zooKeeperClient = CuratorFrameworkFactory.builder().connectString(zooKeeperEnsemble) .sessionTimeoutMs(sessionTimeout).aclProvider(zooKeeperAclProvider) .retryPolicy(new ExponentialBackoffRetry(baseSleepTime, maxRetries)).build(); zooKeeperClient.start(); // Create the parent znodes recursively; ignore if the parent already exists. 
try { zooKeeperClient.create().creatingParentsIfNeeded().withMode(CreateMode.PERSISTENT) .forPath(ZooKeeperHiveHelper.ZOOKEEPER_PATH_SEPARATOR + rootNamespace); LOG.info("Created the root name space: " + rootNamespace + " on ZooKeeper for HiveServer2"); } catch (KeeperException e) { if (e.code() != KeeperException.Code.NODEEXISTS) { LOG.error("Unable to create HiveServer2 namespace: " + rootNamespace + " on ZooKeeper", e); throw e; // Create a znode under the rootNamespace parent for this instance of the server // Znode name: serverUri=host:port;version=versionInfo;sequence=sequenceNumber try { String pathPrefix = ZooKeeperHiveHelper.ZOOKEEPER_PATH_SEPARATOR + rootNamespace + ZooKeeperHiveHelper.ZOOKEEPER_PATH_SEPARATOR + "serverUri=" + instanceURI + ";" + "version=" + HiveVersionInfo.getVersion() + ";" + "sequence="; String znodeData = ""; if (hiveConf.getBoolVar(HiveConf.ConfVars.HIVE_SERVER2_ZOOKEEPER_PUBLISH_CONFIGS)) { // HiveServer2 configs that this instance will publish to ZooKeeper, // so that the clients can read these and configure themselves properly. Map<String, String> confsToPublish = new HashMap<String, String>(); addConfsToPublish(hiveConf, confsToPublish); // Publish configs for this instance as the data on the node znodeData = Joiner.on(';').withKeyValueSeparator("=").join(confsToPublish); } else { znodeData = instanceURI; byte[] znodeDataUTF8 = znodeData.getBytes(Charset.forName("UTF-8")); znode = new PersistentEphemeralNode(zooKeeperClient, PersistentEphemeralNode.Mode.EPHEMERAL_SEQUENTIAL, pathPrefix, znodeDataUTF8); znode.start(); // We'll wait for 120s for node creation long znodeCreationTimeout = 120; if (!znode.waitForInitialCreate(znodeCreationTimeout, TimeUnit.SECONDS)) { throw new Exception("Max znode creation wait time: " + znodeCreationTimeout + "s exhausted"); setDeregisteredWithZooKeeper(false); znodePath = znode.getActualPath(); // Set a watch on the znode if (zooKeeperClient.checkExists().usingWatcher(new DeRegisterWatcher()).forPath(znodePath) == null) { // No node exists, throw exception throw new Exception("Unable to create znode for this HiveServer2 instance on ZooKeeper."); LOG.info("Created a znode on ZooKeeper for HiveServer2 uri: " + instanceURI); } catch (Exception e) { LOG.error("Unable to create a znode for this server instance", e); if (znode != null) { znode.close(); throw (e); * The watcher class which sets the de-register flag when the znode corresponding to this server * instance is deleted. Additionally, it shuts down the server if there are no more active client * sessions at the time of receiving a 'NodeDeleted' notification from ZooKeeper. private class DeRegisterWatcher implements Watcher { @Override public void process(WatchedEvent event) { if (event.getType().equals(Watcher.Event.EventType.NodeDeleted)) { if (znode != null) { try { znode.close(); LOG.warn("This HiveServer2 instance is now de-registered from ZooKeeper. " + "The server will be shut down after the last client sesssion completes."); } catch (IOException e) { LOG.error("Failed to close the persistent ephemeral znode", e); } finally { HiveServer2.this.setDeregisteredWithZooKeeper(true); // If there are no more active client sessions, stop the server if (cliService.getSessionManager().getOpenSessionCount() == 0) { LOG.warn("This instance of HiveServer2 has been removed from the list of server " + "instances available for dynamic service discovery. 
" + "The last client session has ended - will shutdown now."); HiveServer2.this.stop(); private void removeServerInstanceFromZooKeeper() throws Exception { setDeregisteredWithZooKeeper(true); if (znode != null) { znode.close(); zooKeeperClient.close(); LOG.info("Server instance removed from ZooKeeper."); private void setDeregisteredWithZooKeeper(boolean deregisteredWithZooKeeper) { this.deregisteredWithZooKeeper = deregisteredWithZooKeeper; public String getServerHost() throws Exception { if ((thriftCLIService == null) || (thriftCLIService.getServerIPAddress() == null)) { throw new Exception("Unable to get the server address; it hasn't been initialized yet."); return thriftCLIService.getServerIPAddress().getHostName(); * Add conf keys, values that HiveServer2 will publish to ZooKeeper. * @param hiveConf private void addConfsToPublish(HiveConf hiveConf, Map<String, String> confsToPublish) { // Hostname confsToPublish.put(ConfVars.HIVE_SERVER2_THRIFT_BIND_HOST.varname, hiveConf.getVar(ConfVars.HIVE_SERVER2_THRIFT_BIND_HOST)); // Transport mode confsToPublish.put(ConfVars.HIVE_SERVER2_TRANSPORT_MODE.varname, hiveConf.getVar(ConfVars.HIVE_SERVER2_TRANSPORT_MODE)); // Transport specific confs if (isHTTPTransportMode(hiveConf)) { confsToPublish.put(ConfVars.HIVE_SERVER2_THRIFT_HTTP_PORT.varname, Integer.toString(hiveConf.getIntVar(ConfVars.HIVE_SERVER2_THRIFT_HTTP_PORT))); confsToPublish.put(ConfVars.HIVE_SERVER2_THRIFT_HTTP_PATH.varname, hiveConf.getVar(ConfVars.HIVE_SERVER2_THRIFT_HTTP_PATH)); } else { confsToPublish.put(ConfVars.HIVE_SERVER2_THRIFT_PORT.varname, Integer.toString(hiveConf.getIntVar(ConfVars.HIVE_SERVER2_THRIFT_PORT))); confsToPublish.put(ConfVars.HIVE_SERVER2_THRIFT_SASL_QOP.varname, hiveConf.getVar(ConfVars.HIVE_SERVER2_THRIFT_SASL_QOP)); // Auth specific confs confsToPublish.put(ConfVars.HIVE_SERVER2_AUTHENTICATION.varname, hiveConf.getVar(ConfVars.HIVE_SERVER2_AUTHENTICATION)); if (isKerberosAuthMode(hiveConf)) { confsToPublish.put(ConfVars.HIVE_SERVER2_KERBEROS_PRINCIPAL.varname, hiveConf.getVar(ConfVars.HIVE_SERVER2_KERBEROS_PRINCIPAL)); // SSL conf confsToPublish.put(ConfVars.HIVE_SERVER2_USE_SSL.varname, Boolean.toString(hiveConf.getBoolVar(ConfVars.HIVE_SERVER2_USE_SSL))); public static boolean isKerberosAuthMode(HiveConf hiveConf) { String authMode = hiveConf.getVar(HiveConf.ConfVars.HIVE_SERVER2_AUTHENTICATION); if (authMode != null && (authMode.equalsIgnoreCase("KERBEROS"))) { return true; return false; }Spark 编译apache-maven-3.6.3scala-2.12.15全部编译./build/mvn -Pyarn -Phadoop-2.7 -Dhadoop.version=2.7.4 -Phive -Phive-thriftserver -Dscala-2.12 -DskipTests clean package指定模块编译./build/mvn -Pyarn -Phive -Phive-thriftserver -DskipTests -pl sql/hive-thriftserver -am clean package编译过程中遇到的问题[ERROR] Failed to execute goal net.alchim31.maven:scala-maven-plugin:4.3.0:compile (scala-compile-first) on project spark-hive-thriftserver_2.12: Execution scala-compile-first of goal net.alchim31.maven:scala-maven-plugin:4.3.0:compile failed: java.lang.AssertionError: assertion failed: Expected protocol to be 'file' or empty in URI jar:file:/D:/Repositories/Maven/org/apache/zookeeper/zookeeper/3.4.14/zookeeper-3.4.14.jar!/org/apache/zookeeper/ZooDefs$Ids.class -> [Help 1] 解决方式: 修改 scala-maven-plugin 版本:4.3.0 改为 4.5.6 <groupId>net.alchim31.maven</groupId> <artifactId>scala-maven-plugin</artifactId> <version>4.5.6</version>验证替换 spark-hive-thriftserver_2.12-3.1.2.jar将编译好的包 spark-hive-thriftserver_2.12-3.1.2.jar,替换到 Spark 安装路径 jars/ 下,启动 Spark修改 Spark conf 下 hive-site 配置添加:<property> 
<name>hive.server2.support.dynamic.service.discovery</name> <value>true</value> </property> <property> <name>hive.server2.zookeeper.namespace</name> <value>hiveserver2_zk</value> </property> <property> <name>hive.zookeeper.quorum</name> <value>192-168-44-129.indata.com:2181,indata-1192-168-44-128:2181,indata-192-168-44-130.indata.com:2181</value> </property> <property> <name>hive.zookeeper.client.port</name> <value>2181</value> </property>启动 Spark ThriftServer spark-submit --master yarn --deploy-mode client --num-executors 2 --executor-cores 1 --executor-memory 1G --class org.apache.spark.sql.hive.thriftserver.HiveThriftServer2 --name SparkThriftServer_1 spark-internal --hiveconf hive.server2.thrift.http.port=20004 连接验证[root@indata-192-168-44-128 spark-3.1.2]./bin/beeline beeline> !connect jdbc:hive2://192-168-44-129.indata.com:2181,indata-1192-168-44-128:2181,indata-192-168-44-130.indata.com:2181/default;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2_zk 22/02/11 08:57:31 INFO ConnectionStateManager: State change: CONNECTED 22/02/11 08:57:31 INFO ZooKeeper: Session: 0x27ed86adc0401d2 closed 22/02/11 08:57:31 INFO Utils: Resolved authority: indata-1192-168-44-128:20004 22/02/11 08:57:31 INFO ClientCnxn: EventThread shut down for session: 0x27ed86adc0401d2 22/02/11 08:57:31 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable Connected to: Spark SQL (version 3.1.2) Driver: Hive JDBC (version 2.3.7) Transaction isolation: TRANSACTION_REPEATABLE_READ验证过程中遇到的问题zookeeper解析不到IP22/02/11 09:08:46 WARN HiveConnection: Failed to connect to :20004 22/02/11 09:08:46 INFO CuratorFrameworkImpl: Starting 22/02/11 09:08:46 INFO ZooKeeper: Initiating client connection, connectString=192-168-44-129.indata.com:2181,indata-1192-168-44-128:2181,indata-192-168-44-130.indata.com:2181 sessionTimeout=60000 watcher=org.apache.curator.ConnectionState@3da30852 22/02/11 09:08:46 INFO ClientCnxn: Opening socket connection to server indata-192-168-44-130.indata.com/192.168.44.130:2181. 
Will not attempt to authenticate using SASL (unknown error) 22/02/11 09:08:46 INFO ClientCnxn: Socket connection established to indata-192-168-44-130.indata.com/192.168.44.130:2181, initiating session 22/02/11 09:08:46 INFO ClientCnxn: Session establishment complete on server indata-192-168-44-130.indata.com/192.168.44.130:2181, sessionid = 0x37ed86adc050208, negotiated timeout = 60000 22/02/11 09:08:46 INFO ConnectionStateManager: State change: CONNECTED 22/02/11 09:08:46 INFO ZooKeeper: Session: 0x37ed86adc050208 closed 22/02/11 09:08:46 INFO ClientCnxn: EventThread shut down for session: 0x37ed86adc050208 22/02/11 09:08:46 ERROR Utils: Unable to read HiveServer2 configs from ZooKeeper Error: Could not open client transport for any of the Server URI's in ZooKeeper: Host name may not be empty (state=08S01,code=0)解决方案通过 Hive JDBC Connector 代码连接zookeeper地址进行debug,发现 hive.server2.thrift.bind.host 值为空,同时,zookeeper命令查看节点值:[zk: indata-1192-168-44-128:2181(CONNECTED) 5] get /hiveserver2_zk/serverUri=indata-1192-168-44-128:20004;version=2.3.7;sequence=0000000023 hive.server2.authentication=KERBEROS;hive.server2.transport.mode=http;hive.server2.thrift.http.path=cliservice;hive.server2.thrift.http.port=20004;hive.server2.thrift.bind.host=;hive.server2.use.SSL=false;hive.server2.authentication.kerberos.principal=spark/_HOST@INDATA.COM hive.server2.thrift.bind.host值同样为空,定位到ThriftServer注册到zookeeper时产生问题zookeeper注册代码:private void addConfsToPublish(HiveConf hiveConf, Map<String, String> confsToPublish) { // Hostname confsToPublish.put(ConfVars.HIVE_SERVER2_THRIFT_BIND_HOST.varname, hiveConf.getVar(ConfVars.HIVE_SERVER2_THRIFT_BIND_HOST)); //hive-site没有 HIVE_SERVER2_THRIFT_BIND_HOST 配置项,因此为空。 //通过getServerHost方法动态获取该配置 hiveConf.set(HiveConf.ConfVars.HIVE_SERVER2_THRIFT_BIND_HOST.varname, getServerHost)代码完整代码已提交到:https://gitee.com/dongkelun/spark/tree/3.1.2-STS-HA/
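To double-check the registration without zkCli.sh, a small Curator-based sketch can list the znodes that the modified HiveThriftServer2 publishes. The connect string and the hiveserver2_zk namespace are the ones used in this article, and because zooKeeperAclProvider grants world read on the znodes, no ZooKeeper authentication is needed for this read-only check.

import org.apache.curator.framework.CuratorFrameworkFactory
import org.apache.curator.retry.ExponentialBackoffRetry

import scala.collection.JavaConverters._

object CheckStsZkRegistration {
  def main(args: Array[String]): Unit = {
    val client = CuratorFrameworkFactory.newClient(
      "indata-1192-168-44-128:2181", new ExponentialBackoffRetry(1000, 3))
    client.start()
    val namespace = "/hiveserver2_zk"
    client.getChildren.forPath(namespace).asScala.foreach { child =>
      // data is the semicolon-separated config written by addConfsToPublish, e.g.
      // hive.server2.thrift.bind.host=...;hive.server2.thrift.http.port=...
      val data = new String(client.getData.forPath(s"$namespace/$child"), "UTF-8")
      println(s"$child -> $data")
    }
    client.close()
  }
}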
前言记录一个Presto查询Hudi的异常的解决办法,本人目前对Presto还不是很熟,只是在使用过程中遇到了问题,记录一下异常解决方法以及过程异常2021-12-22T17:29:55.440+0800 ERROR SplitRunner-101-126 com.facebook.presto.execution.executor.TaskExecutor Error processing Split 20211222_092954_00047_8xk77.1.0.0-0 {path=hdfs://cluster1/warehouse/tablespace/managed/hive/test_dkl.db/test_hudi_1/069eddc2-f3bf-4efc-911b-3cb3aa523a8e-0_0-0-0_20211222172800.parquet, start=0, length=435116, fileSize=435116, fileModifiedTime=1640165281695, hosts=[], database=test_dkl, table=test_hudi_1, nodeSelectionStrategy=NO_PREFERENCE, partitionName=<UNPARTITIONED>, s3SelectPushdownEnabled=false, cacheQuotaRequirement=CacheQuotaRequirement{cacheQuotaScope=GLOBAL, quota=Optional.empty}} (start = 9.271133085962032E9, wall = 49 ms, cpu = 0 ms, wait = 0 ms, calls = 1) org.apache.parquet.io.ParquetDecodingException: Can not read value at 0 in block -1 in file hdfs://cluster1/warehouse/tablespace/managed/hive/test_dkl.db/test_hudi_1/069eddc2-f3bf-4efc-911b-3cb3aa523a8e-0_0-0-0_20211222172800.parquet at org.apache.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:251) at org.apache.parquet.hadoop.ParquetRecordReader.nextKeyValue(ParquetRecordReader.java:207) at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.<init>(ParquetRecordReaderWrapper.java:98) at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.<init>(ParquetRecordReaderWrapper.java:60) at org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat.getRecordReader(MapredParquetInputFormat.java:92) at org.apache.hudi.hadoop.HoodieParquetInputFormat.getRecordReader(HoodieParquetInputFormat.java:216) at com.facebook.presto.hive.HiveUtil.createRecordReader(HiveUtil.java:266) at com.facebook.presto.hive.GenericHiveRecordCursorProvider.lambda$createRecordCursor$0(GenericHiveRecordCursorProvider.java:74) at com.facebook.presto.hive.authentication.UserGroupInformationUtils.lambda$executeActionInDoAs$0(UserGroupInformationUtils.java:29) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:360) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1726) at com.facebook.presto.hive.authentication.UserGroupInformationUtils.executeActionInDoAs(UserGroupInformationUtils.java:27) at com.facebook.presto.hive.authentication.DirectHdfsAuthentication.doAs(DirectHdfsAuthentication.java:38) at com.facebook.presto.hive.HdfsEnvironment.doAs(HdfsEnvironment.java:81) at com.facebook.presto.hive.GenericHiveRecordCursorProvider.createRecordCursor(GenericHiveRecordCursorProvider.java:73) at com.facebook.presto.hive.HivePageSourceProvider.createHivePageSource(HivePageSourceProvider.java:478) at com.facebook.presto.hive.HivePageSourceProvider.createPageSource(HivePageSourceProvider.java:184) at com.facebook.presto.spi.connector.classloader.ClassLoaderSafeConnectorPageSourceProvider.createPageSource(ClassLoaderSafeConnectorPageSourceProvider.java:63) at com.facebook.presto.split.PageSourceManager.createPageSource(PageSourceManager.java:80) at com.facebook.presto.operator.TableScanOperator.getOutput(TableScanOperator.java:249) at com.facebook.presto.operator.Driver.processInternal(Driver.java:424) at com.facebook.presto.operator.Driver.lambda$processFor$9(Driver.java:307) at com.facebook.presto.operator.Driver.tryWithLock(Driver.java:728) at com.facebook.presto.operator.Driver.processFor(Driver.java:300) at 
com.facebook.presto.execution.SqlTaskExecution$DriverSplitRunner.processFor(SqlTaskExecution.java:1077) at com.facebook.presto.execution.executor.PrioritizedSplitRunner.process(PrioritizedSplitRunner.java:162) at com.facebook.presto.execution.executor.TaskExecutor$TaskRunner.run(TaskExecutor.java:599) at com.facebook.presto.$gen.Presto_0_265_1_ad1fce6____20211222_010139_1.run(Unknown Source) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Caused by: java.lang.UnsupportedOperationException: readDirect unsupported in RemoteBlockReader at org.apache.hadoop.hdfs.RemoteBlockReader.read(RemoteBlockReader.java:492) at org.apache.hadoop.hdfs.DFSInputStream$ByteBufferStrategy.doRead(DFSInputStream.java:789) at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:823) at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:883) at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:938) at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:143) at org.apache.parquet.hadoop.util.H2SeekableInputStream$H2Reader.read(H2SeekableInputStream.java:81) at org.apache.parquet.hadoop.util.H2SeekableInputStream.readFully(H2SeekableInputStream.java:90) at org.apache.parquet.hadoop.util.H2SeekableInputStream.readFully(H2SeekableInputStream.java:75) at org.apache.parquet.hadoop.ParquetFileReader$ConsecutiveChunkList.readAll(ParquetFileReader.java:1174) at org.apache.parquet.hadoop.ParquetFileReader.readNextRowGroup(ParquetFileReader.java:805) at org.apache.parquet.hadoop.InternalParquetRecordReader.checkRead(InternalParquetRecordReader.java:127) at org.apache.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:222)异常原因原因是因为hdfs block块没有一个和presto在一个节点上的,要复现这个问题,可以把hdfs副本数改为1,这样可以比较快的复现,关于如何修改hdfs副本数:修改hdfs-site.xml:<property> <name>dfs.replication</name> <value>1</value> </property>注意:只修改集群的配置,只是修改了集群的默认值,还要注意客户端里的hdfs-site.xml也要修改,如果有的话关于如何查看hdfs block块分布在哪几个节点:hdfs fsck hdfs://cluster1/warehouse/tablespace/managed/hive/test_dkl.db/test_hudi_1/069eddc2-f3bf-4efc-911b-3cb3aa523a8e-0_0-0-0_20211222172800.parquet -files -blocks -locations Connecting to namenode via http://indata-192-168-44-163.indata.com:50070/fsck?ugi=hive&files=1&blocks=1&locations=1&path=%2Fwarehouse%2Ftablespace%2Fmanaged%2Fhive%2Ftest_dkl.db%2Ftest_hudi_1%2F069eddc2-f3bf-4efc-911b-3cb3aa523a8e-0_0-0-0_20211222172800.parquet FSCK started by hive (auth:KERBEROS_SSL) from /192.168.44.162 for path /warehouse/tablespace/managed/hive/test_dkl.db/test_hudi_1/069eddc2-f3bf-4efc-911b-3cb3aa523a8e-0_0-0-0_20211222172800.parquet at Thu Dec 23 11:01:40 CST 2021 /warehouse/tablespace/managed/hive/test_dkl.db/test_hudi_1/069eddc2-f3bf-4efc-911b-3cb3aa523a8e-0_0-0-0_20211222172800.parquet 435116 bytes, replicated: replication=1, 1 block(s): OK 0. BP-551234808-192.168.44.164-1628073819744:blk_1073859319_119251 len=435116 Live_repl=1 [DatanodeInfoWithStorage[192.168.44.163:1019,DS-50d15bb1-a076-4e5f-a75a-f729f2bb8db6,DISK]] Status: HEALTHY Number of data-nodes: 3 Number of racks: 1 Total dirs: 0 Total symlinks: 0 Replicated Blocks: Total size: 435116 B Total files: 1 Total blocks (validated): 1 (avg. 
block size 435116 B) Minimally replicated blocks: 1 (100.0 %) Over-replicated blocks: 0 (0.0 %) Under-replicated blocks: 0 (0.0 %) Mis-replicated blocks: 0 (0.0 %) Default replication factor: 1 Average block replication: 1.0 Missing blocks: 0 Corrupt blocks: 0 Missing replicas: 0 (0.0 %) Erasure Coded Block Groups: Total size: 0 B Total files: 0 Total block groups (validated): 0 Minimally erasure-coded block groups: 0 Over-erasure-coded block groups: 0 Under-erasure-coded block groups: 0 Unsatisfactory placement block groups: 0 Average block group size: 0.0 Missing block groups: 0 Corrupt block groups: 0 Missing internal blocks: 0 FSCK ended at Thu Dec 23 11:01:40 CST 2021 in 1 milliseconds The filesystem under path '/warehouse/tablespace/managed/hive/test_dkl.db/test_hudi_1/069eddc2-f3bf-4efc-911b-3cb3aa523a8e-0_0-0-0_20211222172800.parquet' is HEALTHY可以看到该文件只有一个副本在163,而presto实际在162异常解决办法修改presto catalog下面的配置文件hdfs-site.xml,添加<property> <name>dfs.client.use.legacy.blockreader</name> <value>false</value> </property>重启presto服务,即可解决问题异常解决定位过程网上没有搜到该异常的解决办法,只有一个异常与此类似,不过是hive/spark sql之间的异常:https://dongkelun.com/2018/05/20/hiveQueryException/。怀疑是jar包版本问题,通过看源码解决,部分源码(可根据异常信息定位源码位置)RemoteBlockReader.read@Override public int read(ByteBuffer buf) throws IOException { throw new UnsupportedOperationException("readDirect unsupported in RemoteBlockReader"); }发现该方法直接抛异常,并且RemoteBlockReader已经弃用了,在同一个包下面有一个RemoteBlockReader2,并且它的read方法可以使用:@Override public int read(ByteBuffer buf) throws IOException { if (curDataSlice == null || curDataSlice.remaining() == 0 && bytesNeededToFinish > 0) { TraceScope scope = Trace.startSpan( "RemoteBlockReader2#readNextPacket(" + blockId + ")", Sampler.NEVER); try { readNextPacket(); } finally { scope.close(); if (curDataSlice.remaining() == 0) { // we're at EOF now return -1; int nRead = Math.min(curDataSlice.remaining(), buf.remaining()); ByteBuffer writeSlice = curDataSlice.duplicate(); writeSlice.limit(writeSlice.position() + nRead); buf.put(writeSlice); curDataSlice.position(writeSlice.position()); return nRead; }那么能不能通过分析源码,让它走RemoteBlockReader2的逻辑呢,我们看一下异常的上一个调用DFSInputStream$ByteBufferStrategy.doRead@Override public int doRead(BlockReader blockReader, int off, int len) throws ChecksumException, IOException { int oldpos = buf.position(); int oldlimit = buf.limit(); boolean success = false; try { int ret = blockReader.read(buf); success = true; updateReadStatistics(readStatistics, ret, blockReader); return ret; } finally { if (!success) { // Reset to original state so that retries work correctly. buf.position(oldpos); buf.limit(oldlimit); }查看blockReader的初始化:blockReader = new BlockReaderFactory(dfsClient.getConf()). setInetSocketAddress(targetAddr). setRemotePeerFactory(dfsClient). setDatanodeInfo(chosenNode). setStorageType(storageType). setFileName(src). setBlock(blk). setBlockToken(accessToken). setStartOffset(offsetIntoBlock). setVerifyChecksum(verifyChecksum). setClientName(dfsClient.clientName). setLength(blk.getNumBytes() - offsetIntoBlock). setCachingStrategy(curCachingStrategy). setAllowShortCircuitLocalReads(!shortCircuitForbidden). setClientCacheContext(dfsClient.getClientContext()). setUserGroupInformation(dfsClient.ugi). setConfiguration(dfsClient.getConfiguration()). 
build();再看BlockReaderFactory.buildpublic BlockReader build() throws IOException { BlockReader reader = null; Preconditions.checkNotNull(configuration); if (conf.shortCircuitLocalReads && allowShortCircuitLocalReads) { if (clientContext.getUseLegacyBlockReaderLocal()) { reader = getLegacyBlockReaderLocal(); if (reader != null) { if (LOG.isTraceEnabled()) { LOG.trace(this + ": returning new legacy block reader local."); return reader; } else { reader = getBlockReaderLocal(); if (reader != null) { if (LOG.isTraceEnabled()) { LOG.trace(this + ": returning new block reader local."); return reader; if (conf.domainSocketDataTraffic) { reader = getRemoteBlockReaderFromDomain(); if (reader != null) { if (LOG.isTraceEnabled()) { LOG.trace(this + ": returning new remote block reader using " + "UNIX domain socket on " + pathInfo.getPath()); return reader; Preconditions.checkState(!DFSInputStream.tcpReadsDisabledForTesting, "TCP reads were disabled for testing, but we failed to " + "do a non-TCP read."); return getRemoteBlockReaderFromTcp(); }我们发现不管是getRemoteBlockReaderFromDomain还是getRemoteBlockReaderFromTcp,都是调用getRemoteBlockReader:if (conf.useLegacyBlockReader) { return RemoteBlockReader.newBlockReader(fileName, block, token, startOffset, length, conf.ioBufferSize, verifyChecksum, clientName, peer, datanode, clientContext.getPeerCache(), cachingStrategy); } else { return RemoteBlockReader2.newBlockReader( fileName, block, token, startOffset, length, verifyChecksum, clientName, peer, datanode, clientContext.getPeerCache(), cachingStrategy); }可以看到,当conf.useLegacyBlockReader为false时,就会走到RemoteBlockReader2,那么再看一下conf.useLegacyBlockReaderuseLegacyBlockReader = conf.getBoolean( DFSConfigKeys.DFS_CLIENT_USE_LEGACY_BLOCKREADER, DFSConfigKeys.DFS_CLIENT_USE_LEGACY_BLOCKREADER_DEFAULT); DFS_CLIENT_USE_LEGACY_BLOCKREADER_DEFAULT为false,那么DFS_CLIENT_USE_LEGACY_BLOCKREADER默认的应该为true,我们不用深究DFS_CLIENT_USE_LEGACY_BLOCKREADER为啥为true,我们看看能不能通过修改配置将他的值设置为falsepublic static final String DFS_CLIENT_USE_LEGACY_BLOCKREADER = "dfs.client.use.legacy.blockreader";我们通过在hdfs-site.xml添加配置dfs.client.use.legacy.blockreader = false,重启presto,发现解决了问题,大功告成!
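The same switch can be reproduced outside Presto with a small HDFS client sketch: with dfs.client.use.legacy.blockreader set to false the DFS client builds a RemoteBlockReader2, whose read(ByteBuffer) is implemented, so a direct ByteBuffer read of a remote block succeeds instead of throwing. Kerberos login and classpath setup are omitted here, and the parquet path is the example file used in this article.

import java.nio.ByteBuffer

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

object LegacyBlockReaderCheck {
  def main(args: Array[String]): Unit = {
    val conf = new Configuration()
    // Same switch that was added to Presto's catalog hdfs-site.xml
    conf.setBoolean("dfs.client.use.legacy.blockreader", false)
    val path = new Path("hdfs://cluster1/warehouse/tablespace/managed/hive/test_dkl.db/test_hudi_1/" +
      "069eddc2-f3bf-4efc-911b-3cb3aa523a8e-0_0-0-0_20211222172800.parquet")
    val fs: FileSystem = path.getFileSystem(conf)
    val in = fs.open(path)
    val buf = ByteBuffer.allocate(4 * 1024)
    // The direct ByteBuffer read goes through BlockReader.read(ByteBuffer):
    // the legacy RemoteBlockReader throws UnsupportedOperationException here, RemoteBlockReader2 does not.
    val n = in.read(buf)
    println(s"read $n bytes through the ByteBuffer code path")
    in.close()
    fs.close()
  }
}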
前言本文总结如何利用Submarin集成Spark-Ranger,通过ranger控制Spark SQL的权限前提已经安装了Spark、Hive、kerberos、Ranger,并且Hive已经集成了Ranger,本文环境基于Ambarisubmarine-spark-security 插件打包官网文档https://submarine.apache.org/docs/userDocs/submarine-security/spark-security/README (0.6.0版本)git clone https://github.com/apache/submarine 当前master分支(0.7.0)已经没有submarine-spark-security模块了,需要切换到tag:release-0.6.0-RC0然后利用mvn命令打包mvn clean package -Dmaven.javadoc.skip=true -DskipTests -pl :submarine-spark-security -Pspark-2.4 -Pranger-1.2打的submarine-spark-security-0.6.0.jar在目录submarine\submarine-security\spark-security\target然后将submarine-spark-security-0.6.0.jar上传到$SPARK_HOME/jarsRanger Admin界面添加了Spark Server策略地址:http://yourIp:16080/在原来的Hive模块下,添加一个sparkServer策略,名称自定义其中下面三个key,必须填spark,不填的话可能存在问题,至于是否都必填和具体什么含义,目前还没有研究 tag.download.auth.users policy.download.auth.users policy.grantrevoke.auth.users jdbc.url和 hive策略里的一样就行,然后测试一下连接,建完后如下图所示为了测试效果,我们先删掉其他策略,只保留all-database,table,column,这是默认的是有spark用户的配置Spark根据官网文档,在$SPARK_HOME/conf创建下面两个配置文件ranger-spark-security.xml<configuration> <property> <name>ranger.plugin.spark.policy.rest.url</name> <value>http://yourIp:16080</value> </property> <property> <name>ranger.plugin.spark.service.name</name> <value>sparkServer</value> </property> <property> <name>ranger.plugin.spark.policy.cache.dir</name> <value>/etc/ranger/sparkServer/policycache</value> </property> <property> <name>ranger.plugin.spark.policy.pollIntervalMs</name> <value>30000</value> </property> <property> <name>ranger.plugin.spark.policy.source.impl</name> <value>org.apache.ranger.admin.client.RangerAdminRESTClient</value> </property> </configuration>ranger-spark-audit.xml<configuration> <property> <name>xasecure.audit.is.enabled</name> <value>true</value> </property> <property> <name>xasecure.audit.destination.db</name> <value>false</value> </property> <property> <name>xasecure.audit.destination.db.jdbc.driver</name> <value>com.mysql.jdbc.Driver</value> </property> <property> <name>xasecure.audit.destination.db.jdbc.url</name> <value>jdbc:mysql://yourIp:3306/ranger</value> </property> <property> <name>xasecure.audit.destination.db.password</name> <value>ranger</value> </property> <property> <name>xasecure.audit.destination.db.user</name> <value>ranger</value> </property> <property> <name>xasecure.audit.jaas.Client.option.keyTab</name> <value>/etc/security/keytabs/hive.service.keytab</value> </property> <property> <name>xasecure.audit.jaas.Client.option.principal</name> <value>hive/_HOST@INDATA.COM</value> </property> </configuration>至于具体的ip、端口、用户名、密码信息可以在之前已经配置好的hive-ranger插件配置文件里查看,也可以在ambari界面搜索启动Spark-SQL SparkServerSpark-SQLspark-sql --master yarn --deploy-mode client --conf 'spark.sql.extensions=org.apache.submarine.spark.security.api.RangerSparkSQLExtension' --principal spark/youIp@INDATA.COM --keytab /etc/security/keytabs/spark.service.keytabSparkServerspark-submit --master yarn --deploy-mode client --executor-memory 2G --num-executors 3 --executor-cores 2 --driver-memory 4G --driver-cores 2 --class org.apache.spark.sql.hive.thriftserver.HiveThriftServer2 --name Thrift-11111 --conf 'spark.sql.extensions=org.apache.spark.sql.hudi.HoodieSparkSessionExtension' --principal spark/yourIp --keytab /etc/security/keytabs/spark.service.keytab --hiveconf hive.server2.thrift.http.port=11111注意需要添加submarine的extensions,并且以spark用户认证kerberos,因为ranger是通过kerberos的认证用户进行权限控制的这里以Spark-SQL为例进行演示,启动spark-sql后,执行show tables,这是会根据配置文件,读取ranger里的sparkServer策略,如果读取并解析成功,则会在配置的缓存目录下生成对应的json文件,如:ll /etc/ranger/sparkServer/policycache/ total 28 -rw-r--r-- 1 root 
root 27223 Dec 2 09:52 sparkSql_sparkServer.json 这个时候可以看到,spark是有权限读取表信息的然后在ranger里将sparkServer策略的spark用户删掉,过个几十秒,我们可以看到,spark-sql已经刷新了ranger的策略,然后再show tables,这时已经没有权限读取表信息了到这里,我们已经验证了可以通ranger控制spark sql的权限了,后面可以自己把spark 用户再加回去,验证权限是否又有了,并且可以修改策略,比如可以看哪些表,select update 等操作权限当然正式使用应该起Spark Server问题其他kerberos用户我们通过其他kerberos用户认证时,虽然可以控制权限,但是会抛异常,Spark Server虽然没有异常,但是并不能控制其他用户的权限扩展多个spark.sql.extensions因为我们已经通过spark.sql.extensions=org.apache.submarine.spark.security.api.RangerSparkSQLExtension扩展了submarine,如果同时再扩展hudi,这样使用spark-sql建hudi表时有冲突更新上面提到的:当前master分支(0.7.0)已经没有submarine-spark-security模块了,需要切换到tag:release-0.6.0-RC0,submarine-spark-security的作者已经将该模块内置到Kyuubi中了(这俩项目是同一个作者),可以下载Kyuubi最新版代码查看,找到相应的模块验证使用对于扩展多个extensionsSpark3可以用逗号分隔来实现多个扩展Spark2只能在代码中用.withExtensions来实现多个扩展
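As a rough sketch of the Spark 2 workaround mentioned above, the two extensions can be registered by calling withExtensions twice on the session builder. This assumes both classes are ordinary SparkSessionExtensions => Unit functions (which is what spark.sql.extensions requires); whether the Ranger and Hudi rules compose cleanly still needs to be verified in your environment.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.hudi.HoodieSparkSessionExtension
import org.apache.submarine.spark.security.api.RangerSparkSQLExtension

object MultiExtensionSession {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("spark2-multi-extensions")
      .enableHiveSupport()
      // Spark 2 accepts only one class in spark.sql.extensions, but withExtensions can be chained
      .withExtensions(new RangerSparkSQLExtension())
      .withExtensions(new HoodieSparkSessionExtension())
      .getOrCreate()

    spark.sql("show tables").show() // passes through the Ranger-checked rules registered above
    spark.stop()
  }
}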
前言在上一篇博客HUDI preCombinedField 总结中已经对preCombinedField进行总结过一次了,由于当时对源码理解还不够深入,导致分析的不全面,现在对源码有了进一步的理解,所以再进行总结补充一下。历史比较值上面总结中:DF:无论新记录的ts值是否大于历史记录的ts值,都会覆盖写,直接更新。SQL:写数据时,ts值大于等于历史ts值,才会更新,小于历史值则不更新。这里解释一下原因,首先Spark SQL PAYLOAD_CLASS_NAME 默认值为ExpressionPayload,而ExpressionPayload继承了DefaultHoodieRecordPayloadclass ExpressionPayload(record: GenericRecord, orderingVal: Comparable[_]) extends DefaultHoodieRecordPayload(record, orderingVal) { DefaultHoodieRecordPayload 里的needUpdatingPersistedRecord实现了历史值进行比较,具体实现,后面会进行分析而 Spark DF在hudi0.9.0版本 PAYLOAD_CLASS_NAME的默认值为OverwriteWithLatestAvroPayload,它是DefaultHoodieRecordPayload的父类并没有实现和历史值进行比较历史值比较实现对源码进行简单的分析,首先说明历史比较值的配置项为: HoodiePayloadProps.PAYLOAD_ORDERING_FIELD_PROP_KEY = "hoodie.payload.ordering.field"而它的默认值为ts,所以ordering_field和preCombineField并不一样,但是因为默认值一样而且实现都在PAYLOAD_CLASS里,所以给人的感觉是一样,故放在一起进行总结HoodieMergeHandlehudi 在 upsert进行小文件合并时,会走到HoodieMergeHandled的write方法:/** * Go through an old record. Here if we detect a newer version shows up, we write the new one to the file. public void write(GenericRecord oldRecord) { // 历史key值 String key = KeyGenUtils.getRecordKeyFromGenericRecord(oldRecord, keyGeneratorOpt); boolean copyOldRecord = true; if (keyToNewRecords.containsKey(key)) { //如果新记录的key值包含旧值,则进行合并逻辑 // If we have duplicate records that we are updating, then the hoodie record will be deflated after // writing the first record. So make a copy of the record to be merged HoodieRecord<T> hoodieRecord = new HoodieRecord<>(keyToNewRecords.get(key)); try { // 这里调用了 PAYLOAD_CLASS 的 combineAndGetUpdateValue方法 Option<IndexedRecord> combinedAvroRecord = hoodieRecord.getData().combineAndGetUpdateValue(oldRecord, useWriterSchema ? tableSchemaWithMetaFields : tableSchema, config.getPayloadConfig().getProps()); if (combinedAvroRecord.isPresent() && combinedAvroRecord.get().equals(IGNORE_RECORD)) { // If it is an IGNORE_RECORD, just copy the old record, and do not update the new record. copyOldRecord = true; } else if (writeUpdateRecord(hoodieRecord, oldRecord, combinedAvroRecord)) { * ONLY WHEN 1) we have an update for this key AND 2) We are able to successfully * write the the combined new * value * We no longer need to copy the old record over. copyOldRecord = false; writtenRecordKeys.add(key); } catch (Exception e) { throw new HoodieUpsertException("Failed to combine/merge new record with old value in storage, for new record {" + keyToNewRecords.get(key) + "}, old value {" + oldRecord + "}", e); if (copyOldRecord) { // this should work as it is, since this is an existing record try { fileWriter.writeAvro(key, oldRecord); } catch (IOException | RuntimeException e) { String errMsg = String.format("Failed to merge old record into new file for key %s from old file %s to new file %s with writerSchema %s", key, getOldFilePath(), newFilePath, writeSchemaWithMetaFields.toString(true)); LOG.debug("Old record is " + oldRecord); throw new HoodieUpsertException(errMsg, e); recordsWritten++; }combineAndGetUpdateValue方法看一下 DefaultHoodieRecordPayload的combineAndGetUpdateValue:@Override * currentValue 当前值,即历史记录值 * Option<IndexedRecord> combinedAvroRecord = * hoodieRecord.getData().combineAndGetUpdateValue(oldRecord, * useWriterSchema ? 
tableSchemaWithMetaFields : tableSchema, * config.getPayloadConfig().getProps()); public Option<IndexedRecord> combineAndGetUpdateValue(IndexedRecord currentValue, Schema schema, Properties properties) throws IOException { // recordBytes 为新数据的字节值 if (recordBytes.length == 0) { return Option.empty(); // 将recordBytes转化为Avro格式的GenericRecord GenericRecord incomingRecord = bytesToAvro(recordBytes, schema); // Null check is needed here to support schema evolution. The record in storage may be from old schema where // the new ordering column might not be present and hence returns null. // 如果不需要历史值,则返回历史记录值 if (!needUpdatingPersistedRecord(currentValue, incomingRecord, properties)) { return Option.of(currentValue); * We reached a point where the value is disk is older than the incoming record. eventTime = updateEventTime(incomingRecord, properties); * Now check if the incoming record is a delete record. return isDeleteRecord(incomingRecord) ? Option.empty() : Option.of(incomingRecord); }关于recordBytes的赋值,在父类BaseAvroPayload,我们写数据时需要先构造GenericRecord record,然后将record作为参数传给PayLoad,最后构造构造List>,调用HoodieJavaWriteClient.upsert(List> records,String instantTime)public BaseAvroPayload(GenericRecord record, Comparable orderingVal) { this.recordBytes = record != null ? HoodieAvroUtils.avroToBytes(record) : new byte[0]; this.orderingVal = orderingVal; if (orderingVal == null) { throw new HoodieException("Ordering value is null for record: " + record); }needUpdatingPersistedRecord和历史值的比较就在这里:protected boolean needUpdatingPersistedRecord(IndexedRecord currentValue, IndexedRecord incomingRecord, Properties properties) { * Combining strategy here returns currentValue on disk if incoming record is older. * The incoming record can be either a delete (sent as an upsert with _hoodie_is_deleted set to true) * or an insert/update record. In any case, if it is older than the record in disk, the currentValue * in disk is returned (to be rewritten with new commit time). * NOTE: Deletes sent via EmptyHoodieRecordPayload and/or Delete operation type do not hit this code path * and need to be dealt with separately. 
// 历史ts值 Object persistedOrderingVal = getNestedFieldVal((GenericRecord) currentValue, properties.getProperty(HoodiePayloadProps.PAYLOAD_ORDERING_FIELD_PROP_KEY), true); // 新数据的ts值 Comparable incomingOrderingVal = (Comparable) getNestedFieldVal((GenericRecord) incomingRecord, properties.getProperty(HoodiePayloadProps.PAYLOAD_ORDERING_FIELD_PROP_KEY), false); // 如果历史值为null或者历史值小于新值,则返回true,代表要覆盖历史值更新,反之不更新 return persistedOrderingVal == null || ((Comparable) persistedOrderingVal).compareTo(incomingOrderingVal) <= 0; }PAYLOAD_ORDERING_FIELD_PROP_KEY默认值可以看到在上面HoodieMergeHandle中传的properties参数为config.getPayloadConfig().getProps()getPayloadConfig返回HoodiePayloadConfig,而在HoodiePayloadConfig定义了PAYLOAD_ORDERING_FIELD_PROP_KEY的默认值为tspublic HoodiePayloadConfig getPayloadConfig() { return hoodiePayloadConfig; public class HoodiePayloadConfig extends HoodieConfig { public static final ConfigProperty<String> ORDERING_FIELD = ConfigProperty .key(PAYLOAD_ORDERING_FIELD_PROP_KEY) .defaultValue("ts") .withDocumentation("Table column/field name to order records that have the same key, before " + "merging and writing to storage.");预合并实现首先说明,预合并实现方法为类 OverwriteWithLatestAvroPayload.preCombinepublic class OverwriteWithLatestAvroPayload extends BaseAvroPayload implements HoodieRecordPayload<OverwriteWithLatestAvroPayload> { public OverwriteWithLatestAvroPayload(GenericRecord record, Comparable orderingVal) { super(record, orderingVal); public OverwriteWithLatestAvroPayload(Option<GenericRecord> record) { this(record.isPresent() ? record.get() : null, 0); // natural order @Override public OverwriteWithLatestAvroPayload preCombine(OverwriteWithLatestAvroPayload oldValue) { if (oldValue.recordBytes.length == 0) { // use natural order for delete record return this; // 如果旧值的orderingVal大于orderingVal,发返回旧值,否则返回当前新值,即返回较大的record if (oldValue.orderingVal.compareTo(orderingVal) > 0) { // pick the payload with greatest ordering value return oldValue; } else { return this; }所以无论是Spark SQL 还是 Spark DF都默认实现了预合并ExpressionPayload、DefaultHoodieRecordPayload都继承了(extends)OverwriteWithLatestAvroPayload,所以用这三个payload都可以实现预合并,关键看怎么构造paylod构造Paylod根据上面的代码,我们可以发现OverwriteWithLatestAvroPayload有两个构造函数,一个参数和两个参数,其中一个参数的并不能实现预合并,因为预合并方法中需要orderingVal比较,所以要用两个参数的构造函数构造OverwriteWithLatestAvroPayload,其中orderingVal 为 preCombineField对应的值,record为一行记录值。而无论是Spark SQL还是Spark DF,最终都会调用HoodieSparkSqlWriter.write,构造paylod就是在这个write方法里实现的。// Convert to RDD[HoodieRecord] // 首先将df转为RDD[HoodieRecord] val genericRecords: RDD[GenericRecord] = HoodieSparkUtils.createRdd(df, structName, nameSpace, reconcileSchema, org.apache.hudi.common.util.Option.of(schema)) // 判断是否需要预合并 val shouldCombine = parameters(INSERT_DROP_DUPS.key()).toBoolean || operation.equals(WriteOperationType.UPSERT) || parameters.getOrElse(HoodieWriteConfig.COMBINE_BEFORE_INSERT.key(), HoodieWriteConfig.COMBINE_BEFORE_INSERT.defaultValue()).toBoolean val hoodieAllIncomingRecords = genericRecords.map(gr => { val processedRecord = getProcessedRecord(partitionColumns, gr, dropPartitionColumns) val hoodieRecord = if (shouldCombine) { // 如果需要预合并 // 从record中取出PRECOMBINE_FIELD对应的值,如果值不存在,则抛出异常,因为预合并的字段不允许存在空值 val orderingVal = HoodieAvroUtils.getNestedFieldVal(gr, hoodieConfig.getString(PRECOMBINE_FIELD), false) .asInstanceOf[Comparable[_]] 然后通过反射的方法,构造PAYLOAD_CLASS_NAME对应的paylod DataSourceUtils.createHoodieRecord(processedRecord, orderingVal, keyGenerator.getKey(gr), hoodieConfig.getString(PAYLOAD_CLASS_NAME)) } else { // 如果不需要预合并,也通过反射构造paylod,但是不需要orderingVal参数 
DataSourceUtils.createHoodieRecord(processedRecord, keyGenerator.getKey(gr), hoodieConfig.getString(PAYLOAD_CLASS_NAME)) hoodieRecord }).toJavaRDD()通过上面源码的注释中可以看到,如果需要进行预合并的话,则首先取出record中对应的PRECOMBINE_FIELD值orderingVal,然后构造payload,即new OverwriteWithLatestAvroPayload(record, orderingVal) 这里就构造好了payload,那么最终是在哪里实现的预合并呢?调用preCombine这里以cow表的upsert为例,即HoodieJavaCopyOnWriteTable.upsert// HoodieJavaCopyOnWriteTable @Override public HoodieWriteMetadata<List<WriteStatus>> upsert(HoodieEngineContext context, String instantTime, List<HoodieRecord<T>> records) { return new JavaUpsertCommitActionExecutor<>(context, config, this, instantTime, records).execute(); // JavaUpsertCommitActionExecutor @Override public HoodieWriteMetadata<List<WriteStatus>> execute() { return JavaWriteHelper.newInstance().write(instantTime, inputRecords, context, table, config.shouldCombineBeforeUpsert(), config.getUpsertShuffleParallelism(), this, true); // AbstractWriteHelper public HoodieWriteMetadata<O> write(String instantTime, I inputRecords, HoodieEngineContext context, HoodieTable<T, I, K, O> table, boolean shouldCombine, int shuffleParallelism, BaseCommitActionExecutor<T, I, K, O, R> executor, boolean performTagging) { try { // De-dupe/merge if needed I dedupedRecords = combineOnCondition(shouldCombine, inputRecords, shuffleParallelism, table); Instant lookupBegin = Instant.now(); I taggedRecords = dedupedRecords; if (performTagging) { // perform index loop up to get existing location of records taggedRecords = tag(dedupedRecords, context, table); Duration indexLookupDuration = Duration.between(lookupBegin, Instant.now()); HoodieWriteMetadata<O> result = executor.execute(taggedRecords); result.setIndexLookupDuration(indexLookupDuration); return result; } catch (Throwable e) { if (e instanceof HoodieUpsertException) { throw (HoodieUpsertException) e; throw new HoodieUpsertException("Failed to upsert for commit time " + instantTime, e); public I combineOnCondition( boolean condition, I records, int parallelism, HoodieTable<T, I, K, O> table) { return condition ? deduplicateRecords(records, table, parallelism) : records; * Deduplicate Hoodie records, using the given deduplication function. * @param records hoodieRecords to deduplicate * @param parallelism parallelism or partitions to be used while reducing/deduplicating * @return Collection of HoodieRecord already be deduplicated public I deduplicateRecords( I records, HoodieTable<T, I, K, O> table, int parallelism) { return deduplicateRecords(records, table.getIndex(), parallelism); // SparkWriteHelper @Override public JavaRDD<HoodieRecord<T>> deduplicateRecords( JavaRDD<HoodieRecord<T>> records, HoodieIndex<T, ?, ?, ?> index, int parallelism) { boolean isIndexingGlobal = index.isGlobal(); return records.mapToPair(record -> { HoodieKey hoodieKey = record.getKey(); // If index used is global, then records are expected to differ in their partitionPath // 获取record的key值 Object key = isIndexingGlobal ? hoodieKey.getRecordKey() : hoodieKey; // 返回 (key,record) return new Tuple2<>(key, record); }).reduceByKey((rec1, rec2) -> { @SuppressWarnings("unchecked") // key值相同的record 通过 preCombine函数,返回 preCombineField值较大那个 T reducedData = (T) rec2.getData().preCombine(rec1.getData()); HoodieKey reducedKey = rec1.getData().equals(reducedData) ? 
rec1.getKey() : rec2.getKey(); return new HoodieRecord<T>(reducedKey, reducedData); }, parallelism).map(Tuple2::_2); }这样就实现了预合并的功能修改历史比较值最后说一下历史比较值是怎么修改的,其实Spark SQL 和 Spark DF不用特意修改它的值,因为默认和preCombineField值是同步修改的,看一下程序怎么同步修改的。无论是是SQL还是DF最终都会调用HoodieSparkSqlWriter.write// Create a HoodieWriteClient & issue the delete. val client = hoodieWriteClient.getOrElse(DataSourceUtils.createHoodieClient(jsc, null, path, tblName, mapAsJavaMap(parameters - HoodieWriteConfig.AUTO_COMMIT_ENABLE.key))) .asInstanceOf[SparkRDDWriteClient[HoodieRecordPayload[Nothing]]] public static SparkRDDWriteClient createHoodieClient(JavaSparkContext jssc, String schemaStr, String basePath, String tblName, Map<String, String> parameters) { return new SparkRDDWriteClient<>(new HoodieSparkEngineContext(jssc), createHoodieConfig(schemaStr, basePath, tblName, parameters)); public static HoodieWriteConfig createHoodieConfig(String schemaStr, String basePath, String tblName, Map<String, String> parameters) { boolean asyncCompact = Boolean.parseBoolean(parameters.get(DataSourceWriteOptions.ASYNC_COMPACT_ENABLE().key())); boolean inlineCompact = !asyncCompact && parameters.get(DataSourceWriteOptions.TABLE_TYPE().key()) .equals(DataSourceWriteOptions.MOR_TABLE_TYPE_OPT_VAL()); boolean asyncClusteringEnabled = Boolean.parseBoolean(parameters.get(DataSourceWriteOptions.ASYNC_CLUSTERING_ENABLE().key())); boolean inlineClusteringEnabled = Boolean.parseBoolean(parameters.get(DataSourceWriteOptions.INLINE_CLUSTERING_ENABLE().key())); // insert/bulk-insert combining to be true, if filtering for duplicates boolean combineInserts = Boolean.parseBoolean(parameters.get(DataSourceWriteOptions.INSERT_DROP_DUPS().key())); HoodieWriteConfig.Builder builder = HoodieWriteConfig.newBuilder() .withPath(basePath).withAutoCommit(false).combineInput(combineInserts, true); if (schemaStr != null) { builder = builder.withSchema(schemaStr); return builder.forTable(tblName) .withIndexConfig(HoodieIndexConfig.newBuilder().withIndexType(IndexType.BLOOM).build()) .withCompactionConfig(HoodieCompactionConfig.newBuilder() .withPayloadClass(parameters.get(DataSourceWriteOptions.PAYLOAD_CLASS_NAME().key())) .withInlineCompaction(inlineCompact).build()) .withClusteringConfig(HoodieClusteringConfig.newBuilder() .withInlineClustering(inlineClusteringEnabled) .withAsyncClustering(asyncClusteringEnabled).build()) // 在这里设置里OrderingField 的值等于 PRECOMBINE_FIELD,所以默认和PRECOMBINE_FIELD是同步修改的 .withPayloadConfig(HoodiePayloadConfig.newBuilder().withPayloadOrderingField(parameters.get(DataSourceWriteOptions.PRECOMBINE_FIELD().key())) .build()) // override above with Hoodie configs specified as options. .withProps(parameters).build(); }如果确实想修改默认值,即和PRECOMBINE_FIELD不一样,那么sql:set hoodie.payload.ordering.field=ts; DF:.option("hoodie.payload.ordering.field", "ts") .option(HoodiePayloadProps.PAYLOAD_ORDERING_FIELD_PROP_KEY, "ts")
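To see the "greater ordering value wins" rule in isolation, here is a small sketch that builds two OverwriteWithLatestAvroPayload instances by hand and calls preCombine directly, the same way SparkWriteHelper.deduplicateRecords does inside reduceByKey. The Avro schema and field values are made up for the demo, and the generic signatures may need minor tweaks depending on the Hudi version.

import org.apache.avro.Schema
import org.apache.avro.generic.GenericData
import org.apache.hudi.common.model.OverwriteWithLatestAvroPayload

object PreCombineDemo {
  def main(args: Array[String]): Unit = {
    // demo-only schema with a ts column used as the ordering value
    val schema = new Schema.Parser().parse(
      """{"type":"record","name":"demo","fields":[
        |{"name":"id","type":"int"},{"name":"name","type":"string"},{"name":"ts","type":"long"}]}""".stripMargin)

    def record(id: Int, name: String, ts: Long) = {
      val r = new GenericData.Record(schema)
      r.put("id", id); r.put("name", name); r.put("ts", ts)
      r
    }

    // orderingVal is the preCombine field value, passed explicitly just like HoodieSparkSqlWriter does
    val payloadTs100 = new OverwriteWithLatestAvroPayload(record(1, "a", 100L), java.lang.Long.valueOf(100L))
    val payloadTs99  = new OverwriteWithLatestAvroPayload(record(1, "b", 99L), java.lang.Long.valueOf(99L))

    // preCombine keeps the payload with the greater ordering value, so the ts=100 payload survives
    val winner = payloadTs99.preCombine(payloadTs100)
    println(winner eq payloadTs100) // expected: true
  }
}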
前言总结 HUDI preCombinedField,分两大类总结,一类是Spark SQL,这里指的是merge,因为只有merge语句中有多条记录,讨论preCombinedField才有意义;一类是Spark DF,HUDI0.9版本支持SQL建表和增删改查总结先说结论:Spark DF建表写数据时(含更新):1、UPSERT,当数据重复时(这里指同一主键对应多条记录),程序在写数据前会根据预合并字段ts进行去重,去重保留ts值最大的那条记录,且无论新记录的ts值是否大于历史记录的ts值,都会覆盖写,直接更新。2、INSERT时,没有预合并,程序依次写入,实际更新为最后一条记录,且无论新记录的ts值是否大于历史记录的ts值,都会覆盖写,直接更新。Spark SQL建表,写数据时(含更新):有ts时,预合并时如果数据重复取预合并字段值最大的那条记录,最大值相同的取第一个。写数据时,ts值大于等于历史ts值,才会更新,小于历史值则不更新。没有ts时,则默认将主键字段的第一个值作为预合并字段,如果数据重复,去重时会取第一个值,写数据时,直接覆盖历史数据(因为这里的预合并字段为主键字段,等于历史值,其实原理跟上面有ts时一样)PRECOMBINE_FIELD.key -> targetKey2SourceExpression.keySet.head, // set a default preCombine field说明:1、这里有ts代表设置了preCombinedField字段2、hudi默认使用布隆索引,布隆索引只保证同一分区下同一个主键对应的值唯一,可以使用全局索引保证所有分区值唯一,这里不展开细说private String getDefaultIndexType(EngineType engineType) { switch (engineType) { case SPARK: return HoodieIndex.IndexType.BLOOM.name(); case FLINK: case JAVA: return HoodieIndex.IndexType.INMEMORY.name(); default: throw new HoodieNotSupportedException("Unsupported engine " + engineType); }3、如果在测试过程中,发现和我的结论不一致,可能和后面的注意事项有关。4、当指定了hoodie.datasource.write.insert.drop.duplicates=true时,不管是insert还是upsert,如果存在历史数据则不更新。实际在源码中,如果为upsert,也会修改为insert。if (hoodieConfig.getBoolean(INSERT_DROP_DUPS) && operation == WriteOperationType.UPSERT) { log.warn(s"$UPSERT_OPERATION_OPT_VAL is not applicable " + s"when $INSERT_DROP_DUPS is set to be true, " + s"overriding the $OPERATION to be $INSERT_OPERATION_OPT_VAL") operation = WriteOperationType.INSERT }Spark DF先说DF建表,DF写hudi表时,默认情况下,hudi,必须指定preCombinedField,否则,会抛出异常(当为insert等其他类型时,preCombinedField可以不用设置,具体见后面的源码解读部分),示例如下import org.apache.hudi.DataSourceWriteOptions._ import org.apache.hudi.QuickstartUtils.{DataGenerator, convertToStringList, getQuickstartWriteConfigs} import org.apache.spark.sql.SparkSession val spark = SparkSession.builder() .master("local[*]") .appName("TestHuDiPreCombinedFiled") .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer") .config("spark.sql.extensions", "org.apache.spark.sql.hudi.HoodieSparkSessionExtension") .getOrCreate() val tableName = "test_hudi_table" val data = Array((7, "name12", 1.21, 108L, "2021-05-06"), (7, "name2", 2.22, 108L, "2021-05-06"), (7, "name3", 3.45, 108L, "2021-05-06") val df = spark.createDataFrame(data).toDF("id", "name", "price", "ts", "dt") df.write.format("hudi"). options(getQuickstartWriteConfigs). option(PRECOMBINE_FIELD.key(), "ts"). //指定preCombinedField=ts option(RECORDKEY_FIELD.key(), "id"). option(PARTITIONPATH_FIELD.key(), "dt"). option(HIVE_STYLE_PARTITIONING.key(), true). //hive 分区路径的格式是否和hive一样,如果true,则:分区字段= option("hoodie.table.name", tableName). // option("hoodie.datasource.write.insert.drop.duplicates", true). //不更新 // option(OPERATION.key(), "INSERT"). option(KEYGENERATOR_CLASS_NAME.key(), "org.apache.hudi.keygen.ComplexKeyGenerator"). mode("append"). save(s"/tmp/${tableName}") val read_df = spark. read. format("hudi"). 
load(s"/tmp/${tableName}" + "/*") read_df.show()Spark DF写数据默认OPERATION为UPSERT,当数据重复时(这里指同一主键对应多条记录),程序在写数据前会根据预合并字段ts进行去重,去重保留ts值最大的那条记录,且无论新记录的ts值是否大于历史记录的ts值,都会覆盖写,直接更新。当OPERATION为INSERT时(option(OPERATION_OPT_KEY.key(), “INSERT”)),ts不是必须的,可以不设置,没有预合并,程序依次写入,实际更新为最后一条记录,且无论新记录的ts值是否大于历史记录的ts值,都会覆盖写,直接更新。附hoodie.properties:#Properties saved on Sat Jul 10 15:08:16 CST 2021 #Sat Jul 10 15:08:16 CST 2021 hoodie.table.precombine.field=ts hoodie.table.name=test_hudi_table hoodie.archivelog.folder=archived hoodie.table.type=COPY_ON_WRITE hoodie.table.version=1 hoodie.timeline.layout.version=1 hoodie.table.partition.columns=dt可见,hudi表元数据里有hoodie.table.precombine.field=ts,代表preCombinedField生效SQLSQL与DF不同,分为两种,有预合并和没有预合并没有预合并SQL默认没有预合并spark.sql( s""" | create table ${tableName} ( | id int, | name string, | price double, | ts long, | dt string |) using hudi | partitioned by (dt) | options ( | primaryKey = 'id', | type = 'cow' | location '/tmp/${tableName}' |""".stripMargin) spark.sql(s"show create table ${tableName}").show(false) spark.sql( s""" |merge into ${tableName} as t0 |using ( | select 1 as id, 'hudi' as name, 97 as price, 99 as ts, '2021-05-05' as dt,'INSERT' as opt_type union | select 1 as id, 'hudi_2' as name, 98 as price, 99 as ts, '2021-05-05' as dt,'UPDATE' as opt_type union | select 1 as id, 'hudi_2' as name, 99 as price, 99 as ts, '2021-05-05' as dt,'UPDATE' as opt_type union | select 3 as id, 'hudi' as name, 10 as price, 110 as ts, '2021-05-05' as dt ,'DELETE' as opt_type | ) as s0 |on t0.id = s0.id |when matched and opt_type!='DELETE' then update set * |when matched and opt_type='DELETE' then delete |when not matched and opt_type!='DELETE' then insert * |""".stripMargin) spark.table(tableName).show()没有设置预合并字段值,如果数据重复,去重时会取第一个值,写数据时,直接覆盖历史数据查看hoodie.properties和在spark.sql(s”show create table ${tableName}”).show(false)打印信息里发现表的元数据信息确实没有preCombinedField,示例中虽然有ts字段,但是没有没有显示设置,当然可以直接去掉ts字段,大家可以自行测试。有预合并spark.sql( s""" | create table ${tableName} ( | id int, | name string, | price double, | ts long, | dt string |) using hudi | partitioned by (dt) | options ( | primaryKey = 'id', | preCombineField = 'ts', | type = 'cow' | location '/tmp/${tableName}' |""".stripMargin) spark.sql(s"show create table ${tableName}").show(false) spark.sql( s""" |merge into ${tableName} as t0 |using ( | select 1 as id, 'hudi' as name, 97 as price, 99 as ts, '2021-05-05' as dt,'INSERT' as opt_type union | select 1 as id, 'hudi_2' as name, 98 as price, 99 as ts, '2021-05-05' as dt,'UPDATE' as opt_type union | select 1 as id, 'hudi_2' as name, 99 as price, 99 as ts, '2021-05-05' as dt,'UPDATE' as opt_type union | select 3 as id, 'hudi' as name, 10 as price, 110 as ts, '2021-05-05' as dt ,'DELETE' as opt_type | ) as s0 |on t0.id = s0.id |when matched and opt_type!='DELETE' then update set * |when matched and opt_type='DELETE' then delete |when not matched and opt_type!='DELETE' then insert * |""".stripMargin) spark.table(tableName).show()SQL的唯一的区别是在建表语句中加了配置preCombineField = ‘ts’,同样可以在hoodie.properties和打印信息里查看是否有hoodie.table.precombine.field=ts信息。预合并时如果数据重复取预合并字段值最大的那条记录,最大值相同的取第一个。写数据时,ts值大于等于历史ts值,才会更新,小于历史值则不更新。SQL与DF结合先用SQL建表,再用DF写数据。这种情况主要是想建表时不想多一列ts字段,而在预合并时可以添加一列预合并字段进行去重,因为目前的版本SQL没有实现该功能。在SQL建表时如果指定了 preCombineField = ‘ts’,则表结构中必须有ts这个字段。val tableName = "test_hudi_table4" spark.sql( s""" | create table ${tableName} ( | id int, | name string, | price double, | dt string |) using hudi | partitioned by (dt) | options ( | primaryKey = 'id', | type = 'cow' | location '/tmp/${tableName}' |""".stripMargin) val 
data = Array((7, "name12", 1.21, 106L, "2021-05-06"), (7, "name2", 2.22, 108L, "2021-05-06"), (7, "name3", 3.45, 107L, "2021-05-06") val df = spark.createDataFrame(data).toDF("id", "name", "price", "ts", "dt") df.show() df.write.format("hudi"). options(getQuickstartWriteConfigs). option(PRECOMBINE_FIELD.key(), "ts"). //指定preCombinedField=ts option(RECORDKEY_FIELD.key(), "id"). option(PARTITIONPATH_FIELD.key(), "dt"). option(HIVE_STYLE_PARTITIONING.key(), true). //hive 分区路径的格式是否和hive一样,如果true,则:分区字段= option("hoodie.table.name", tableName). // option("hoodie.datasource.write.insert.drop.duplicates", true). //不更新 // option(OPERATION.key(), "INSERT"). option(KEYGENERATOR_CLASS_NAME.key(), "org.apache.hudi.keygen.ComplexKeyGenerator"). save(s"/tmp/${tableName}") val read_df = spark. read. format("hudi"). load(s"/tmp/${tableName}" + "/*") read_df.show() spark.table(tableName).show()上面的程序主要是用SQL先建了表的元数据,然后再用程序指定了PRECOMBINE_FIELD_OPT_KEY=ts,这样就实现了既可以预合并去重,也不用在建表中指定ts字段。但是打印中发现用程序读parquet文件时多了ts列,读表时因为元数据里没有ts列,没有打印出来,实际文件存储的有ts这一列。上面只是模拟了这一场景,而我们想实现的是下面的spark.sql( s""" | create table ${tableName} ( | id int, | name string, | price double, | dt string |) using hudi | partitioned by (dt) | options ( | primaryKey = 'id', | preCombineField = 'ts', | type = 'cow' | location '/tmp/${tableName}' |""".stripMargin) spark.sql( s""" |merge into ${tableName} as t0 |using ( | select 1 as id, 'hudi' as name, 97 as price, 99 as ts, '2021-05-05' as dt,'INSERT' as opt_type union | select 1 as id, 'hudi_2' as name, 98 as price, 99 as ts, '2021-05-05' as dt,'UPDATE' as opt_type union | select 1 as id, 'hudi_2' as name, 99 as price, 99 as ts, '2021-05-05' as dt,'UPDATE' as opt_type union | select 3 as id, 'hudi' as name, 10 as price, 110 as ts, '2021-05-05' as dt ,'DELETE' as opt_type | ) as s0 |on t0.id = s0.id |when matched and opt_type!='DELETE' then update set * |when matched and opt_type='DELETE' then delete |when not matched and opt_type!='DELETE' then insert * |""".stripMargin)在建表时指定了preCombineField = ‘ts’,但是表结构中没有ts字段,而且后面的merge sql拼接时添加这一列。目前master分支还不支持这种情况,如果想实现这一情况,可以自己尝试修改源码支持。代码示例代码已上传到gitee,由于公司网把github屏蔽了,以后暂时转到gitee上。源码解读解读部分源码更新:2021-09-26,因为0.9.0版本已发布,故更新源代码解析,可能存在部分源代码没有更新程序写hudi时ts的必须性默认配置时,如果不指定PRECOMBINE_FIELD_OPT_KEY,则会抛出以下异常:21/07/13 20:04:14 ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0) org.apache.hudi.exception.HoodieException: ts(Part -ts) field not found in record. 
Acceptable fields were :[id, name, price, tss, dt] at org.apache.hudi.avro.HoodieAvroUtils.getNestedFieldVal(HoodieAvroUtils.java:437) at org.apache.hudi.HoodieSparkSqlWriter$$anonfun$1.apply(HoodieSparkSqlWriter.scala:233) at org.apache.hudi.HoodieSparkSqlWriter$$anonfun$1.apply(HoodieSparkSqlWriter.scala:230) at scala.collection.Iterator$$anon$11.next(Iterator.scala:409) at scala.collection.Iterator$$anon$10.next(Iterator.scala:393) at scala.collection.Iterator$class.foreach(Iterator.scala:893) at scala.collection.AbstractIterator.foreach(Iterator.scala:1336) at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:59) at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:104) at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:48) at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:310) at scala.collection.AbstractIterator.to(Iterator.scala:1336) at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:302) at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1336) at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:289) at scala.collection.AbstractIterator.toArray(Iterator.scala:1336) at org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$31.apply(RDD.scala:1409) at org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$31.apply(RDD.scala:1409) at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2101) at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2101) at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90) at org.apache.spark.scheduler.Task.run(Task.scala:123) at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408) at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748)对应的源码:HoodieSparkSqlWriter.scala 230、233行230 val hoodieAllIncomingRecords = genericRecords.map(gr => { val processedRecord = getProcessedRecord(partitionColumns, gr, dropPartitionColumns) val hoodieRecord = if (shouldCombine) { 233 val orderingVal = HoodieAvroUtils.getNestedFieldVal(gr, hoodieConfig.getString(PRECOMBINE_FIELD), false) .asInstanceOf[Comparable[_]] DataSourceUtils.createHoodieRecord(processedRecord, orderingVal, keyGenerator.getKey(gr), hoodieConfig.getString(PAYLOAD_CLASS_NAME)) } else { DataSourceUtils.createHoodieRecord(processedRecord, keyGenerator.getKey(gr), hoodieConfig.getString(PAYLOAD_CLASS_NAME)) hoodieRecord }).toJavaRDD()getNestedFieldValpublic static Object getNestedFieldVal(GenericRecord record, String fieldName, boolean returnNullIfNotFound) { String[] parts = fieldName.split("\\."); ...... if (returnNullIfNotFound) { return null; 434 } else if (valueNode.getSchema().getField(parts[i]) == null) { throw new HoodieException( fieldName + "(Part -" + parts[i] + ") field not found in record. 
Acceptable fields were :" 437 + valueNode.getSchema().getFields().stream().map(Field::name).collect(Collectors.toList())); } else { throw new HoodieException("The value of " + parts[i] + " can not be null"); }233行,如果shouldCombine==true,则会调用getNestedFieldVal,并将PRECOMBINE_FIELD_OPT_KEY的值作为fieldName参数传给getNestedFieldVal,而在getNestedFieldVal的434行发现当PRECOMBINE_FIELD_OPT_KEY的值==null时抛出上面的异常。可以发现当shouldCombine==true,才会调用getNestedFieldVal,才会抛出该异常,而shouldCombine何时为true呢val shouldCombine = parameters(INSERT_DROP_DUPS.key()).toBoolean || operation.equals(WriteOperationType.UPSERT) || parameters.getOrElse(HoodieWriteConfig.COMBINE_BEFORE_INSERT.key(), HoodieWriteConfig.COMBINE_BEFORE_INSERT.defaultValue()).toBoolean 当INSERT_DROP_DUPS为true或者操作类型为UPSERT时,shouldCombine为true,默认的INSERT_DROP_DUPS=false/** * Flag to indicate whether to drop duplicates upon insert. * By default insert will accept duplicates, to gain extra performance. val INSERT_DROP_DUPS: ConfigProperty[String] = ConfigProperty .key("hoodie.datasource.write.insert.drop.duplicates") .defaultValue("false") .withDocumentation("If set to true, filters out all duplicate records from incoming dataframe, during insert operations.")也就是默认情况下,upsert操作,ts是必须的,而insert等其他操作可以没有ts值。这样我们就可以根据实际情况灵活运用了。注意用SQL创建新表或者DF append模式创建新表时,如果对应的数据目录已存在,需要先将文件夹删掉,因为hoodie.properties里保存了表的元数据信息,程序里会根据文件信息判断表是否存在,如果存在,会复用旧表的元数据。这种情况存在于想用同一个表测试上面多种情况
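To make the shouldCombine point above concrete, here is a sketch of an INSERT write whose DataFrame has no ts column at all: with the operation set to INSERT and hoodie.datasource.write.insert.drop.duplicates left at its default false, shouldCombine stays false and getNestedFieldVal is never called, so no preCombine field is needed. Table name and path are placeholders in the style of the earlier examples.

import org.apache.hudi.DataSourceWriteOptions._
import org.apache.hudi.QuickstartUtils.getQuickstartWriteConfigs
import org.apache.spark.sql.SparkSession

object InsertWithoutPreCombine {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("InsertWithoutPreCombine")
      .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
      .getOrCreate()
    import spark.implicits._

    val tableName = "test_hudi_insert_no_ts" // placeholder
    // note: no ts column in the schema
    val df = Seq((1, "name1", "2021-05-06"), (2, "name2", "2021-05-06")).toDF("id", "name", "dt")

    df.write.format("hudi").
      options(getQuickstartWriteConfigs).
      option(OPERATION.key(), "INSERT").       // keeps shouldCombine false
      option(RECORDKEY_FIELD.key(), "id").
      option(PARTITIONPATH_FIELD.key(), "dt").
      option("hoodie.table.name", tableName).
      mode("append").
      save(s"/tmp/${tableName}")

    spark.read.format("hudi").load(s"/tmp/${tableName}" + "/*").show()
    spark.stop()
  }
}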
前言因添加列在平时可能会经常用到,但是长时间不用,可能会忘记应该用哪个函数,这样再重新查找比较耽误时间,于是总结代码进行备忘。主要总结:根据现有的列添加添加自增ID添加一列常量添加当前时间转换为timestamp类型转换为date类型代码package com.dkl.blog.spark.df import java.util.Date import org.apache.commons.lang.time.DateFormatUtils import org.apache.spark.sql.expressions.Window import org.apache.spark.sql.{Row, SparkSession} import org.apache.spark.sql.functions._ import org.apache.spark.sql.types.{LongType, StructField, StructType} * Created by dongkelun on 2021/6/16 10:39 object DfAddCols { def main(args: Array[String]): Unit = { val spark = SparkSession.builder() .appName("DeltaLakeDemo") .master("local") .getOrCreate() val df = spark.range(0, 5).repartition(2) .withColumn("new_col", col("id") + 1) //根据现有的列添加 .withColumn("uuid", monotonically_increasing_id) //自带函数添加自增ID,分区不连续,分区内连续 .withColumn("year", lit("2021")) //添加一列常量,主要用lit函数 .withColumn("time", lit(DateFormatUtils.format(new Date(), "yyyy-MM-dd HH:mm:ss"))) //添加当前时间 .withColumn("timestamp", lit("2021-06-16").cast("timestamp")) //转换为timestamp类型 .withColumn("date", lit("2021-06-16").cast("date")) //转换为date类型 df.printSchema() df.show() //用zipWithIndex重建DF,分区连续 val rows = df.rdd.zipWithIndex.map { case (r: Row, id: Long) => Row.fromSeq(id +: r.toSeq) } val dfWithPK = spark.createDataFrame(rows, StructType(StructField("pk", LongType, false) +: df.schema.fields)) //用zipWithUniqueId重建DF val rows_2 = dfWithPK.rdd.zipWithUniqueId.map { case (r: Row, id: Long) => Row.fromSeq(id +: r.toSeq) } val dfWithPK_2 = spark.createDataFrame(rows_2, StructType(StructField("pk_2", LongType, false) +: dfWithPK.schema.fields)) dfWithPK_2.show() //通过窗口函数排序 val w = Window.orderBy("id") dfWithPK_2.repartition(2).withColumn("pk_3", row_number().over(w)).show() spark.stop() }运行结果|-- id: long (nullable = false) |-- new_col: long (nullable = false) |-- uuid: long (nullable = false) |-- year: string (nullable = false) |-- time: string (nullable = false) |-- timestamp: timestamp (nullable = true) |-- date: date (nullable = true) +---+-------+----------+----+-------------------+-------------------+----------+ | id|new_col| uuid|year| time| timestamp| date| +---+-------+----------+----+-------------------+-------------------+----------+ | 0| 1| 0|2021|2021-06-16 11:32:33|2021-06-16 11:32:33|2021-06-16| | 2| 3| 1|2021|2021-06-16 11:32:33|2021-06-16 11:32:33|2021-06-16| | 4| 5| 2|2021|2021-06-16 11:32:33|2021-06-16 11:32:33|2021-06-16| | 1| 2|8589934592|2021|2021-06-16 11:32:33|2021-06-16 11:32:33|2021-06-16| | 3| 4|8589934593|2021|2021-06-16 11:32:33|2021-06-16 11:32:33|2021-06-16| +---+-------+----------+----+-------------------+-------------------+----------+ +----+---+---+-------+----------+----+-------------------+-------------------+----------+ |pk_2| pk| id|new_col| uuid|year| time| timestamp| date| +----+---+---+-------+----------+----+-------------------+-------------------+----------+ | 0| 0| 0| 1| 0|2021|2021-06-16 11:32:33|2021-06-16 11:32:33|2021-06-16| | 2| 1| 2| 3| 1|2021|2021-06-16 11:32:33|2021-06-16 11:32:33|2021-06-16| | 4| 2| 4| 5| 2|2021|2021-06-16 11:32:33|2021-06-16 11:32:33|2021-06-16| | 1| 3| 1| 2|8589934592|2021|2021-06-16 11:32:33|2021-06-16 11:32:33|2021-06-16| | 3| 4| 3| 4|8589934593|2021|2021-06-16 11:32:33|2021-06-16 11:32:33|2021-06-16| +----+---+---+-------+----------+----+-------------------+-------------------+----------+ 21/06/16 11:32:36 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 
+----+---+---+-------+----------+----+-------------------+-------------------+----------+----+
|pk_2| pk| id|new_col|      uuid|year|               time|          timestamp|      date|pk_3|
+----+---+---+-------+----------+----+-------------------+-------------------+----------+----+
|   0|  0|  0|      1|         0|2021|2021-06-16 11:32:33|2021-06-16 11:32:33|2021-06-16|   1|
|   1|  3|  1|      2|8589934592|2021|2021-06-16 11:32:33|2021-06-16 11:32:33|2021-06-16|   2|
|   2|  1|  2|      3|         1|2021|2021-06-16 11:32:33|2021-06-16 11:32:33|2021-06-16|   3|
|   3|  4|  3|      4|8589934593|2021|2021-06-16 11:32:33|2021-06-16 11:32:33|2021-06-16|   4|
|   4|  2|  4|      5|         2|2021|2021-06-16 11:32:33|2021-06-16 11:32:33|2021-06-16|   5|
+----+---+---+-------+----------+----+-------------------+-------------------+----------+----+
UDF
You can also add new columns with a user-defined function; see Spark UDF使用详解及代码示例 for details, and weigh the pros and cons of each approach yourself.
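A rough sketch of that UDF route, next to a built-in expression for comparison; the column name and label logic are made up for illustration. Built-in functions are usually preferable when they exist, since Catalyst can optimize them, while a UDF is a black box to the optimizer.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, udf}

object DfAddColWithUdf {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().master("local[*]").appName("DfAddColWithUdf").getOrCreate()

    // illustrative UDF that derives a label column from id
    val label = udf((id: Long) => s"id_$id")

    spark.range(0, 5)
      .withColumn("label", label(col("id")))   // UDF-based column
      .withColumn("doubled", col("id") * 2)    // built-in expression for comparison
      .show()

    spark.stop()
  }
}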
前言本文讲解如何通过数据库客户端界面工具DBeaver连接远程Kerberos环境下的Hive。因为在远程服务器上的命令行里写SQL查询Hive表,如果数据量和表字段比较多,命令行界面不利于分析表数据,所以需要客户端工具如DBeave远程连接Hive查询数据,但是DBeaver默认的不能访问Kerberos下的Hive,需要一些配置才可以访问,这里记录一下。1、DBeaver连接Hive关于DBeaver如何连接正常的不带Kerberos认证的Hive,请参考通过数据库客户端界面工具DBeaver连接Hive,本文只讲解如何在Windows上用DBeaver对kerberos认证2、安装Kerberos客户端官网下载https://web.mit.edu/kerberos/dist/index.html,我下载的是这个:Kerberos for Windows Release 4.1 - current release下载下来后,一键安装3、krb5.ini我的kerberos客户端安装目录为D:\program\company\kerberos,将集群上的/etc/krb5.conf,复制到此目录下,改名为krb5.ini,内容如下:[libdefaults] renew_lifetime = 7d forwardable = true default_realm = INDATA.COM ticket_lifetime = 24h dns_lookup_realm = false dns_lookup_kdc = false default_ccache_name = /tmp/krb5cc_%{uid} #default_tgs_enctypes = aes des3-cbc-sha1 rc4 des-cbc-md5 #default_tkt_enctypes = aes des3-cbc-sha1 rc4 des-cbc-md5 [domain_realm] indata.com = INDATA.COM [logging] default = FILE:/var/log/krb5kdc.log admin_server = FILE:/var/log/kadmind.log kdc = FILE:/var/log/krb5kdc.log [realms] INDATA.COM = { admin_server = indata-192.168.44.128.indata.com:17490 kdc = indata-192.168.44.128.indata.com kdc = indata-192.168.44.129.indata.com }4、配置kerberos环境变量KRB5_CONFIG:D:\program\company\kerberos\krb5.iniKRB5CCNAME:D:\tmp\krb5cache重启生效5、验证keberos命令将集群服务器上的keytab文件下载到本地,这里为D:\conf\inspur\hive.service.keytab,然后用下面的命令验证kinit -kt D:\conf\inspur\hive.service.keytab hive/indata-192.168.44.128.indata.com@INDATA.COM我的报了下面的异常Exception: too many parameters java.lang.IllegalArgumentException: too many parameters at sun.security.krb5.internal.tools.KinitOptions.<init>(KinitOptions.java:153) at sun.security.krb5.internal.tools.Kinit.<init>(Kinit.java:147) at sun.security.krb5.internal.tools.Kinit.main(Kinit.java:113)一开始很奇怪,因为kinit命令就是这么用的,为啥会报参数过多呢,网上查资料发现,原来是jdk也自带了一个kinit命令,这里用的是jdk自带的kinit命令,需要再配置一下环境变量,在path里添加D:\program\company\kerberos\bin,需要注意的是要加在java的环境变量(%JAVA_HOME%\jre\bin)的前面。然后重试上面的命令,发现不报错了,用klist验证klist Ticket cache: FILE:D:\tmp\krb5cache Default principal: hive/indata-192.168.44.128.indata.com@INDATA.COM Valid starting Expires Service principal 06/03/21 15:43:03 06/04/21 15:43:02 krbtgt/INDATA.COM@INDATA.COM renew until 06/10/21 15:43:02也可以在kerberos界面上查看,如下图:注意:后面的用DBeaver连接Hive时,首先用kinit 命令进行缓存票据6、配置DBeaver在配置文件D:\program\company\dbeaver\dbeaver.ini最后添加-Djavax.security.auth.useSubjectCredsOnly=false -Djava.security.krb5.conf=D:\program\company\kerberos\krb5.ini -Dsun.security.krb5.debug=true注意-Djava.security.krb5.conf对应的值不要加引号””,网上很多教程都是加””,这样会导致DBeaver识别不到krb5.ini的配置文件,从而导致对kerberos的认证失败。这里坑了我很长时间,一直搞不清为啥认证失败。。。7、连接Spark Thrift Server首先启动Spark Thrift Server,关于怎么启动,参考https://dongkelun.com/2021/02/19/javaSparkThriftServer/然后在Dbeaver里:新建->数据库连接->选择Hadoop-Spark Hive->输入主机IP、端口、数据库->然后编辑驱动设置,删除原来的驱动,添加这个项目的jar包https://github.com/dongkelun/java-spark-thrift-server-demo其中这里的数据库为default;principal=HTTP/indata-192.168.44.128.indata.com@INDATA.COM?hive.server2.transport.mode=http;hive.server2.thrift.http.path=cliservice然后测试连接如果不成功,检查一下上述的配置8、连接Hive Server测试连接前先启动hive相关的服务,然后在Dbeaver里:新建->数据库连接->选择Hadoop-Spark Hive->输入主机IP、端口、数据库->然后编辑驱动设置,删除原来的驱动,添加这个项目的jar包https://github.com/dongkelun/java-spark-thrift-server-demo其中这里的数据库为default;principal=hive/indata-192.168.44.128.indata.com@INDATA.COM然后测试连接
前言总结Java如何连接Kerberos认证下的Spark Thrift Server/Hive Server总结启动关于如何启动 Spark Thrift Server和 Hive Server 请参考https://dongkelun.com/2021/02/19/javaSparkThriftServer/Java 代码pom 依赖<dependencies> <dependency> <groupId>org.apache.hive</groupId> <artifactId>hive-jdbc</artifactId> <version>1.2.1</version> </dependency> <dependency> <groupId>org.apache.hadoop</groupId> <artifactId>hadoop-common</artifactId> <version>2.6.5</version> </dependency> </dependencies>关于依赖版本对应同样请参考https://dongkelun.com/2021/02/19/javaSparkThriftServer/配置文件hive.service.keytab和krb5.conf均为kerberos认证相关,从服务器上下载,krb5.conf路径在/etc下,具体根据自己服务器的配置去查找krb5.conf[libdefaults] renew_lifetime = 7d forwardable = true default_realm = INDATA.COM ticket_lifetime = 24h dns_lookup_realm = false dns_lookup_kdc = false default_ccache_name = /tmp/krb5cc_%{uid} #default_tgs_enctypes = aes des3-cbc-sha1 rc4 des-cbc-md5 #default_tkt_enctypes = aes des3-cbc-sha1 rc4 des-cbc-md5 [domain_realm] indata.com = INDATA.COM [logging] default = FILE:/var/log/krb5kdc.log admin_server = FILE:/var/log/kadmind.log kdc = FILE:/var/log/krb5kdc.log [realms] INDATA.COM = { admin_server = indata-192.168.44.128.indata.com:17490 kdc = indata-192.168.44.128.indata.com kdc = indata-192.168.44.129.indata.com }代码package com.dkl.blog; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.security.UserGroupInformation; import java.io.IOException; import java.sql.*; * Created by dongkelun on 2021/5/7 10:06 * java 访问 kerberos认证下的 Hive Server和 Spark Thrift Server public class SparkThriftServerDemoWithKerberos { private static String HIVE_JDBC_URL = "jdbc:hive2://192.168.44.128:10000/sjtt;principal=hive/indata-192-168-44-128.indata.com@INDATA.COM"; private static final String SPARK_JDBC_URL = "jdbc:hive2://192.168.44.128:20003/sjtt;" + "principal=HTTP/indata-192-168-44-128.indata.com@INDATA.COM?" 
+ "hive.server2.transport.mode=http;hive.server2.thrift.http.path=cliservice;"; private static final String PRINCIPAL = "hive/indata-192-168-44-128.indata.com@INDATA.COM"; private static final String KEYTAB = "D:\\conf\\inspur\\hive.service.keytab"; private static final String KRB5 = "D:\\conf\\inspur\\krb5.conf"; private static Configuration conf = null; static { conf = new Configuration(); public static void main(String[] args) throws SQLException { loadConfiguration(); //----------------------------------connect hive----------------------------------// System.out.println("select from hive"); jdbcDemo(HIVE_JDBC_URL); //------------------------------connect spark thrift server-----------------------// System.out.println("select from spark thrift server"); jdbcDemo(SPARK_JDBC_URL); public static void jdbcDemo(String jdbc_url) throws SQLException { Connection connection = null; try { connection = DriverManager.getConnection(jdbc_url); selectTable(connection); } catch (SQLException e) { e.printStackTrace(); } finally { connection.close(); public static void selectTable(Connection connection) { String sql = "select * from trafficbase_cljbxx limit 10"; Statement stmt = null; ResultSet rs = null; try { stmt = connection.createStatement(); rs = stmt.executeQuery(sql); System.out.println("====================================="); while (rs.next()) { System.out.println(rs.getString(1) + "," + rs.getString(2)); System.out.println("====================================="); } catch (SQLException e) { e.printStackTrace(); } finally { close(stmt); close(rs); private static void loadConfiguration() { // 初始化配置文件 try { conf.set("hadoop.security.authentication", "kerberos"); System.setProperty("java.security.krb5.conf", KRB5);// krb5文件路径 UserGroupInformation.setConfiguration(conf); UserGroupInformation.loginUserFromKeytab(PRINCIPAL, KEYTAB);// 入参:principal、keytab文件 } catch (IOException ioE) { System.err.println("使用keytab登陆失败"); ioE.printStackTrace(); * 关闭Statement * @param stmt private static void close(Statement stmt) { if (stmt != null) { try { stmt.close(); } catch (SQLException e) { e.printStackTrace(); * 关闭ResultSet * @param rs private static void close(ResultSet rs) { if (rs != null) { try { rs.close(); } catch (SQLException e) { e.printStackTrace(); }代码已上传到 github
前言因为公司的测试环境带有kerberos,而我经常需要本地连接测试集群上的hive,以进行源码调试。而本地认证远程集群的kerberos,并访问hive,和在服务器上提交Spark程序代码有些不同,所以专门研究了一下并进行总结。服务器上在服务器上提交Spark程序认证kerberos比较简单,有两种方法:使用kinit 缓存票据 kinit -kt /etc/security/keytabs/hive.service.keytab hive/indata-192-168-44-128.indata.com@INDATA.COM,然后提交Spark程序即可在spark-submit 中添加参数 –principal hive/indata-192-168-44-128.indata.com@INDATA.COM –keytab /etc/security/keytabs/hive.service.keytab本地本地连接,稍微复杂点,首先要配好环境,比如Hadoop的环境变量、winutils等,然后需要配置hosts,将服务器上的/etc/hosts里面的内容拷贝出来,粘贴Windows上的hosts文件里即可代码首先需要将集群上的hive-site.xml,core-site.xml,yarn-site.xml,hdfs-site.xml拷贝到src/main/resources文件夹中,其中hive-site.xml是为了连接hive,core-site.xml、hdfs-site.xml和yarn-site.xml是为了认证kerberospackage com.dkl.blog.spark.hive import org.apache.hadoop.conf.Configuration import org.apache.hadoop.security.UserGroupInformation import org.apache.spark.sql.SparkSession * Created by dongkelun on 2021/5/18 19:29 * Spark 本地连接远程服务器上带有kerberos认证的Hive object LocalSparkHiveWithKerberos { def main(args: Array[String]): Unit = { try { //等同于把krb5.conf放在$JAVA_HOME\jre\lib\security,一般写代码即可 System.setProperty("java.security.krb5.conf", "D:\\conf\\inspur\\krb5.conf") //下面的conf可以注释掉是因为在core-site.xml里有相关的配置,如果没有相关的配置,则下面的代码是必须的 // val conf = new Configuration // conf.set("hadoop.security.authentication", "kerberos") // UserGroupInformation.setConfiguration(conf) UserGroupInformation.loginUserFromKeytab("hive/indata-192-168-44-128.indata.com@INDATA.COM", "D:\\conf\\inspur\\hive.service.keytab") println(UserGroupInformation.getCurrentUser, UserGroupInformation.getLoginUser) } catch { case e: Exception => e.printStackTrace() val spark = SparkSession.builder() .master("local[*]") .appName("LocalSparkHiveWithKerberos") // .config("spark.kerberos.keytab", "hive/indata-192-168-44-128.indata.com@INDATA.COM") // .config("spark.kerberos.principal", "D:\\conf\\inspur\\hive.service.keytab") .enableHiveSupport() .getOrCreate() spark.table("sjtt.trafficbase_cljbxx").show() spark.stop() }代码已提交到github运行结果异常解决异常信息rg.apache.hadoop.security.AccessControlException: SIMPLE authentication is not enabled. Available:[TOKEN, KERBEROS]异常解决过程异常再现是将core-site.xml删除,然后将代码中注释的conf打开。这样从打印的UserGroupInformation.getCurrentUser信息可以发现kerberos认证是成功的,而且代码中设置了hadoop.security.authentication为kerberos,但是依旧报authentication为SIMPLE的异常,网上查资料查了很久都没解决,只能自己进行研究,在本地的Spark UI 界面的environment中查看Spark的环境配置信息发现,虽然在Spark的代码中配置了.config(“spark.kerberos.keytab”, “hive/indata-192-168-44-128.indata.com@INDATA.COM”)、.config(“spark.kerberos.principal”, “D:\conf\inspur\hive.service.keytab”),且在ui界面中也显示相同的配置,如下图但是依旧报同样的异常信息,后来在界面上发现,除了Spark Properties还有Hadoop Properties,代码中的配置只是改变了Spark Properties,没有改变Hadoop Properties,而Hadoop Properties中的hadoop.security.authentication依旧为simple,这有可能是导致异常的原因。那么如何改变Hadoop Properties,在Spark源码搜索发现如下文档# Custom Hadoop/Hive Configuration If your Spark application is interacting with Hadoop, Hive, or both, there are probably Hadoop/Hive configuration files in Spark's classpath. Multiple running applications might require different Hadoop/Hive client side configurations. You can copy and modify `hdfs-site.xml`, `core-site.xml`, `yarn-site.xml`, `hive-site.xml` in Spark's classpath for each application. In a Spark cluster running on YARN, these configuration files are set cluster-wide, and cannot safely be changed by the application. The better choice is to use spark hadoop properties in the form of `spark.hadoop.*`, and use spark hive properties in the form of `spark.hive.*`. 
For example, adding configuration "spark.hadoop.abc.def=xyz" represents adding hadoop property "abc.def=xyz", and adding configuration "spark.hive.abc=xyz" represents adding hive property "hive.abc=xyz". They can be considered as same as normal spark properties which can be set in `$SPARK_HOME/conf/spark-defaults.conf`文档说最好的选择是在代码中设置Spark.hadoop.*,即.config(“Spark.hadoop.security.authentication”, “kerberos”),然后尝试了一下,发现这样仅仅是改变的Spark Properties,依旧是同样的异常,也可能是我理解的有问题。异常解决方案最后的解决方案是按文档上的将core-site.xml和hdfs-site.xml拷贝到Spark的classpath下,即上面提到的src/main/resources,但是这样依旧可能没效果,原因是,配置文件没有同步到target/classes,这里需要在idea里点Build-Rebuild Project,然后确认一下target/classes是否有了core-site.xml文件就可以了
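Going by the documentation quoted above, setting a Hadoop property abc.def through Spark means prefixing the full key with spark.hadoop., so hadoop.security.authentication becomes spark.hadoop.hadoop.security.authentication (note the doubled hadoop). A minimal sketch of that form is below; whether it is enough on its own still depends on the rest of the client configuration, which is why copying core-site.xml and hdfs-site.xml onto the classpath remains the reliable fix.

import org.apache.spark.sql.SparkSession

object KerberosHadoopProps {
  def main(args: Array[String]): Unit = {
    System.setProperty("java.security.krb5.conf", "D:\\conf\\inspur\\krb5.conf")

    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("KerberosHadoopProps")
      // spark.hadoop.<key> entries are copied into the Hadoop Configuration, hence the doubled "hadoop"
      .config("spark.hadoop.hadoop.security.authentication", "kerberos")
      .enableHiveSupport()
      .getOrCreate()

    // should print "kerberos" if the property made it into the Hadoop Properties section of the UI
    println(spark.sparkContext.hadoopConfiguration.get("hadoop.security.authentication"))
    spark.stop()
  }
}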
前言本文记录Spark如何在表存在的情况时覆盖写入mysql但不修改已有的表结构,并进行主要的源码跟踪以了解其实现原理。主要场景为先用建表语句建好mysql表,然后用spark导入数据,可能会存在多次全表覆写导入的情况。代码zz已上传github主要的参数为.option(“truncate”, true),可以参考Spark官网http://spark.apache.org/docs/latest/sql-data-sources-jdbc.html主要代码逻辑为,读取csv,进行日期转化,然后覆盖写入到已经建好的mysql表中。package com.dkl.blog.spark.mysql import java.util.Properties import org.apache.spark.sql.SparkSession import org.apache.spark.sql.functions.{col, to_date} * Created by dongkelun on 2021/4/29 14:06 * Spark覆盖写入mysql表但不修改已有的表结构 object SparkMysqlOverwriteTruncateTable { def main(args: Array[String]): Unit = { val spark = SparkSession .builder() .appName("SparkMysqlOverwriteTruncateTable") .master("local[*]") .getOrCreate() //rewriteBatchedStatements参数为批量写入数据,可以增加写入效率 val url = "jdbc:mysql://192.168.44.128:3306/test?useUnicode=true&characterEncoding=utf-8&rewriteBatchedStatements=true" val tableName = "trafficbase_cljbxx" val prop = new Properties() prop.put("user", "root") prop.put("password", "Root-123456") //读取本地csv var df = spark.read.option("header", "true").csv("D:\\文档\\inspur\\csg\\功能测评\\测试数据\\trafficbase_cljbxx.csv") //字符串转为日期类型 df = df.withColumn("czsj", to_date(col("czsj"), "dd/mm/yyyy")) df.show() df.write.mode("overwrite") .option("truncate", true) //覆盖写入数据前先truncate table而不是drop table .jdbc(url = url, table = tableName, prop) spark.stop }源码跟踪本文仅进行简单的源码跟踪Spark2.2.1本来想以Spark3.0.1版本进行讲解,后来发现Spark3源码稍微做了些改动,因为本人之前主要用Spark2.2进行学习总结,所以先用Spark2.2.1的源码进行讲解,后面再在此基础上进行讲解Spark3的源码的一些变化,其实主要逻辑是一样的。JDBC先从入口jdbc函数开始def jdbc(url: String, table: String, connectionProperties: Properties): Unit = { assertNotPartitioned("jdbc") assertNotBucketed("jdbc") // connectionProperties should override settings in extraOptions. this.extraOptions ++= connectionProperties.asScala // explicit url and dbtable should override all this.extraOptions += ("url" -> url, "dbtable" -> table) format("jdbc").save() }FORMAT(“JDBC”).SAVE()format方法返回DataFrameWriterdef format(source: String): DataFrameWriter[T] = { this.source = source //source="jdbc" }def save(): Unit = { if (source.toLowerCase(Locale.ROOT) == DDLUtils.HIVE_PROVIDER) { throw new AnalysisException("Hive data source can only be used with tables, you can not " + "write files of Hive data source directly.") assertNotBucketed("save") runCommand(df.sparkSession, "save") { SaveIntoDataSourceCommand( query = df.logicalPlan, provider = source, partitionColumns = partitioningColumns.getOrElse(Nil), options = extraOptions.toMap, mode = mode) }SAVEINTODATASOURCECOMMAND接着执行SaveIntoDataSourceCommand的run方法case class SaveIntoDataSourceCommand( query: LogicalPlan, provider: String, partitionColumns: Seq[String], options: Map[String, String], mode: SaveMode) extends RunnableCommand { override protected def innerChildren: Seq[QueryPlan[_]] = Seq(query) override def run(sparkSession: SparkSession): Seq[Row] = { DataSource( sparkSession, className = provider, partitionColumns = partitionColumns, options = options).write(mode, Dataset.ofRows(sparkSession, query)) Seq.empty[Row] override def simpleString: String = { val redacted = Utils.redact(SparkEnv.get.conf, options.toSeq).toMap s"SaveIntoDataSourceCommand ${provider}, ${partitionColumns}, ${redacted}, ${mode}" }DATASOURCE.WRITErun方法里主要执行DataSource.write方法def write(mode: SaveMode, data: DataFrame): Unit = { if (data.schema.map(_.dataType).exists(_.isInstanceOf[CalendarIntervalType])) { throw new AnalysisException("Cannot save interval data type into external storage.") providingClass.newInstance() match { case dataSource: CreatableRelationProvider => 
dataSource.createRelation(sparkSession.sqlContext, mode, caseInsensitiveOptions, data) case format: FileFormat => writeInFileFormat(format, mode, data) case _ => sys.error(s"${providingClass.getCanonicalName} does not allow create table as select.") }这里会执行到case dataSource: CreatableRelationProvider => dataSource.createRelation(sparkSession.sqlContext, mode, caseInsensitiveOptions, data)CREATABLERELATIONPROVIDER.CREATERELATION然后看一下createRelation方法,这里的CreatableRelationProvider是一个接口,这里实际上执行其子类JdbcRelationProvider的createRelationtrait CreatableRelationProvider { * Saves a DataFrame to a destination (using data source-specific parameters) * @param sqlContext SQLContext * @param mode specifies what happens when the destination already exists * @param parameters data source-specific parameters * @param data DataFrame to save (i.e. the rows after executing the query) * @return Relation with a known schema * @since 1.3.0 def createRelation( sqlContext: SQLContext, mode: SaveMode, parameters: Map[String, String], data: DataFrame): BaseRelation }JDBCRELATIONPROVIDER.CREATERELATION下面是最后真正要执行的方法,首先判断表是否存在,我们这个场景下表示存在的,然后进行mode的模式匹配,这里为Overwrite,然后进入到第一个if语句,我们这里在上面的程序里设置了truncate为true,所以会满足条件,然后先执行truncateTable方法进行删除表数据但不会删除表结构,再执行saveTable方法将df的数据保存到表中实现覆盖写入。override def createRelation( sqlContext: SQLContext, mode: SaveMode, parameters: Map[String, String], df: DataFrame): BaseRelation = { val options = new JDBCOptions(parameters) val isCaseSensitive = sqlContext.conf.caseSensitiveAnalysis val conn = JdbcUtils.createConnectionFactory(options)() try { val tableExists = JdbcUtils.tableExists(conn, options) if (tableExists) {//首先判断表是否存在 mode match { case SaveMode.Overwrite => //如果mode为Overwrite if (options.isTruncate && isCascadingTruncateTable(options.url) == Some(false)) { //判断truncate是否为true // In this case, we should truncate table and then load. truncateTable(conn, options.table) //先truncateTable val tableSchema = JdbcUtils.getSchemaOption(conn, options) saveTable(df, tableSchema, isCaseSensitive, options) //再保存数据 } else { // Otherwise, do not truncate the table, instead drop and recreate it dropTable(conn, options.table) createTable(conn, df, options) saveTable(df, Some(df.schema), isCaseSensitive, options) case SaveMode.Append => val tableSchema = JdbcUtils.getSchemaOption(conn, options) saveTable(df, tableSchema, isCaseSensitive, options) case SaveMode.ErrorIfExists => throw new AnalysisException( s"Table or view '${options.table}' already exists. SaveMode: ErrorIfExists.") case SaveMode.Ignore => // With `SaveMode.Ignore` mode, if table already exists, the save operation is expected // to not save the contents of the DataFrame and to not change the existing data. // Therefore, it is okay to do nothing here and then just return the relation below. } else { createTable(conn, df, options) saveTable(df, Some(df.schema), isCaseSensitive, options) } finally { conn.close() createRelation(sqlContext, parameters) }TRUNCATETABLE最后看一下truncateTable实现原理,这里其实是执行的TRUNCATE TABLE命令 * Truncates a table from the JDBC database. def truncateTable(conn: Connection, table: String): Unit = { val statement = conn.createStatement try { statement.executeUpdate(s"TRUNCATE TABLE $table") } finally { statement.close() }总结本来主要讲了如何实现Spark在不删除表结构的情况下进行overwrite覆盖写入mysql表,并跟踪一下源码,了解其实现原理。代码层面主要是加了一个参数.option(“truncate”, true),源码层面主要逻辑是先判断表是否存在,如果表存在,然后判断truncate是否为true,如果为true,则不drop表,而是执行TRUNCATE TABLE表里,清空表数据然后再写表,这样就实现了我们的需求
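A quick way to confirm the behaviour is to read the table schema back through JDBC after the overwrite: with truncate=true the column types defined in the original CREATE TABLE should still be there, whereas a drop-and-recreate would leave Spark-inferred types. A small sketch, reusing the connection settings from the example above as placeholders (mysql-connector-java must be on the classpath):

import java.util.Properties
import org.apache.spark.sql.SparkSession

object CheckMysqlSchemaAfterOverwrite {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().master("local[*]").appName("CheckMysqlSchemaAfterOverwrite").getOrCreate()

    val url = "jdbc:mysql://192.168.44.128:3306/test?useUnicode=true&characterEncoding=utf-8"
    val prop = new Properties()
    prop.put("user", "root")
    prop.put("password", "Root-123456")

    // column names and types as seen through JDBC; with truncate=true they should match the hand-written DDL
    spark.read.jdbc(url, "trafficbase_cljbxx", prop).printSchema()
    spark.stop()
  }
}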
前言记录利用Spark 创建Hive表的几种压缩格式。背景本人在测试hive表的parquet和orc文件对应的几种压缩算法性能对比。利用Spark thrift server通过sql语句创建表,对比 parquet对应的gzip、snappy,orc对应的 snappy、zlib的压缩率以及查询性能。parquet建表语句:在最后加 STORED AS PARQUET parquet默认的压缩为snappy,如果想改成其他压缩格式如gzip,可在建表语句最后加 STORED AS PARQUET TBLPROPERTIES('parquet.compression'='GZIP') 验证是否有效,查看hive表对应路径下的文件名snappy的文件后缀为:.snappy.parquetgzip的文件后缀为: .gz.parquet还可通过修改spark参数--conf spark.sql.parquet.compression.codec=gzip spark sql源码的定义:val PARQUET_COMPRESSION = buildConf("spark.sql.parquet.compression.codec") .doc("Sets the compression codec used when writing Parquet files. If either `compression` or " + "`parquet.compression` is specified in the table-specific options/properties, the " + "precedence would be `compression`, `parquet.compression`, " + "`spark.sql.parquet.compression.codec`. Acceptable values include: none, uncompressed, " + "snappy, gzip, lzo, brotli, lz4, zstd.") .version("1.1.1") .stringConf .transform(_.toLowerCase(Locale.ROOT)) .checkValues(Set("none", "uncompressed", "snappy", "gzip", "lzo", "lz4", "brotli", "zstd")) .createWithDefault("snappy")可以看出parquet的默认压缩为snappy,可选压缩格式为:”none”, “uncompressed”, “snappy”, “gzip”, “lzo”, “lz4”, “brotli”, “zstd”orc建表语句:在最后加 STORED AS ORC ORC默认的压缩也是snappy,如果想改成其他压缩格式如zlib,可在建表语句最后加STORED AS ORC TBLPROPERTIES('orc.compress'='zlib') spark 参数修改: --conf spark.sql.orc.compression.codec=zlib spark sql源码的定义:val ORC_COMPRESSION = buildConf("spark.sql.orc.compression.codec") .doc("Sets the compression codec used when writing ORC files. If either `compression` or " + "`orc.compress` is specified in the table-specific options/properties, the precedence " + "would be `compression`, `orc.compress`, `spark.sql.orc.compression.codec`." + "Acceptable values include: none, uncompressed, snappy, zlib, lzo.") .version("2.3.0") .stringConf .transform(_.toLowerCase(Locale.ROOT)) .checkValues(Set("none", "uncompressed", "snappy", "zlib", "lzo")) .createWithDefault("snappy")可选压缩格式为:”none”, “uncompressed”, “snappy”, “zlib”, “lzo”注意parquet的key为parquet.compression,orc的key为orc.compress不要弄错,一开始我写成了orc.compression结果不生效,以为不能用sql设置呢从上面的源码里“the precedence would be compression, orc.compress, spark.sql.orc.compression.codec.”可以看出parquet.compression或者orc.compress的优先级要比设置spark参数高,至于最高的compression怎么用我还不清楚json默认不压缩可用压缩格式:none, bzip2, gzip, lz4,snappy ,deflatetext默认不压缩可用压缩格式:none, bzip2, gzip, lz4, snappy , deflate参考SparkSQL的几种输出格式及压缩方式
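Putting the two checks together, here is a sketch that creates a gzip-compressed parquet table through SQL and then lists the files under its location to look for the .gz.parquet suffix; the table name is made up and the location lookup relies on the Location row of DESC FORMATTED.

import org.apache.hadoop.fs.Path
import org.apache.spark.sql.SparkSession

object ParquetCompressionCheck {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("ParquetCompressionCheck").enableHiveSupport().getOrCreate()

    spark.sql(
      """create table if not exists test_parquet_gzip (id int, name string)
        |stored as parquet
        |tblproperties('parquet.compression'='GZIP')""".stripMargin)
    spark.sql("insert into test_parquet_gzip values (1, 'a'), (2, 'b')")

    // expect file names ending in .gz.parquet if the table property took effect
    val location = spark.sql("desc formatted test_parquet_gzip")
      .filter("col_name = 'Location'").head().getString(1)
    val dir = new Path(location)
    val fs = dir.getFileSystem(spark.sparkContext.hadoopConfiguration)
    fs.listStatus(dir).foreach(s => println(s.getPath.getName))

    spark.stop()
  }
}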
前言总结Spark Thrift Server、Hive Server以及如何用Java连接启动hive serverhiveserver2 hive --service hiveserver2 默认端口是1000spark thrift server修改hive.server2.transport.mode为http(默认值为binary(TCP),可选值HTTP)将hive-site.xml拷贝到spark conf 目录下,并添加<property> <name>hive.server2.transport.mode</name> <value>http</value> </property>启动命令($SPARK_HOME/sbin)start-thriftserver.sh --端口默认值为10001 start-thriftserver.sh --hiveconf hive.server2.thrift.http.port=10002 --参数修改端口号 spark-submit --master yarn --deploy-mode client --executor-memory 2G --num-executors 25 --executor-cores 2 --driver-memory 16G --driver-cores 2 --class org.apache.spark.sql.hive.thriftserver.HiveThriftServer2 --name Thrift JDBC/ODBC Server spark-internal --hiveconf hive.server2.thrift.http.port=10003 --start-thriftserver.sh实际上也是调用的类org.apache.spark.sql.hive.thriftserver.HiveThriftServer2当hive.server2.transport.mode为http时,默认端口为10001,通过–hiveconf hive.server2.thrift.http.port修改端口号,当然hive.server2.transport.mode为默认值TCP 时,默认端口为10000,通过–hiveconf hive.server2.thrift.port修改端口号, 也就是默认端口号和是否为hive或者spark无关,这里为啥spark不选默认值,因为当为默认值时,虽然也能正常使用,但是spark server日志里会有异常,原因未知,待研究Java 代码pom 依赖<dependencies> <dependency> <groupId>org.apache.hive</groupId> <artifactId>hive-jdbc</artifactId> <version>2.3.7</version> </dependency> <dependency> <groupId>org.apache.hadoop</groupId> <artifactId>hadoop-common</artifactId> <version>2.7.4</version> </dependency> </dependencies>更新:2021-05-07这里提一下依赖的版本问题,上面写的版本是我自己搭建的开源的hive和hadoop,所以版本可以很清楚的知道是多少,并且和spark版本是适配的。后来在连接hdp对应的hive和spark时,在版本对应关系上出现了问题,这里总结一下。首先提一下在连接Spark Thrift Server时,对版本适配要求比较高,而hive server对依赖的版本适配较低。总结一下hdp如何对应版本,在ambari界面添加服务即可看到各个组件包括hive对应的版本信息,或者在命令行看一下jar包,比如hive-jdbc-3.1.0.3.1.0.0-78.jar,则代表hive本本为3.1.0,后面的是hdp的版本号,这样配置依赖连接Hive Server是没有问题的,而在连接Spark Server时发现了问题,报了版本不匹配的异常,比如spark/jars下的jar包为hive-jdbc-1.21.2.3.1.0.0-78.jar,那么hive-jdbc的版本应该为1.21.2可实际上没有这个版本的依赖,且即使用上面的3.1.0版本去连接Spark Sever一样版本不匹配,那么这种情况下该如何确定hive-jdbc的版本的?我用的是下面的方法:首先确认Spark的版本为2.4.,然后我去github上查找对应的版本的spark源码的依赖,发现hive的版本号为1.2.1,hadoop的版本号为2.6.5,那么最终hive-jdbc:1.2.1,hadoop-common:2.6.5,这样配置依赖就可以连接hdp下的spark thrift server了,且该版本的hive-jdbc一样可以连接hive server即上面说的hive server对依赖的版本适配较低。最后提一下,当hive-jdbc版本为3.1.0即3..*时,不用再另外添加hadoop-common的依赖即可连接hive server,因为hive-jdbc的包里已经包含了对应的依赖,即使同时添加也会依赖冲突的。附:版本不匹配时的异常信息:14:32:23.971 [main] ERROR org.apache.hive.jdbc.HiveConnection - Error opening session org.apache.thrift.TApplicationException: Required field 'client_protocol' is unset! 
Struct:TOpenSessionReq(client_protocol:null, configuration:{set:hiveconf:hive.server2.thrift.resultset.default.fetch.size=1000, use:database=sjtt}) at org.apache.thrift.TApplicationException.read(TApplicationException.java:111) ~[libthrift-0.9.3.jar:0.9.3] at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:79) ~[libthrift-0.9.3.jar:0.9.3] at org.apache.hive.service.rpc.thrift.TCLIService$Client.recv_OpenSession(TCLIService.java:168) ~[hive-service-rpc-2.3.7.jar:2.3.7] at org.apache.hive.service.rpc.thrift.TCLIService$Client.OpenSession(TCLIService.java:155) ~[hive-service-rpc-2.3.7.jar:2.3.7] at org.apache.hive.jdbc.HiveConnection.openSession(HiveConnection.java:680) [hive-jdbc-2.3.7.jar:2.3.7] at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:200) [hive-jdbc-2.3.7.jar:2.3.7] at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:107) [hive-jdbc-2.3.7.jar:2.3.7] at java.sql.DriverManager.getConnection(DriverManager.java:664) [?:1.8.0_161] at java.sql.DriverManager.getConnection(DriverManager.java:270) [?:1.8.0_161] at com.dkl.blog.SparkThriftServerDemoWithKerberos.jdbcDemo(SparkThriftServerDemoWithKerberos.java:41) [classes/:?] at com.dkl.blog.SparkThriftServerDemoWithKerberos.main(SparkThriftServerDemoWithKerberos.java:35) [classes/:?]代码package com.dkl.blog; import java.sql.*; * Created by dongkelun on 2021/2/5 17:07 public class SparkThriftServerDemo { private static String HIVE_JDBC_URL = "jdbc:hive2://192.168.44.128:10000/default"; private static final String SPARK_JDBC_URL = "jdbc:hive2://192.168.44.128:10001/default?hive.server2.transport.mode=http;hive.server2.thrift.http.path=cliservice"; public static void main(String[] args) throws SQLException { //----------------------------------connect hive----------------------------------// System.out.println("select from hive"); jdbcDemo(HIVE_JDBC_URL); //------------------------------connect spark thrift server-----------------------// System.out.println("select from spark thrift server"); jdbcDemo(SPARK_JDBC_URL); public static void jdbcDemo(String jdbc_url) throws SQLException { Connection connection = null; try { connection = DriverManager.getConnection(jdbc_url); selectTable(connection); } catch (SQLException e) { e.printStackTrace(); } finally { connection.close(); public static void selectTable(Connection connection) { String sql = "select * from test limit 10"; Statement stmt = null; ResultSet rs = null; try { stmt = connection.createStatement(); rs = stmt.executeQuery(sql); System.out.println("====================================="); while (rs.next()) { System.out.println(rs.getString(1) + "," + rs.getString(2)); System.out.println("====================================="); } catch (SQLException e) { e.printStackTrace(); } finally { close(stmt); close(rs); * 关闭Statement * @param stmt private static void close(Statement stmt) { if (stmt != null) { try { stmt.close(); } catch (SQLException e) { e.printStackTrace(); * 关闭ResultSet * @param rs private static void close(ResultSet rs) { if (rs != null) { try { rs.close(); } catch (SQLException e) { e.printStackTrace(); }代码已上传到 github
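For reference, the same two connections can also be made from Scala with plain java.sql and the hive-jdbc dependency discussed above; this is a sketch I added, using the same placeholder IP addresses and test table as the Java demo:

```scala
import java.sql.DriverManager

object SparkThriftServerScalaDemo {
  // binary (TCP) mode, default port 10000
  private val HIVE_JDBC_URL = "jdbc:hive2://192.168.44.128:10000/default"
  // http mode, default port 10001, needs the transport.mode/http.path parameters
  private val SPARK_JDBC_URL =
    "jdbc:hive2://192.168.44.128:10001/default?hive.server2.transport.mode=http;hive.server2.thrift.http.path=cliservice"

  def main(args: Array[String]): Unit = {
    Seq(HIVE_JDBC_URL, SPARK_JDBC_URL).foreach { url =>
      val connection = DriverManager.getConnection(url)
      try {
        val rs = connection.createStatement().executeQuery("select * from test limit 10")
        while (rs.next()) {
          println(rs.getString(1) + "," + rs.getString(2))
        }
        rs.close()
      } finally {
        connection.close()
      }
    }
  }
}
```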
前言这个异常发生在Spark读取Windows本地CSV然后show,当然一般情况下不会发生,还有一个条件,项目里加了hbase-client和hbase-mapreduce,具体是哪一个依赖或者两个依赖合起来造成的影响我没有去细究,主要记录解决方法网上也有其他很多情况可能出现这个异常详细异常信息Exception in thread "main" java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z at org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Native Method) at org.apache.hadoop.io.nativeio.NativeIO$Windows.access(NativeIO.java:640) at org.apache.hadoop.fs.FileUtil.canRead(FileUtil.java:1223) at org.apache.hadoop.fs.FileUtil.list(FileUtil.java:1428) at org.apache.hadoop.fs.RawLocalFileSystem.listStatus(RawLocalFileSystem.java:468) at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1868) at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1910) at org.apache.hadoop.fs.ChecksumFileSystem.listStatus(ChecksumFileSystem.java:678) at org.apache.spark.sql.execution.datasources.InMemoryFileIndex$.org$apache$spark$sql$execution$datasources$InMemoryFileIndex$$listLeafFiles(InMemoryFileIndex.scala:281) at org.apache.spark.sql.execution.datasources.InMemoryFileIndex$$anonfun$bulkListLeafFiles$1.apply(InMemoryFileIndex.scala:177) at org.apache.spark.sql.execution.datasources.InMemoryFileIndex$$anonfun$bulkListLeafFiles$1.apply(InMemoryFileIndex.scala:176) at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234) at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234) at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59) at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48) at scala.collection.TraversableLike$class.map(TraversableLike.scala:234) at scala.collection.AbstractTraversable.map(Traversable.scala:104) at org.apache.spark.sql.execution.datasources.InMemoryFileIndex$.bulkListLeafFiles(InMemoryFileIndex.scala:176) at org.apache.spark.sql.execution.datasources.InMemoryFileIndex.listLeafFiles(InMemoryFileIndex.scala:127) at org.apache.spark.sql.execution.datasources.InMemoryFileIndex.refresh0(InMemoryFileIndex.scala:91) at org.apache.spark.sql.execution.datasources.InMemoryFileIndex.<init>(InMemoryFileIndex.scala:67) at org.apache.spark.sql.execution.datasources.DataSource.org$apache$spark$sql$execution$datasources$DataSource$$createInMemoryFileIndex(DataSource.scala:533) at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:371) at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:223) at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:211) at org.apache.spark.sql.DataFrameReader.csv(DataFrameReader.scala:619) at org.apache.spark.sql.DataFrameReader.csv(DataFrameReader.scala:468) at com.inspur.demo.spark.JavaCSV2ES.main(JavaCSV2ES.java:46)解决方法先讲最终有效的解决方法:在https://github.com/steveloughran/winutils下载对应版本的hadoop.dll文件放到C:\Windows\System32下即可解决更新:2021-04-27因上面github地址对应的hadoop版本不是很全,若想下载hadoop3.1或3.2版本等其他版本,可以从以下地址下载https://github.com/dongkelun/winutils网上其他方法 :1、本地安装hadoop并将hadoop.dll 重启电脑 https://my.oschina.net/u/4307631/blog/4012671亲测无效,不知道其他情况是否有效,反正我这种情况无效2、改源码 https://www.cnblogs.com/kelly-one/p/10514371.html嫌麻烦没有测试,因为这种方法每个项目都要新建一个类,太麻烦了,所以不到最后不想尝试放到C:\Windows\System32的思路参考:http://www.bubuko.com/infodetail-1092966.html
前言先说解决办法,提交时除了添加spark-sql-kafka和kafka-clients jar包外,还要添加spark-token-provider-kafka和commons-pool jar包,具体为spark-token-provider-kafka-0-10_2.12-3.0.1.jar和commons-pool2-2.6.2.jar注意:Spark 3 版本和Spark 2有些不一样,提交Structured Streaming需要注意Kafka client 版本需要>=0.11.0.0,这个在spark官方文档里有说明:Please note that to use the headers functionality, your Kafka client version should be version 0.11.0.0 or up.版本Spark 3.0.1Scala 2.12.2kafka-clients 2.6.0异常及解决java.lang.NoClassDefFoundError: org/apache/spark/kafka010/KafkaConfigUpdater异常详细信息java.lang.NoClassDefFoundError: org/apache/spark/kafka010/KafkaConfigUpdater at org.apache.spark.sql.kafka010.KafkaSourceProvider$.kafkaParamsForDriver(KafkaSourceProvider.scala:580) at org.apache.spark.sql.kafka010.KafkaSourceProvider$KafkaScan.toMicroBatchStream(KafkaSourceProvider.scala:466) at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$1.$anonfun$applyOrElse$3(MicroBatchExecution.scala:102) at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$1$$Lambda$1408/1011508938.apply(Unknown Source) at scala.collection.mutable.HashMap.getOrElseUpdate(HashMap.scala:86) at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$1.applyOrElse(MicroBatchExecution.scala:95) at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$1.applyOrElse(MicroBatchExecution.scala:81) at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDown$1(TreeNode.scala:309) at org.apache.spark.sql.catalyst.trees.TreeNode$$Lambda$1019/1992844647.apply(Unknown Source) at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:72) at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:309) at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDown(LogicalPlan.scala:29) at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDown(AnalysisHelper.scala:149) at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDown$(AnalysisHelper.scala:147) at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDown(LogicalPlan.scala:29) at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDown(LogicalPlan.scala:29) at org.apache.spark.sql.catalyst.trees.TreeNode.transform(TreeNode.scala:298) at org.apache.spark.sql.execution.streaming.MicroBatchExecution.logicalPlan$lzycompute(MicroBatchExecution.scala:81) at org.apache.spark.sql.execution.streaming.MicroBatchExecution.logicalPlan(MicroBatchExecution.scala:61) at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:322) at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:245) Caused by: java.lang.ClassNotFoundException: org.apache.spark.kafka010.KafkaConfigUpdater at java.net.URLClassLoader.findClass(URLClassLoader.java:381) at java.lang.ClassLoader.loadClass(ClassLoader.java:424) at java.lang.ClassLoader.loadClass(ClassLoader.java:357) ... 
21 more解决方案添加 spark-token-provider-kafka-0-10_2.12-3.0.1.jar解决思路:KafkaSourceProvider查看KafkaSourceProvider源码在580行找到KafkaConfigUpdater点进去,看看属于哪个jar包java.lang.NoClassDefFoundError: org/apache/commons/pool2/impl/GenericKeyedObjectPoolConfig异常详细信息java.lang.NoClassDefFoundError: org/apache/commons/pool2/impl/GenericKeyedObjectPoolConfig at org.apache.spark.sql.kafka010.consumer.KafkaDataConsumer$.<init>(KafkaDataConsumer.scala:606) at org.apache.spark.sql.kafka010.consumer.KafkaDataConsumer$.<clinit>(KafkaDataConsumer.scala) at org.apache.spark.sql.kafka010.KafkaBatchPartitionReader.<init>(KafkaBatchPartitionReader.scala:52) at org.apache.spark.sql.kafka010.KafkaBatchReaderFactory$.createReader(KafkaBatchPartitionReader.scala:40) at org.apache.spark.sql.execution.datasources.v2.DataSourceRDD.compute(DataSourceRDD.scala:60) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:349) at org.apache.spark.rdd.RDD.iterator(RDD.scala:313) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:349) at org.apache.spark.rdd.RDD.iterator(RDD.scala:313) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:349) at org.apache.spark.rdd.RDD.iterator(RDD.scala:313) at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90) at org.apache.spark.scheduler.Task.run(Task.scala:127) at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:446) at org.apache.spark.executor.Executor$TaskRunner$$Lambda$2110/264027921.apply(Unknown Source) at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:449) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Caused by: java.lang.ClassNotFoundException: org.apache.commons.pool2.impl.GenericKeyedObjectPoolConfig at java.net.URLClassLoader.findClass(URLClassLoader.java:381) at java.lang.ClassLoader.loadClass(ClassLoader.java:424) at java.lang.ClassLoader.loadClass(ClassLoader.java:357) ... 
22 more 20/10/09 15:53:48 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, master, executor driver): java.lang.NoClassDefFoundError: org/apache/commons/pool2/impl/GenericKeyedObjectPoolConfig at org.apache.spark.sql.kafka010.consumer.KafkaDataConsumer$.<init>(KafkaDataConsumer.scala:606) at org.apache.spark.sql.kafka010.consumer.KafkaDataConsumer$.<clinit>(KafkaDataConsumer.scala) at org.apache.spark.sql.kafka010.KafkaBatchPartitionReader.<init>(KafkaBatchPartitionReader.scala:52) at org.apache.spark.sql.kafka010.KafkaBatchReaderFactory$.createReader(KafkaBatchPartitionReader.scala:40) at org.apache.spark.sql.execution.datasources.v2.DataSourceRDD.compute(DataSourceRDD.scala:60) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:349) at org.apache.spark.rdd.RDD.iterator(RDD.scala:313) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:349) at org.apache.spark.rdd.RDD.iterator(RDD.scala:313) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:349) at org.apache.spark.rdd.RDD.iterator(RDD.scala:313) at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90) at org.apache.spark.scheduler.Task.run(Task.scala:127) at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:446) at org.apache.spark.executor.Executor$TaskRunner$$Lambda$2110/264027921.apply(Unknown Source) at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:449) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Caused by: java.lang.ClassNotFoundException: org.apache.commons.pool2.impl.GenericKeyedObjectPoolConfig at java.net.URLClassLoader.findClass(URLClassLoader.java:381) at java.lang.ClassLoader.loadClass(ClassLoader.java:424) at java.lang.ClassLoader.loadClass(ClassLoader.java:357) ... 22 more解决方案添加 commons-pool2-2.6.2.jar,思路一样最终添加的jar包–jars spark-sql-kafka-0-10_2.12-3.0.1.jar,kafka-clients-2.6.0.jar,spark-token-provider-kafka-0-10_2.12-3.0.1.jar,commons-pool2-2.6.2.jar相关阅读spark-submit提交Spark Streaming+Kafka程序Spark 异常总结及解决办法
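For context, this is the kind of Structured Streaming job that exercises the classes above; a minimal sketch I added, with placeholder bootstrap servers and topic name:

```scala
import org.apache.spark.sql.SparkSession

object KafkaStructuredStreamingExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("KafkaStructuredStreamingExample").getOrCreate()
    import spark.implicits._

    val df = spark.readStream
      .format("kafka")                                      // resolved by spark-sql-kafka-0-10
      .option("kafka.bootstrap.servers", "localhost:9092")  // placeholder
      .option("subscribe", "test_topic")                    // placeholder
      .load()

    val query = df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
      .as[(String, String)]
      .writeStream
      .format("console")
      .start()

    query.awaitTermination()
  }
}
```

An alternative to listing the four jars by hand is --packages org.apache.spark:spark-sql-kafka-0-10_2.12:3.0.1, which lets spark-submit resolve the transitive dependencies (spark-token-provider-kafka-0-10, kafka-clients, commons-pool2) itself, provided the machine can reach a Maven repository.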
前言记录一下Java API 连接 Hbase的代码,并记录遇到的异常及解决办法代码首先pom.xml里添加hbase-client依赖:<dependency> <groupId>org.apache.hbase</groupId> <artifactId>hbase-client</artifactId> <version>1.4.13</version> </dependency>然后将hbase-site.xml,core-site.xml复制到本地(如果实在本地运行的话)package com.dkl.blog.hbase; import java.io.IOException; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.Path; import org.apache.hadoop.hbase.HBaseConfiguration; import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.Admin; import org.apache.hadoop.hbase.client.Connection; import org.apache.hadoop.hbase.client.ConnectionFactory; import org.apache.hadoop.hbase.io.compress.Compression.Algorithm; *Java API 连接 HBASE public class JavaHbaseExample { private static final String TABLE_NAME = "MY_TABLE_NAME_TOO"; private static final String CF_DEFAULT = "DEFAULT_COLUMN_FAMILY"; public static void createOrOverwrite(Admin admin, HTableDescriptor table) throws IOException { if (admin.tableExists(table.getTableName())) {//如果表已存在 if (admin.isTableEnabled(table.getTableName())) {//如果表状态为Enabled admin.disableTable(table.getTableName()); admin.deleteTable(table.getTableName()); admin.createTable(table); public static void createSchemaTables(Configuration config) throws IOException { try (Connection connection = ConnectionFactory.createConnection(config); Admin admin = connection.getAdmin()) { HTableDescriptor table = new HTableDescriptor(TableName.valueOf(TABLE_NAME)); table.addFamily(new HColumnDescriptor(CF_DEFAULT).setCompressionType(Algorithm.NONE)); System.out.print("Creating table. "); createOrOverwrite(admin, table); System.out.println(" Done."); public static void modifySchema(Configuration config) throws IOException { try (Connection connection = ConnectionFactory.createConnection(config); Admin admin = connection.getAdmin()) { TableName tableName = TableName.valueOf(TABLE_NAME); if (!admin.tableExists(tableName)) { System.out.println("Table does not exist."); System.exit(-1); HTableDescriptor table = admin.getTableDescriptor(tableName); // Update existing table HColumnDescriptor newColumn = new HColumnDescriptor("NEWCF"); newColumn.setCompactionCompressionType(Algorithm.GZ); newColumn.setMaxVersions(HConstants.ALL_VERSIONS); //admin.addColumn(tableName, newColumn); //官方文档代码,这里我理解的是应该要添加列簇,但是该方法不生效, // 导致抛出异常org.apache.hadoop.hbase.InvalidFamilyOperationException: // Family 'DEFAULT_COLUMN_FAMILY' is the only column family in the table, so it cannot be deleted, // 用我下面这行代码 table.addFamily(newColumn); // Update existing column family HColumnDescriptor existingColumn = new HColumnDescriptor(CF_DEFAULT); existingColumn.setCompactionCompressionType(Algorithm.GZ); existingColumn.setMaxVersions(HConstants.ALL_VERSIONS); table.modifyFamily(existingColumn); admin.modifyTable(tableName, table); // Disable an existing table admin.disableTable(tableName); // Delete an existing column family admin.deleteColumn(tableName, CF_DEFAULT.getBytes("UTF-8")); // Delete a table (Need to be disabled first) admin.deleteTable(tableName); public static void main(String... 
args) throws IOException { Configuration config = HBaseConfiguration.create(); config.set("hbase.zookeeper.quorum", "192.168.44.128"); //hbase 服务地址,如果在hbase-site.xml里有配置,可以不用这行代码 // config.set("hbase.zookeeper.property.clientPort","2181"); //默认2181端口 // System.out.println(System.getenv("HBASE_CONF_DIR")); //Add any necessary configuration files (hbase-site.xml, core-site.xml) // config.addResource(new Path(System.getenv("HBASE_CONF_DIR"), "hbase-site.xml")); //官方文档代码,需要配置环境变量 config.addResource(new Path("D:\\data\\conf\\hbase", "hbase-site.xml")); // config.addResource(new Path(System.getenv("HADOOP_CONF_DIR"), "core-site.xml"));//官方文档代码,需要配置环境变量 config.addResource(new Path("D:\\data\\conf\\hadoop", "core-site.xml")); createSchemaTables(config); modifySchema(config); }异常及解决java.net.ConnectException: Connection refused: no further informationjava.net.ConnectException: Connection refused: no further information 解决方法:1、添加里添加config.set("hbase.zookeeper.quorum", "192.168.44.128"); //官网代码里没有这行 2、程序里用到的hbase-site.xml里添加 (如果hbase-site.xml没有下面的配置,我单机配置时没有添加)<property> <name>hbase.zookeeper.quorum</name> <value>192.168.44.128</value> </property>Call to localhost/127.0.0.1:16201 failed on connection exceptionjava.net.SocketTimeoutException: callTimeout=60000, callDuration=62839: Call to localhost/127.0.0.1:16201 failed on connection exception没有设置host 和 hostname验证:netstat -nautlp|grep 16201 tcp6 0 0 127.0.0.1:16201 :::* LISTEN 69041/java tcp6 0 0 127.0.0.1:34889 127.0.0.1:16201 ESTABLISHED 68947/java tcp6 0 0 127.0.0.1:16201 127.0.0.1:34889 ESTABLISHED 69041/java tcp6 0 0 127.0.0.1:34885 127.0.0.1:16201 ESTABLISHED 68947/java tcp6 0 0 127.0.0.1:16201 127.0.0.1:34885 ESTABLISHED 69041/java修改:vim /etc/hosts 192.168.44.128 master vim /etc/hostname master然后重启hbase和程序,验证netstat -nautlp|grep 16201 tcp6 0 0 192.168.44.128:16201 :::* LISTEN 66641/java tcp6 0 0 192.168.44.128:16201 192.168.44.128:40589 ESTABLISHED 66641/java tcp6 0 0 192.168.44.128:40589 192.168.44.128:16201 ESTABLISHED 66546/javajava.net.SocketTimeoutException: callTimeout=60000, callDuration=79924: can not resolve master,16201,1597395570081这个是本地运行程序且没有配置本地host时产生的,解决方法为编辑C:\Windows\System32\drivers\etc\hosts,添加192.168.44.128 master
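The demo above only covers table administration. As a complement, here is a small read/write sketch I added using the same hbase-client dependency; it assumes a table named test with a column family cf already exists (for example the one created in the hbase shell walkthrough of the installation post), and uses the same placeholder ZooKeeper address as above.

```scala
import org.apache.hadoop.hbase.{HBaseConfiguration, TableName}
import org.apache.hadoop.hbase.client.{ConnectionFactory, Get, Put}
import org.apache.hadoop.hbase.util.Bytes

object HBaseReadWriteExample {
  def main(args: Array[String]): Unit = {
    val config = HBaseConfiguration.create()
    config.set("hbase.zookeeper.quorum", "192.168.44.128") // same ZooKeeper address as above

    val connection = ConnectionFactory.createConnection(config)
    try {
      val table = connection.getTable(TableName.valueOf("test")) // assumed existing table with family 'cf'

      // write one cell: rowkey "row1", column cf:a
      val put = new Put(Bytes.toBytes("row1"))
      put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("a"), Bytes.toBytes("value1"))
      table.put(put)

      // read it back
      val result = table.get(new Get(Bytes.toBytes("row1")))
      val value = Bytes.toString(result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("a")))
      println(s"cf:a = $value")

      table.close()
    } finally {
      connection.close()
    }
  }
}
```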
前言因后续要学习研究hbase,那就先从搭建hbase开始吧。先搭建一个单机版的,方便自己学习使用。安装配置hadoop参考我的另一篇文章:centos7 hadoop 单机模式安装配置注:这里的JDK为1.8,版本支持如图下载hbase下载地址:http://mirror.bit.edu.cn/apache/hbase/ 我下载的是hbase-1.4.13-bin.tar.gz (官网下载地址 太慢了)关于hbase与hadoop版本对应关系参考http://hbase.apache.org/book.html#configuration解压tar -zxvf hbase-1.4.13-bin.tar.gz -C /opt 配置环境变量vim ~/.bashrc export HBASE_HOME=/opt/hbase-1.4.13 export PATH=$PATH:$HBASE_HOME/bin source ~/.bashrc修改配置文件cd /opt/hbase-1.4.13/conf/ 修改hbase-env.shvim hbase-env.sh export JAVA_HOME=/opt/jdk1.8.0_45/ export HBASE_MANAGES_ZK=true修改hbase-site.xmlvim hbase-site.xml <configuration> <property> <name>hbase.rootdir</name> <value>hdfs://192.168.44.128:8888/hbase</value> </property> <property> <name>hbase.cluster.distributed</name> <value>true</value> </property> <property> <name>hbase.tmp.dir</name> <value>/home/hadoop/data/hbase/tmp</value> </property> <property> <name>hbase.zookeeper.property.dataDir</name> <value>/home/hadoop/data/hbase/zookeeper</value> </property> </configuration>其中hbase.rootdir要和配置hadoop时core-site.xml 里的fs.defaultFS(或fs.default.name)配置对应启动先启动hadoop参考:centos7 hadoop 单机模式安装配置$HADOOP_HOME/sbin/start-dfs.sh $HADOOP_HOME/sbin/start-yarn.sh jps确认一下hadoop的每个进程都启动了启动hbase $HBASE_HOME/bin/start-hbase.sh jps 看一下HMaster、HRegionServer、HQuorumPeer都启动了,代表成功(单机伪分布式模式)hbase 简单操作连接HBASE hbase shell 查看帮助文档 创建表必须同时制定表名称和列簇名称hbase(main):004:0> create 'test', 'cf' 0 row(s) in 4.9150 seconds => Hbase::Table - test列出表信息hbase(main):005:0> list 'test' TABLE 1 row(s) in 0.0750 seconds => ["test"]可以用describe命令查看更详细的信息,包括默认配置hbase(main):006:0> describe 'test' Table test is ENABLED COLUMN FAMILIES DESCRIPTION {NAME => 'cf', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536 ', REPLICATION_SCOPE => '0'} 1 row(s) in 0.3090 seconds往表里添加数据hbase(main):007:0> put 'test', 'row1', 'cf:a', 'value1' 0 row(s) in 0.5670 seconds hbase(main):008:0> put 'test', 'row2', 'cf:b', 'value2' 0 row(s) in 0.0680 seconds hbase(main):009:0> put 'test', 'row3', 'cf:c', 'value3' 0 row(s) in 0.0430 seconds扫描表中所有数据hbase(main):010:0> scan 'test' ROW COLUMN+CELL row1 column=cf:a, timestamp=1597366721664, value=value1 row2 column=cf:b, timestamp=1597366759913, value=value2 row3 column=cf:c, timestamp=1597366769034, value=value3 3 row(s) in 0.1230 seconds获取单行数据hbase(main):011:0> get 'test', 'row1' COLUMN CELL cf:a timestamp=1597366721664, value=value1 1 row(s) in 0.0920 seconds禁用表如果要删除表或更改其设置,以及在某些其他情况下,则需要先使用disable命令禁用该表。您可以使用enable命令重新启用它。hbase(main):001:0> disable 'test' 0 row(s) in 4.4460 seconds hbase(main):002:0> enable 'test' 0 row(s) in 1.5420 seconds测试完 enable 命令,再次禁用表hbase(main):003:0> disable 'test' 0 row(s) in 2.4260 seconds删除表删除表之前,先禁用表hbase(main):004:0> drop 'test' 0 row(s) in 1.3600 seconds退出hbase shell hbase(main):005:0> exit 停止hbase$HBASE_HOME/bin/stop-hbase.sh 停止可能需要一段时间,运行完之后,使用jps命令确认一下hbase的相关进程已关闭。
Preface
A summary of how to have Spark overwrite only the matching partitions of a Hive partitioned table, instead of the whole table.

Version requirement
Spark 2.3 or later; I verified that 2.2 does not work.

Configuration

config("spark.sql.sources.partitionOverwriteMode","dynamic")

Notes
1. saveAsTable does not work for this: it overwrites the whole table. Use insertInto instead; see the code below.
2. Note that insertInto requires the DataFrame column order to match the column order of the Hive table, otherwise the data ends up in the wrong columns!

The code has been uploaded to github.

```scala
package com.dkl.blog.spark.hive

import org.apache.spark.sql.SparkSession

/**
 * Created by dongkelun on 2020/1/16 15:25
 * Blog post: Spark overwriting a Hive partitioned table, touching only the matching partitions
 * Requires Spark 2.3 or later
 */
object SparkHivePartitionOverwrite {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession
      .builder()
      .appName("SparkHivePartitionOverwrite")
      .master("local")
      .config("spark.sql.parquet.writeLegacyFormat", true)
      .config("spark.sql.sources.partitionOverwriteMode", "dynamic")
      .enableHiveSupport()
      .getOrCreate()

    import spark.sql

    val data = Array(("001", "张三", 21, "2018"), ("002", "李四", 18, "2017"))
    val df = spark.createDataFrame(data).toDF("id", "name", "age", "year")
    // create a temporary view
    df.createOrReplaceTempView("temp_table")

    val tableName = "test_partition"
    // switch to the target Hive database
    sql("use test")
    // 1. create the partitioned table and write the initial data
    df.write.mode("overwrite").partitionBy("year").saveAsTable(tableName)
    spark.table(tableName).show()

    val data1 = Array(("011", "Sam", 21, "2018"))
    val df1 = spark.createDataFrame(data1).toDF("id", "name", "age", "year")
    // df1.write.mode("overwrite").partitionBy("year").saveAsTable(tableName) // does not work: overwrites the whole table
    // df1.write.mode("overwrite").format("Hive").partitionBy("year").saveAsTable(tableName) // does not work: overwrites the whole table
    df1.write.mode("overwrite").insertInto(tableName)
    spark.table(tableName).show()

    spark.stop
  }
}
```

Results

```
+---+----+---+----+
| id|name|age|year|
+---+----+---+----+
|002| 李四| 18|2017|
|001| 张三| 21|2018|
+---+----+---+----+

+---+----+---+----+
| id|name|age|year|
+---+----+---+----+
|011| Sam| 21|2018|
+---+----+---+----+
```

Related reading
Spark操作Hive分区表
Hive分区表学习总结
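One small addition of my own: spark.sql.sources.partitionOverwriteMode is a runtime SQL conf, so, at least in my understanding, it can also be switched on for an existing session with spark.conf.set instead of in the builder. A self-contained sketch reusing the table from the program above:

```scala
import org.apache.spark.sql.SparkSession

object PartitionOverwriteRuntimeConf {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("PartitionOverwriteRuntimeConf")
      .master("local")
      .enableHiveSupport()
      .getOrCreate()
    import spark.implicits._

    // set the conf on the existing session instead of in the builder
    spark.conf.set("spark.sql.sources.partitionOverwriteMode", "dynamic")

    spark.sql("use test")
    val df2 = Seq(("012", "Tom", 22, "2017")).toDF("id", "name", "age", "year")
    // only the year=2017 partition is rewritten; the other partitions are left untouched
    df2.write.mode("overwrite").insertInto("test_partition")
    spark.table("test_partition").show()

    spark.stop()
  }
}
```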
前言学习总结Oracle、Spark、Hive SQL 正则匹配函数-函数OralceREGEXP_LIKESparkRLIKE、REGEXPHiveRLIKE、REGEXP建表OracleCREATE TABLE TEST_REGEXP ( ID VARCHAR2(100), NAME VARCHAR2(100) INSERT INTO TEST_REGEXP (ID, NAME) VALUES('001', '张三'); INSERT INTO TEST_REGEXP (ID, NAME) VALUES('002', '张三1'); INSERT INTO TEST_REGEXP (ID, NAME) VALUES('003', '2张三'); HiveCREATE TABLE TEST_REGEXP ( ID string, NAME string INSERT INTO TEST_REGEXP VALUES('001', '张三'); INSERT INTO TEST_REGEXP VALUES('002', '张三1'); INSERT INTO TEST_REGEXP VALUES('003', '2张三');SELECT * FROM TEST_REGEXP WHERE REGEXP_LIKE(NAME,'\d+'); SELECT * FROM TEST_REGEXP WHERE REGEXP_LIKE(NAME,'[0-9]+'); select * from TEST_REGEXP where name regexp '\\d+'; select * from TEST_REGEXP where name regexp '[0-9]+';示例查询NAME含有数字的记录OracleSELECT * FROM TEST_REGEXP WHERE REGEXP_LIKE(NAME,'\d+'); SELECT * FROM TEST_REGEXP WHERE REGEXP_LIKE(NAME,'[0-9]+'); select * from TEST_REGEXP where name regexp '\\d+'; select * from TEST_REGEXP where name regexp '[0-9]+';Hiveselect * from TEST_REGEXP where name rlike '\\d+' select * from TEST_REGEXP where name rlike '[0-9]+'; SparkSpark 和 Hive一样select * from TEST_REGEXP where name rlike '\\d+' select * from TEST_REGEXP where name rlike '[0-9]+'; select * from TEST_REGEXP where name regexp '\\d+'; select * from TEST_REGEXP where name regexp '[0-9]+'; 不过在代码里 \ 需要转义package com.dkl.blog.spark.sql import org.apache.spark.sql.SparkSession * Created by dongkelun on 2019/12/2 19:27 object Test_RegExp { def main(args: Array[String]): Unit = { val spark = SparkSession.builder().appName("NewUVDemo").master("local").getOrCreate() import spark.implicits._ import spark.sql val df = spark.sparkContext.parallelize( Array( ("001", "张三"), ("002", "张三1"), ("003", "2张三") )).toDF("ID","NAME") df.createOrReplaceTempView("TEST_REGEXP") sql("select * from TEST_REGEXP where name rlike '\\\\d+'").show() sql("select * from TEST_REGEXP where name rlike '[0-9]+'").show() sql("select * from TEST_REGEXP where name regexp '\\\\d+'").show() sql("select * from TEST_REGEXP where name regexp '[0-9]+'").show() spark.close() }小结Oralce和Hive、Spark除了函数不同外,正则也多少有不同,比如上例中Oraqlce只有一个 \ 而Hive和Spark有两个\,具体的正则匹配规则可参考网上的资料,其中下面参考的资料中也有一些规则可以参考参考sparksql 正则匹配总结
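The queries above all go through the SQL parser; as a small addition, the same filter can be written with the DataFrame API using Column.rlike, where only Scala-level escaping is needed because the pattern is handed straight to the regex engine. A sketch with the same sample data:

```scala
import org.apache.spark.sql.SparkSession

object Test_RegExp_DF {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("Test_RegExp_DF").master("local").getOrCreate()
    import spark.implicits._

    val df = Seq(("001", "张三"), ("002", "张三1"), ("003", "2张三")).toDF("ID", "NAME")

    // only one backslash pair here, since the string does not pass through the SQL parser
    df.filter($"NAME".rlike("\\d+")).show()
    df.filter($"NAME".rlike("[0-9]+")).show()

    spark.stop()
  }
}
```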
前言之前用Eclipse+sbt+Scala,sbt 不支持Java,如果项目里包含Java文件打包会报错,现在有同时用Java和Scala的需求,比如写一个Java的类,用Scala去调用,所以改用Maven,第一次用,将过程记录下来。首先安装Scala插件然后新建一个maven project建好之后,配一下maven 如图:这个时候是不能使用Scala的,如图配置 Project StructureFile=>Project Structure=>Libraries=>+=>Scala SDK=>选择一个Scala…如图:这时候就可以使用Scala了,创建一个测试类,运行测试object TestScala { def main(args: Array[String]): Unit = { println("Test Scala") }新建scala文件夹上面的scala是放在java包里,我们要单独建个scala的包在src/main下面新建scala文件夹,然后右键=>Mark Directory as=>Sources Root,这样就可以在scala文件下新建类了,自己可以测试一下打包到此,可以直接在idea里运行java和scala代码了,但是打的包里不包含scala的class文件(包含java的),还需要配置pompom<?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>com.dkl</groupId> <artifactId>MavenJavaAndScala</artifactId> <version>1.0-SNAPSHOT</version> <properties> <maven.compiler.source.version>1.8</maven.compiler.source.version> <maven.compiler.target.version>1.8</maven.compiler.target.version> <encoding>UTF-8</encoding> <scala.version>2.11.8</scala.version> <scala.compat.version>2.11</scala.compat.version> </properties> <dependencies> <dependency> <groupId>org.scala-lang</groupId> <artifactId>scala-library</artifactId> <version>${scala.version}</version> </dependency> <dependency> <groupId>org.scala-lang</groupId> <artifactId>scala-compiler</artifactId> <version>${scala.version}</version> </dependency> </dependencies> <build> <plugins> <plugin> <groupId>org.scala-tools</groupId> <artifactId>maven-scala-plugin</artifactId> <version>2.15.2</version> <executions> <execution> <id>scala-compile-first</id> <goals> <goal>compile</goal> </goals> <configuration> <includes> <include>**/*.scala</include> </includes> </configuration> </execution> <execution> <id>scala-test-compile</id> <goals> <goal>testCompile</goal> </goals> </execution> </executions> </plugin> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <configuration> <source>${maven.compiler.source.version}</source> <target>${maven.compiler.target.version}</target> </configuration> </plugin> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-jar-plugin</artifactId> <version>2.4</version> <!-- The configuration of the plugin --> <configuration> <!-- Configuration of the archiver --> <archive> <!-- 生成的jar中,不要包含pom.xml和pom.properties这两个文件 --> <addMavenDescriptor>false</addMavenDescriptor> <!-- Manifest specific configuration --> <manifest> <!-- 是否要把第三方jar放到manifest的classpath中 --> <addClasspath>true</addClasspath> <!-- 生成的manifest中classpath的前缀,因为要把第三方jar放到lib目录下,所以classpath的前缀是lib/ --> <classpathPrefix>lib/</classpathPrefix> </manifest> </archive> </configuration> </plugin> </plugins> </build> </project>这样配置完成后再打包就可以看到scala的class了打包后命令行测试scala -classpath MavenJavaAndScala-1.0-SNAPSHOT.jar TestScala Test Scalagithttps://github.com/dongkelun/MavenJavaAndScala
前言本人新手,本文记录简单的ELKB单机部署,ELKB分别指elasticsearch、logstash、kibana、filebeat,用的当前官网最新版本7.2.0,日志用的Nginx产生的日志。Nginx可以参考我这篇:Nginx 安装配置,我本次用的Nginx和这篇文章是一样的,包括前端。环境:Centos7 先将常用环境配置好(CentOS 初始环境配置),jdk版本为1.81、安装包下载地址:https://www.elastic.co/cn/downloads/wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.2.0-linux-x86_64.tar.gz wget https://artifacts.elastic.co/downloads/logstash/logstash-7.2.0.tar.gz wget https://artifacts.elastic.co/downloads/kibana/kibana-7.2.0-linux-x86_64.tar.gz wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.2.0-linux-x86_64.tar.gz2、解压tar -xzf elasticsearch-7.2.0-linux-x86_64.tar.gz tar -xzf logstash-7.2.0.tar.gz tar -xzf kibana-7.2.0-linux-x86_64.tar.gz tar -xzf filebeat-7.2.0-linux-x86_64.tar.gz3、创建elk用户elasticsearch和kibana 要求以非root用户启动,不信可以自己试试~groupadd elk useradd elk -g elk chown -R elk:elk elasticsearch-7.2.0 chown -R elk:elk kibana-7.2.0-linux-x86_64 4、elasticsearch创建数据和日志文件夹,并修改权限mkdir -pv /data/elk/{data,logs} chown -R elk:elk /data/elk/ 4.1 修改配置文件vim elasticsearch-7.2.0/config/elasticsearch.yml path.data: /data/elk/data path.logs: /data/elk/logs 其他配置项根据自己需求修改4.2 启动su - elk cd elasticsearch-7.2.0 nohup bin/elasticsearch & 查看启动日志,看是否有异常,如果有,根据提示修改 tail -f nohup.out 测试,如下证明启动成功$ curl 127.0.0.1:9200 "name" : "ambari.master.com", "cluster_name" : "elasticsearch", "cluster_uuid" : "WlKMJOBcTduH9FGLKj7N2w", "version" : { "number" : "7.2.0", "build_flavor" : "default", "build_type" : "tar", "build_hash" : "508c38a", "build_date" : "2019-06-20T15:54:18.811730Z", "build_snapshot" : false, "lucene_version" : "8.0.0", "minimum_wire_compatibility_version" : "6.8.0", "minimum_index_compatibility_version" : "6.0.0-beta1" "tagline" : "You Know, for Search" }5、logstashroot下cd logstash-7.2.0/ cp config/logstash-sample.conf ./ 可以自己看一下这个配置文件里的内容,根据自己的需求修改,本次使用默认 nohup bin/logstash -f logstash-sample.conf & 看一下启动日志是否成功6 kibanasu - elk cd kibana-7.2.0-linux-x86_64 vim config/kibana.yml 修改host,这样可以在其他机器上的浏览器,用ip+端口去访问kibananohup bin/kibana & 启动,Kibana也不能用root用户启动nohup bin/kibana & 在浏览器里,输入yourip:5601 访问成功即代表启动成功7 filebeatroot下 cd filebeat-7.2.0-linux-x86_64 下面主要是将filebeat.inputs的enabled改为true,paths改为需要采集的日志文件,这里使用的Nginx的日志,然后将output.elasticsearch注释掉,output.logstash打开,也就是用filebeat将日志采集传给logstash,然后再由logstash传给elasticsearch,而不是默认的直接传,注意:当然还有其他默认的注释的我没有贴出来filebeat.inputs: # Change to true to enable this input configuration. enabled: true # Paths that should be crawled and fetched. Glob based paths. paths: - /var/log/nginx/*.log filebeat.config.modules: # Glob pattern for configuration loading path: ${path.config}/modules.d/*.yml # Set to true to enable config reloading reload.enabled: false # Period on which files under path should be checked for changes #reload.period: 10s #==================== Elasticsearch template setting ========================== setup.template.settings: index.number_of_shards: 1 #-------------------------- Elasticsearch output ------------------------------ #output.elasticsearch: # Array of hosts to connect to. # hosts: ["localhost:9200"] # Optional protocol and basic auth credentials. #protocol: "https" #username: "elastic" #password: "changeme" #----------------------------- Logstash output -------------------------------- output.logstash: # The Logstash hosts hosts: ["localhost:5044"] # Configure processors to enhance or manipulate events generated by the beat. 
processors: - add_host_metadata: ~ - add_cloud_metadata: ~启动 nohup ./filebeat -e -c filebeat.yml & 8 Kibana界面配置启动好了之后,在浏览器里访问前端页面(Nginx作为服务器),这样使Nginx产生访问日志,之后再按下面图片的顺序进行操作点击Discover如果配置正确,并且有Nginx日志产生,那么就会出现如标1的显示,然后再按步骤填写2,出现3,再点击4~再点击Discover,就会看到Nginx的日志了9、追加日志数据源创建新的测试日志,当然你有现成的日志文件,比如后端Springboot的日志,可以直接用这个文件就可以了mkdir -p /data/test/log echo 'test1' >> /data/test/log/test.log echo '测试中文' >> /data/test/log/test.log 然后修改filebeat的配置文件,添加第二个数据源- type: log enabled: true paths: - /data/test/log/*.log 然后重启filebeat,再刷新kibana,即可看到新的日志到此,顺利完成了简单的第一步,后续可以自己个性化修改配置等,等我学习有了进展,再更新~
前言本文讲如何将Vue项目的dist文件夹部署到Github Page上,目的是可以在线访问前端效果,这样不需要自己购买服务器,当然任何静态文件夹都可以这样做,不止局限于Vue操作步骤1、首先在Git上建立一个项目,如vue-echarts-map2、然后将本地项目push到远程master (非必须)echo "# vue-echarts-map" >> README.md git init git add README.md git commit -m "first commit" git remote add origin https://github.com/dongkelun/vue-echarts-map.git git push -u origin master3、主要是下面这一步,将打包后的dist文件夹push到gh-pagesnpm run build git checkout -b gh-pages git add -f dist git commit -m 'first commit' git subtree push --prefix dist origin gh-pages这样就可以直接在浏览器查看效果了http://dongkelun.com/vue-echarts-map备注:1、我这里做了域名绑定 dongkelun.github.io=>dongkelun.com 没有域名的直接访问http://dongkelun.github.io/vue-echarts-map2、还有首先你先创建一个类似yourgithubname.github.io这样格式的GitHub仓库,然后按上面把dist文件夹(即静态文件夹)push到gh-pages分支,就会自动部署了。域名绑定只需在dongkelun.github.io仓库做一次即可,即加一个CNAME,然后域名加解析,这里不做详细说明~3、如果你不用gh-pages分支,或者你不想分享源代码,你可以直接将dist文件夹push到master分支,但是需要自己在Setting里设置,用哪个分支部署,默认是gh-pages分支如echarts-map参考:如何在 GitHub Pages 上部署 vue-cli 项目每日英语1、binomial adj. 二项式的;双名的,二种名称的 n. 二项式;双名词组,二种名称;成对词2、multinomial adj. 多项的 n. 多项式3、outcome n. 结果,结局;成果4、variant 变量(variate) n. 变体;转化 adj. 不同的;多样的5、Elastic Net 弹性网络 机器学习算法与Python实践(9) - 弹性网络(Elastic Net) Elastic net regularizationLogistic regression is a popular method to predict a categorical response. It is a special case of Generalized Linear models that predicts the probability of the outcomes. In spark.ml logistic regression can be used to predict a binary outcome by using binomial logistic regression, or it can be used to predict a multiclass outcome by using multinomial logistic regression. Use the family parameter to select between these two algorithms, or leave it unset and Spark will infer the correct variant.以下为谷歌浏览器翻译:逻辑回归是预测分类响应的常用方法。广义线性模型的一个特例是预测结果的概率。在spark.ml逻辑回归中,可以使用二项逻辑回归来预测二元结果,或者可以使用多项逻辑回归来预测多类结果。使用该family 参数在这两种算法之间进行选择,或者保持不设置,Spark将推断出正确的变量。
前言记录一个异常场景Spark读取CSV文件,文件里的某些内容编码格式有问题或者有特殊字符一种情况是 62,我碰到的这种,另一种是63,查资料查的java.lang.ArrayIndexOutOfBoundsException:62 java.lang.ArrayIndexOutOfBoundsException:63 解决方法情况1:将GBK编码的文件转文UTF-8(我碰见的),当然这种情况也可以用情况2中的解决办法解决~情况2: .option("multiline", "true") 或指定schema详情参考:CSV schema inferring fails on some UTF-8 chars注:这上面标的是2.3.0版本的Bug,在2.2.2, 2.3.1, 2.4.0已修复,我没有测试是否在这些版本修复.option(“multiline”, “true”) 这种办法详细异常19/05/30 14:03:15 ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0) java.lang.ArrayIndexOutOfBoundsException: 62 at org.apache.spark.unsafe.types.UTF8String.numBytesForFirstByte(UTF8String.java:191) at org.apache.spark.unsafe.types.UTF8String.numChars(UTF8String.java:206) at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source) at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43) at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$10$$anon$1.hasNext(WholeStageCodegenExec.scala:614) at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:253) at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:247) at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:830) at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:830) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324) at org.apache.spark.rdd.RDD.iterator(RDD.scala:288) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324) at org.apache.spark.rdd.RDD.iterator(RDD.scala:288) at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87) at org.apache.spark.scheduler.Task.run(Task.scala:109) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345) at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) at java.lang.Thread.run(Unknown Source) 19/05/30 14:03:15 ERROR TaskSetManager: Task 0 in stage 0.0 failed 1 times; aborting job Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost, executor driver): java.lang.ArrayIndexOutOfBoundsException: 62 at org.apache.spark.unsafe.types.UTF8String.numBytesForFirstByte(UTF8String.java:191) at org.apache.spark.unsafe.types.UTF8String.numChars(UTF8String.java:206) at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source) at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43) at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$10$$anon$1.hasNext(WholeStageCodegenExec.scala:614) at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:253) at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:247) at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:830) at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:830) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324) at org.apache.spark.rdd.RDD.iterator(RDD.scala:288) at 
org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324) at org.apache.spark.rdd.RDD.iterator(RDD.scala:288) at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87) at org.apache.spark.scheduler.Task.run(Task.scala:109) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345) at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) at java.lang.Thread.run(Unknown Source) Driver stacktrace: at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1599) at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1587) at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1586) at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59) at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48) at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1586) at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831) at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831) at scala.Option.foreach(Option.scala:257) at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:831) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1820) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1769) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1758) at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48) at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:642) at org.apache.spark.SparkContext.runJob(SparkContext.scala:2027) at org.apache.spark.SparkContext.runJob(SparkContext.scala:2048) at org.apache.spark.SparkContext.runJob(SparkContext.scala:2067) at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:363) at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:38) at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collectFromPlan(Dataset.scala:3272) at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2484) at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2484) at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3253) at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77) at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3252) at org.apache.spark.sql.Dataset.head(Dataset.scala:2484) at org.apache.spark.sql.Dataset.take(Dataset.scala:2698) at org.apache.spark.sql.execution.datasources.csv.TextInputCSVDataSource$.infer(CSVDataSource.scala:148) at org.apache.spark.sql.execution.datasources.csv.CSVDataSource.inferSchema(CSVDataSource.scala:63) at org.apache.spark.sql.execution.datasources.csv.CSVFileFormat.inferSchema(CSVFileFormat.scala:57) at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$8.apply(DataSource.scala:202) at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$8.apply(DataSource.scala:202) at scala.Option.orElse(Option.scala:289) at org.apache.spark.sql.execution.datasources.DataSource.getOrInferFileFormatSchema(DataSource.scala:201) at 
org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:392) at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:239) at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:227) at org.apache.spark.sql.DataFrameReader.csv(DataFrameReader.scala:594) at org.apache.spark.sql.DataFrameReader.csv(DataFrameReader.scala:473) at com.hs.alipay.AlipayFeature$.main(AlipayFeature.scala:20) at com.hs.alipay.AlipayFeature.main(AlipayFeature.scala) Caused by: java.lang.ArrayIndexOutOfBoundsException: 62 at org.apache.spark.unsafe.types.UTF8String.numBytesForFirstByte(UTF8String.java:191) at org.apache.spark.unsafe.types.UTF8String.numChars(UTF8String.java:206) at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source) at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43) at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$10$$anon$1.hasNext(WholeStageCodegenExec.scala:614) at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:253) at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:247) at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:830) at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:830) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324) at org.apache.spark.rdd.RDD.iterator(RDD.scala:288) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324) at org.apache.spark.rdd.RDD.iterator(RDD.scala:288) at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87) at org.apache.spark.scheduler.Task.run(Task.scala:109) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345) at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) at java.lang.Thread.run(Unknown Source)每日英语1、unified 统一的;一致标准的2、analytics 英 [ænə’lɪtɪks] 美 [,ænl’ɪtɪks] n. [化学][数] 分析学;解析学3、Lightning 闪电⚡ Lightning-fast 快如闪电的4、scale 规模 large-scale 大规模的5、achieves 实现Apache Spark™ is a unified analytics engine for large-scale data processing.Apache Spark™是用于大规模数据处理的统一分析引擎。
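A reader-side sketch I added of the workarounds mentioned above (the file path is a placeholder; converting the file to UTF-8 remains the most reliable fix, and support for the encoding option varies by Spark version):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types.{StringType, StructField, StructType}

object CsvEncodingExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("CsvEncodingExample").master("local[*]").getOrCreate()

    val path = "D:/data/test_gbk.csv" // placeholder

    // workaround 2 from the post: multiline parsing
    val df1 = spark.read.option("header", "true").option("multiLine", "true").csv(path)

    // or supply the schema explicitly so that schema inference never touches the bad bytes
    val schema = StructType(Seq(StructField("id", StringType), StructField("name", StringType)))
    val df2 = spark.read.option("header", "true").schema(schema).csv(path)

    // the reader can also be told the charset directly (support varies by version)
    val df3 = spark.read.option("header", "true").option("encoding", "GBK").option("multiLine", "true").csv(path)

    df1.show(); df2.show(); df3.show()
    spark.stop()
  }
}
```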
前言本文总结MySQL和Oracle的字符串截取函数的用法工作中MySQL和Oracle都用,有时会碰到两种数据库SQL用法的不同,就会上网查一下,但是时间久了,就忘记了,好记性不如烂笔头,所以写个笔记备忘一下~1、MySql函数:SUBSTRING 或 SUBSTR1.1 语法位置SUBSTRING(string,position); SUBSTRING(string FROM position); 位置和长度SUBSTRING(string,position,length); SUBSTRING(string FROM position FOR length); 1.2 下标-Hell0World正数1234567891011负数-11-10-9-8-7-6-5-4-3-2-11.3 示例详解1.3.1 位置position>0,从position(包含)开始SELECT SUBSTRING('Hello World',1); SELECT SUBSTRING('Hello World' FROM 7); Hello World Worldposition=0返回空SELECT SUBSTRING('Hello World',0); position<0,与position为正时是一样的,下面的sql的效果是相同的sELECT SUBSTRING('Hello World',-11); SELECT SUBSTRING('Hello World' FROM -5); 当position的绝对值>LENGTH(string)时,返回空,和position=0时一样SELECT SUBSTRING('Hello World',12); SELECT SUBSTRING('Hello World',-12); 1.3.2 位置和长度position的用法和上面讲的是一样的,下面仅总结lengthlength>0时返回length个字符数,当length>string的可截取的长度时,只返回可截取的长度SELECT SUBSTRING('Hello World',1,5); SELECT SUBSTRING('Hello World',6,20); Hello World length<=0时返回空SELECT SUBSTRING('Hello World',1,0); SELECT SUBSTRING('Hello World',1,-20);下面等价SELECT SUBSTRING('Hello World',6,20); SELECT SUBSTRING('Hello World' FROM 6 FOR 20); 可通过LENGTH查看字符串的长度验证(当length>string的可截取的长度时)SELECT LENGTH(SUBSTRING('Hello World' FROM 6 FOR 20)); 2、Oracle函数:SUBSTR和MySql不同的是没有SUBSTRING2.1 语法位置SUBSTR(string,position); SUBSTR(string FROM position); 位置和长度SUBSTR(string,position,length); SUBSTR(string FROM position FOR length); 2.2 下标-Hell0World正数0或1234567891011负数-11-10-9-8-7-6-5-4-3-2-12.3 示例详解2.3.1 位置与MySQL一样,position>0和position<0时是一样的效果,参照上面的下标对应即可,不同的是,position=0和position=1的效果是一样的。下面三个sql效果一样SELECT SUBSTR('Hello World',0) FROM DUAL; SELECT SUBSTR('Hello World',1) FROM DUAL; SELECT SUBSTR('Hello World',-11) FROM DUAL; Hello World 当position的绝对值>LENGTH(string)时,返回[NULL]SELECT SUBSTR('Hello World',12) FROM DUAL SELECT SUBSTR('Hello World',-12) FROM DUAL; [NULL] 2.3.2 位置和长度position的用法和上面讲的是一样的,下面仅总结lengthlength>0时返回length个字符数,当length>string的可截取的长度时,只返回可截取的长度,这点和MySQL相同SELECT SUBSTR('Hello World',1,5) FROM DUAL; SELECT SUBSTR('Hello World',6,20) FROM DUAL; Hello World length<=0时返回[NULL],这点和MySQL不同SELECT SUBSTR('Hello World',1,0) FROM DUAL; SELECT SUBSTR('Hello World',6,-20) FROM DUAL; [NULL] 3 比较总结最后比较一下MySQL和Oracle的不同1、 MySQL函数为SUBSTRING 或 SUBSTR,Oracle只有SUBSTR2、 position=0时MySQL返回空,而Oracle和position=1时一样3、 当position的绝对值>LENGTH(string)时和length<=0时,MySQL返回空,而Oracle返回[NULL]
Preface
The goal is exactly what the title says. This post follows https://www.jianshu.com/p/54daac2cc924; the purpose is simply to keep a note of what I found online. Everything below applies to projects created with vue cli 2.

Open the browser automatically
In config/index.js, find autoOpenBrowser and set it to true.

Get the local IP
Method 1
Add the following at the top of config/index.js:

```js
const os = require('os')
let localhost = ''
try {
  const network = os.networkInterfaces()
  localhost = network[Object.keys(network)[0]][1].address
} catch (e) {
  localhost = 'localhost';
}
```

Then find host and change it to host: localhost.
Result: see the code at https://github.com/dongkelun/vue-echarts-map/blob/autopip-v1/config/index.js

Method 2
Install address:

npm i address -D

Then in config/index.js:

```js
const address = require('address')
const localhost = address.ip() || 'localhost'
```

Again find host and change it to host: localhost.
Result: see the code at https://github.com/dongkelun/vue-echarts-map/blob/master/config/index.js
前言总结Spark开发中遇到的异常及解决办法,之前也写过几篇,之所以不再一个异常写一篇博客,是因为现在Spark用的比较熟悉了一些,觉得没必要把异常信息写那么详细了,所以就把异常总结在一篇博客里了,这样既能备忘也方便查找。1、之前的几篇spark-submit报错:Exception in thread “main” java.sql.SQLException:No suitable driverhive查询报错:java.io.IOException:org.apache.parquet.io.ParquetDecodingExceptionspark-submit报错:Application application_1529650293575_0148 finished with failed status2、 spark.executor.memoryOverhead堆外内存(默认是executor内存的10%),当数据量比较大的时候,如果按默认的就会有下面的异常,导致程序崩溃异常container killed by YARN for exceeding memory limits. 1.8 GB of 1.8 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead.解决具体值根据实际情况配置新版 --conf spark.executor.memoryOverhead=2048 --conf spark.yarn.executor.memoryOverhead=2048新版如果用旧版,会:WARN SparkConf: The configuration key 'spark.yarn.executor.memoryOverhead' has been deprecated as of Spark 2.3 and may be removed in the future. Please use the new key 'spark.executor.memoryOverhead' instead.3、No more replicas available for rdd_异常19/01/08 12:36:46 WARN BlockManagerMasterEndpoint: No more replicas available for rdd_3250_73 ! 19/01/08 12:36:46 WARN BlockManagerMasterEndpoint: No more replicas available for rdd_12_38 ! 19/01/08 12:36:46 WARN BlockManagerMasterEndpoint: No more replicas available for rdd_3250_38 ! 19/01/08 12:36:46 WARN BlockManagerMasterEndpoint: No more replicas available for rdd_3250_148 ! 19/01/08 12:36:46 WARN BlockManagerMasterEndpoint: No more replicas available for rdd_3250_6 ! 19/01/08 12:36:46 WARN BlockManagerMasterEndpoint: No more replicas available for rdd_3250_112 ! 19/01/08 12:36:46 WARN BlockManagerMasterEndpoint: No more replicas available for rdd_12_100 !解决增大executor的内存--executor-memory 4G4、Failed to allocate a page异常19/01/09 09:12:39 WARN TaskMemoryManager: Failed to allocate a page (1048576 bytes), try again. 19/01/09 09:12:41 WARN TaskMemoryManager: Failed to allocate a page (1048576 bytes), try again. 19/01/09 09:12:41 WARN NioEventLoop: Unexpected exception in the selector loop. java.lang.OutOfMemoryError: GC overhead limit exceeded at java.lang.Integer.valueOf(Integer.java:832) at sun.nio.ch.EPollSelectorImpl.updateSelectedKeys(EPollSelectorImpl.java:120) at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:98) at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) at io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:753) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:409) at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138) at java.lang.Thread.run(Thread.java:748) 19/01/09 09:12:46 WARN TransportChannelHandler: Exception in connection from /172.16.29.236:47012 java.lang.OutOfMemoryError: GC overhead limit exceeded 19/01/09 09:12:44 WARN AbstractChannelHandlerContext: An exception 'java.lang.OutOfMemoryError: GC overhead limit exceeded' [enable DEBUG level for full stacktrace] was thrown by a user handler's exceptionCaught() method while handling the following exception: java.lang.OutOfMemoryError: GC overhead limit exceeded 19/01/09 09:12:42 WARN TaskMemoryManager: Failed to allocate a page (1048576 bytes), try again. 
Exception in thread "dispatcher-event-loop-11" java.lang.OutOfMemoryError: GC overhead limit exceeded 19/01/09 09:12:51 WARN TaskMemoryManager: Failed to allocate a page (1048576 bytes), try again. 19/01/09 09:12:53 WARN TransportChannelHandler: Exception in connection from /172.16.29.233:34226 java.lang.OutOfMemoryError: GC overhead limit exceeded解决增大driver的内存 --driver-memory 6G 参考TaskMemoryManager: Failed to allocate a page, try again5、Uncaught exception in thread task-result-getter-3异常19/01/10 09:31:50 ERROR Utils: Uncaught exception in thread task-result-getter-3 java.lang.OutOfMemoryError: Java heap space at java.lang.reflect.Array.newArray(Native Method) at java.lang.reflect.Array.newInstance(Array.java:75) at java.io.ObjectInputStream.readArray(ObjectInputStream.java:1938) at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1566) at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2286) at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2210) at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2068) at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1572) at java.io.ObjectInputStream.readArray(ObjectInputStream.java:1974) at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1566) at java.io.ObjectInputStream.readObject(ObjectInputStream.java:430) at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75) at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:108) at org.apache.spark.scheduler.DirectTaskResult.value(TaskResult.scala:88) at org.apache.spark.scheduler.TaskResultGetter$$anon$3$$anonfun$run$1.apply$mcV$sp(TaskResultGetter.scala:94) at org.apache.spark.scheduler.TaskResultGetter$$anon$3$$anonfun$run$1.apply(TaskResultGetter.scala:63) at org.apache.spark.scheduler.TaskResultGetter$$anon$3$$anonfun$run$1.apply(TaskResultGetter.scala:63) at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1991) at org.apache.spark.scheduler.TaskResultGetter$$anon$3.run(TaskResultGetter.scala:62) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Exception in thread "task-result-getter-3" java.lang.OutOfMemoryError: Java heap space at java.lang.reflect.Array.newArray(Native Method) at java.lang.reflect.Array.newInstance(Array.java:75) at java.io.ObjectInputStream.readArray(ObjectInputStream.java:1938) at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1566) at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2286) at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2210) at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2068) at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1572) at java.io.ObjectInputStream.readArray(ObjectInputStream.java:1974) at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1566) at java.io.ObjectInputStream.readObject(ObjectInputStream.java:430) at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75) at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:108) at org.apache.spark.scheduler.DirectTaskResult.value(TaskResult.scala:88) at org.apache.spark.scheduler.TaskResultGetter$$anon$3$$anonfun$run$1.apply$mcV$sp(TaskResultGetter.scala:94) at 
org.apache.spark.scheduler.TaskResultGetter$$anon$3$$anonfun$run$1.apply(TaskResultGetter.scala:63) at org.apache.spark.scheduler.TaskResultGetter$$anon$3$$anonfun$run$1.apply(TaskResultGetter.scala:63) at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1991) at org.apache.spark.scheduler.TaskResultGetter$$anon$3.run(TaskResultGetter.scala:62) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) 19/01/10 09:31:51 ERROR Utils: Uncaught exception in thread task-result-getter-0 java.lang.OutOfMemoryError: Java heap space at java.lang.reflect.Array.newArray(Native Method) at java.lang.reflect.Array.newInstance(Array.java:75) at java.io.ObjectInputStream.readArray(ObjectInputStream.java:1938) at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1566) at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2286) at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2210) at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2068) at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1572) at java.io.ObjectInputStream.readArray(ObjectInputStream.java:1974) at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1566) at java.io.ObjectInputStream.readObject(ObjectInputStream.java:430) at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75) at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:108) at org.apache.spark.scheduler.DirectTaskResult.value(TaskResult.scala:88) at org.apache.spark.scheduler.TaskResultGetter$$anon$3$$anonfun$run$1.apply$mcV$sp(TaskResultGetter.scala:94) at org.apache.spark.scheduler.TaskResultGetter$$anon$3$$anonfun$run$1.apply(TaskResultGetter.scala:63) at org.apache.spark.scheduler.TaskResultGetter$$anon$3$$anonfun$run$1.apply(TaskResultGetter.scala:63) at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1991) at org.apache.spark.scheduler.TaskResultGetter$$anon$3.run(TaskResultGetter.scala:62) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Exception in thread "task-result-getter-0" java.lang.OutOfMemoryError: Java heap space at java.lang.reflect.Array.newArray(Native Method) at java.lang.reflect.Array.newInstance(Array.java:75) at java.io.ObjectInputStream.readArray(ObjectInputStream.java:1938) at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1566) at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2286) at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2210) at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2068) at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1572) at java.io.ObjectInputStream.readArray(ObjectInputStream.java:1974) at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1566) at java.io.ObjectInputStream.readObject(ObjectInputStream.java:430) at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75) at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:108) at org.apache.spark.scheduler.DirectTaskResult.value(TaskResult.scala:88) at 
org.apache.spark.scheduler.TaskResultGetter$$anon$3$$anonfun$run$1.apply$mcV$sp(TaskResultGetter.scala:94) at org.apache.spark.scheduler.TaskResultGetter$$anon$3$$anonfun$run$1.apply(TaskResultGetter.scala:63) at org.apache.spark.scheduler.TaskResultGetter$$anon$3$$anonfun$run$1.apply(TaskResultGetter.scala:63) at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1991) at org.apache.spark.scheduler.TaskResultGetter$$anon$3.run(TaskResultGetter.scala:62) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Exception in thread "main" java.util.concurrent.TimeoutException: Futures timed out after [300 seconds] at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219) at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223) at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:201) at org.apache.spark.sql.execution.exchange.BroadcastExchangeExec.doExecuteBroadcast(BroadcastExchangeExec.scala:136) at org.apache.spark.sql.execution.InputAdapter.doExecuteBroadcast(WholeStageCodegenExec.scala:367) at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeBroadcast$1.apply(SparkPlan.scala:149) at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeBroadcast$1.apply(SparkPlan.scala:145) at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:160) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151) at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:157) at org.apache.spark.sql.execution.SparkPlan.executeBroadcast(SparkPlan.scala:145) at org.apache.spark.sql.execution.joins.BroadcastHashJoinExec.prepareBroadcast(BroadcastHashJoinExec.scala:135) at org.apache.spark.sql.execution.joins.BroadcastHashJoinExec.codegenInner(BroadcastHashJoinExec.scala:232) at org.apache.spark.sql.execution.joins.BroadcastHashJoinExec.doConsume(BroadcastHashJoinExec.scala:102) at org.apache.spark.sql.execution.CodegenSupport$class.consume(WholeStageCodegenExec.scala:181) at org.apache.spark.sql.execution.ProjectExec.consume(basicPhysicalOperators.scala:36) at org.apache.spark.sql.execution.ProjectExec.doConsume(basicPhysicalOperators.scala:66) at org.apache.spark.sql.execution.CodegenSupport$class.consume(WholeStageCodegenExec.scala:181) at org.apache.spark.sql.execution.joins.SortMergeJoinExec.consume(SortMergeJoinExec.scala:36) at org.apache.spark.sql.execution.joins.SortMergeJoinExec.doProduce(SortMergeJoinExec.scala:633) at org.apache.spark.sql.execution.CodegenSupport$$anonfun$produce$1.apply(WholeStageCodegenExec.scala:88) at org.apache.spark.sql.execution.CodegenSupport$$anonfun$produce$1.apply(WholeStageCodegenExec.scala:83) at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:160) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151) at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:157) at org.apache.spark.sql.execution.CodegenSupport$class.produce(WholeStageCodegenExec.scala:83) at org.apache.spark.sql.execution.joins.SortMergeJoinExec.produce(SortMergeJoinExec.scala:36) at org.apache.spark.sql.execution.ProjectExec.doProduce(basicPhysicalOperators.scala:46) at org.apache.spark.sql.execution.CodegenSupport$$anonfun$produce$1.apply(WholeStageCodegenExec.scala:88) at 
org.apache.spark.sql.execution.CodegenSupport$$anonfun$produce$1.apply(WholeStageCodegenExec.scala:83) at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:160) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151) at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:157) at org.apache.spark.sql.execution.CodegenSupport$class.produce(WholeStageCodegenExec.scala:83) at org.apache.spark.sql.execution.ProjectExec.produce(basicPhysicalOperators.scala:36) at org.apache.spark.sql.execution.joins.BroadcastHashJoinExec.doProduce(BroadcastHashJoinExec.scala:97) at org.apache.spark.sql.execution.CodegenSupport$$anonfun$produce$1.apply(WholeStageCodegenExec.scala:88) at org.apache.spark.sql.execution.CodegenSupport$$anonfun$produce$1.apply(WholeStageCodegenExec.scala:83) at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:160) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151) at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:157) at org.apache.spark.sql.execution.CodegenSupport$class.produce(WholeStageCodegenExec.scala:83) at org.apache.spark.sql.execution.joins.BroadcastHashJoinExec.produce(BroadcastHashJoinExec.scala:39) at org.apache.spark.sql.execution.ProjectExec.doProduce(basicPhysicalOperators.scala:46) at org.apache.spark.sql.execution.CodegenSupport$$anonfun$produce$1.apply(WholeStageCodegenExec.scala:88) at org.apache.spark.sql.execution.CodegenSupport$$anonfun$produce$1.apply(WholeStageCodegenExec.scala:83) at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:160) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151) at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:157) at org.apache.spark.sql.execution.CodegenSupport$class.produce(WholeStageCodegenExec.scala:83) at org.apache.spark.sql.execution.ProjectExec.produce(basicPhysicalOperators.scala:36) at org.apache.spark.sql.execution.WholeStageCodegenExec.doCodeGen(WholeStageCodegenExec.scala:524) at org.apache.spark.sql.execution.WholeStageCodegenExec.doExecute(WholeStageCodegenExec.scala:576) at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:136) at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:132) at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:160) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151) at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:157) at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:132) at org.apache.spark.sql.execution.DeserializeToObjectExec.doExecute(objects.scala:89) at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:136) at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:132) at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:160) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151) at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:157) at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:132) at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:81) at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:81) at 
org.apache.spark.sql.Dataset.rdd$lzycompute(Dataset.scala:2975) at org.apache.spark.sql.Dataset.rdd(Dataset.scala:2973) at com.hs.xlzf.task.route.ServiceAreaFreq$.save_service_freq(ServiceAreaFreq.scala:161) at com.hs.xlzf.task.route.ServiceAreaFreq$.main(ServiceAreaFreq.scala:36) at com.hs.xlzf.task.route.ServiceAreaFreq.main(ServiceAreaFreq.scala) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52) at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:896) at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:198) at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:228) at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:137) at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)解决增大driver的内存 --driver-memory 6G 具体看参考参考Driver端返回大量结果数据时出现内存不足错误6、spark.driver.maxResultSize异常ERROR TaskSetManager: Total size of serialized results of 30 tasks (1108.5 MB) is bigger than spark.driver.maxResultSize (1024.0 MB)解决增大spark.driver.maxResultSize --conf spark.driver.maxResultSize=2G 7、Dropping event from queue eventLog异常19/05/20 11:49:54 ERROR AsyncEventQueue: Dropping event from queue eventLog. This likely means one of the listeners is too slow and cannot keep up with the rate at which tasks are being started by the scheduler. 19/05/20 11:49:54 WARN AsyncEventQueue: Dropped 1 events from eventLog since Thu Jan 01 08:00:00 CST 1970.解决增大spark.scheduler.listenerbus.eventqueue.capacity(默认为10000) --conf spark.scheduler.listenerbus.eventqueue.capacity=100000 旧版用spark.scheduler.listenerbus.eventqueue.size19/05/21 14:38:15 WARN SparkConf: The configuration key 'spark.scheduler.listenerbus.eventqueue.size' has been deprecated as of Spark 2.3 and may be removed in the future. Please use the new key 'spark.scheduler.listenerbus.eventqueue.capacity' instead.具体看参考参考Spark里Histroy Server丢task,job和Stage问题调研
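The fixes in sections 2 through 7 above all boil down to submit-time memory and listener-bus settings. As a follow-up, here is a minimal sketch (not from the original post) for double-checking at runtime that the values passed to spark-submit actually took effect; the keys are the standard Spark configuration names discussed above, and "<default>" simply means the key was not set explicitly:

import org.apache.spark.sql.SparkSession

object ConfCheck {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("ConfCheck").getOrCreate()
    val conf = spark.sparkContext.getConf

    // Print the effective values of the settings discussed above; a missing key
    // means Spark will fall back to its built-in default.
    Seq(
      "spark.executor.memory",
      "spark.executor.memoryOverhead",
      "spark.driver.memory",
      "spark.driver.maxResultSize",
      "spark.scheduler.listenerbus.eventqueue.capacity"
    ).foreach { key =>
      println(s"$key = ${conf.getOption(key).getOrElse("<default>")}")
    }

    spark.stop()
  }
}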
前言记录自己在工作开发中遇到的SQL优化问题1、避免用in 和 not in解决方案:用exists 和 not exists代替用join代替not exists示例not in: select stepId,province_code,polyline from route_step where stepId not in (select stepId from stepIds)not exists:select stepId,province_code,polyline from route_step where not exists (select stepId from stepIds where route_step.stepId = stepIds.stepId)自己遇到的问题上面not in会抛出异常18/12/26 11:20:26 WARN TaskSetManager: Stage 3 contains a task of very large size (17358 KB). The maximum recommended task size is 100 KB. Exception in thread "dispatcher-event-loop-11" java.lang.OutOfMemoryError: Java heap space首先会导致某个task数量很大,且总task数量很少(task数目不等于rdd或df的分区数,目前不知道原因),接着报java.lang.OutOfMemoryError,试了很多方法,最后用not exists,没有上面的异常效率not in慢的原因是 not in不走索引疑问:not in是非相关子查询,not exists是相关子查询,而从理论上来说非相关子查询比相关子查询效率高(看下面的参考),但是这里却相反,矛盾,不知道为啥~参考博客:[笔记] SQL性能优化 - 避免使用 IN 和 NOT INSQL优化——避免使用Not IN嵌套查询:相关子查询和非相关子查询2、in 会导致数据倾斜longitudeAndLatitudes和lineIds都有160个分区,且数据平衡(每个分区的数目差不多),但是下面的语句则有问题select * from longitudeAndLatitudes where lineId in (select lineId from lineIds) 虽然分区数还是160,但是只有两三个分区有数,其他分区的数量都为0,这样就导致数据倾斜,程序执行很慢,如果非要用in的话,那么需要repartition一下3、大表join小表策略:将小表广播(broadcast)参数:spark.sql.autoBroadcastJoinThreshold 默认值10485760(10M),当小表或df的大小小于此值,Spark会自动的将该表广播到每个节点上原理:join是个shuffle类算子,shuffle时,各个节点上会先将相同的key写到本地磁盘,之后再通过网络传输从其他节点的磁盘文件在拉取相同的key,因此shuffle可能会发生大量的磁盘IO和网络传输,性能很低,而broadcast先将小表广播到每个节点,这样join时都是在本地完成,不需要网络传输,所以会提升性能注意:broadcast join 也称为replicated join 或者 map-side join具体操作提交代码时适当调大阈值,如将阈值修改为100M,具体看自己环境的内存限制和小表的大小--conf spark.sql.autoBroadcastJoinThreshold=104857600 如何看是否进行了broadcast join:以df为例(df是join之后的结果) df.explain 如果为broadcast join,则打印:== Physical Plan == *(14) Project [lineId#81, stepIds#85, userId#1, freq#2] +- *(14) BroadcastHashJoin [lineId#81], [lineId#42], Inner, BuildLeft ...能看到关键字BroadcastHashJoin即可,否则打印:== Physical Plan == *(17) Project [lineId#42, stepIds#85, freq#2, userId#1] +- *(17) SortMergeJoin [lineId#42], [lineId#81], Inner ...能看到SortMergeJoin即可查看阈值:val threshold = spark.conf.get("spark.sql.autoBroadcastJoinThreshold").toInt threshold / 1024 / 1024参考SparkSQL之broadcast joinSpark-sql Join优化=>(cache+BroadCast)4、写MySQL慢Spark df批量写MySQL很慢,如我900万条数据写需要5-10个小时解决办法:在url后面加上&rewriteBatchedStatements=true加上之后,写数据10分钟左右,快很多。个人环境经验:MySQL不用加就没问题,MariaDB需要加,也就是不同的MySQL版本不一样5、run at ThreadPoolExecutor.java:1149之前就在Spark Web UI经常看到这个描述,但不知道是干啥,现在在总结上面的broadcast join发现了规律:当两个表join,如果为BroadcastHashJoin则有这个描述,如果为SortMergeJoin则没有。BroadcastHashJoin 用ThreadPool进行异步广播 源码见:BroadcastHashJoinExec和BroadcastExchangeExec参考:What are ThreadPoolExecutors jobs in web UI’s Spark Jobs?
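A complementary option to raising spark.sql.autoBroadcastJoinThreshold (section 3 above) is to request the broadcast explicitly with the broadcast() hint, which takes precedence over the size estimate. This is a hedged sketch, not from the original article; the table names longitudeAndLatitudes and lineIds are borrowed from the discussion above purely for illustration:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.broadcast

val spark = SparkSession.builder().appName("BroadcastJoinDemo").getOrCreate()

// Hypothetical inputs: a large fact table and a small dimension table.
val longitudeAndLatitudes = spark.table("longitudeAndLatitudes")
val lineIds = spark.table("lineIds")

// Mark the small side for broadcast explicitly; the physical plan should then show
// BroadcastHashJoin instead of SortMergeJoin even if the size estimate exceeds the threshold.
val joined = longitudeAndLatitudes.join(broadcast(lineIds), Seq("lineId"))

joined.explain() // look for "BroadcastHashJoin" in the plan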
前言前面学习总结了Hive分区表,现在学习总结一下Spark如何操作Hive分区表,包括利用Spark DataFrame创建Hive的分区表和Spark向已经存在Hive分区表里插入数据,并记录一下遇到的问题以及如何解决。1、Spark创建分区表只写主要代码,完整代码见附录val data = Array(("001", "张三", 21, "2018"), ("002", "李四", 18, "2017")) val df = spark.createDataFrame(data).toDF("id", "name", "age", "year") //可以将append改为overwrite,这样如果表已存在会删掉之前的表,新建表 df.write.mode("append").partitionBy("year").saveAsTable("new_test_partition") 然后在Hive命令行里看一下,新建的表是否有分区字段year用命令 desc new_test_partition; 或show create table new_test_partition; 根据下面的结果可以看到新建的表确实有分区字段yearhive> desc new_test_partition; id string name string age int year string # Partition Information # col_name data_type comment year string Time taken: 0.432 seconds, Fetched: 9 row(s)2、向已存在的表插入数据2.1 Spark创建的分区表这种情况其实和建表语句一样就可以了不需要开启动态分区df.write.mode("append").partitionBy("year").saveAsTable("new_test_partition") 当然也有其他方式插入数据,会在后面讲到。2.2 在Hive命令行创建的表这里主要指和Spark创建的表的文件格式不一样,Spark默认的文件格式为PARQUET,为在命令行Hive默认的文件格式为TEXTFILE,这种区别,也导致了异常的出现。需要开启动态分区不开启会有异常: Exception in thread "main" org.apache.spark.SparkException: Dynamic partition strict mode requires at least one static partition column. To turn this off set hive.exec.dynamic.partition.mode=nonstrict2.2.1 建表用Hive分区表学习总结的建表语句建表(之前已经建过就不用重复建了)。create table test_partition ( id string comment 'ID', name string comment '名字', age int comment '年龄' comment '测试分区' partitioned by (year int comment '年') ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' ; 2.2.2 异常试着用上面的插入语句插入数据df.write.mode("append").partitionBy("year").saveAsTable("test_partition") 抛出异常Exception in thread "main" org.apache.spark.sql.AnalysisException: The format of the existing table dkl.test_partition is `HiveFileFormat`. It doesn't match the specified format `ParquetFileFormat`.;原因就是上面说的文件格式不一致造成的。2.2.3 解决办法用fomat指定格式df.write.mode("append").format("Hive").partitionBy("year").saveAsTable("test_partition")2.3 其他方法df.createOrReplaceTempView("temp_table") sql("insert into test_partition select * from temp_table") df.write.insertInto("test_partition")其中insertInto不需要也不能将df进行partitionBy,否则会抛出异常df.write.partitionBy("year").insertInto("test_partition") Exception in thread "main" org.apache.spark.sql.AnalysisException: insertInto() can't be used together with partitionBy(). Partition columns have already be defined for the table. 
It is not necessary to use partitionBy().;3、完整代码package com.dkl.blog.spark.hive import org.apache.spark.sql.SparkSession * 博客:Spark操作Hive分区表 * https://dongkelun.com/2018/12/04/sparkHivePatition/ object SparkHivePatition { def main(args: Array[String]): Unit = { val spark = SparkSession .builder() .appName("SparkHive") .master("local") .config("spark.sql.parquet.writeLegacyFormat", true) .enableHiveSupport() .getOrCreate() import spark.sql val data = Array(("001", "张三", 21, "2018"), ("002", "李四", 18, "2017")) val df = spark.createDataFrame(data).toDF("id", "name", "age", "year") //创建临时表 df.createOrReplaceTempView("temp_table") //切换hive的数据库 sql("use dkl") // 1、创建分区表,可以将append改为overwrite,这样如果表已存在会删掉之前的表,新建表 df.write.mode("append").partitionBy("year").saveAsTable("new_test_partition") //2、向Spark创建的分区表写入数据 df.write.mode("append").partitionBy("year").saveAsTable("new_test_partition") sql("insert into new_test_partition select * from temp_table") df.write.insertInto("new_test_partition") //开启动态分区 sql("set hive.exec.dynamic.partition.mode=nonstrict") //3、向在Hive里用Sql创建的分区表写入数据,抛出异常 // df.write.mode("append").partitionBy("year").saveAsTable("test_partition") // 4、解决方法 df.write.mode("append").format("Hive").partitionBy("year").saveAsTable("test_partition") sql("insert into test_partition select * from temp_table") df.write.insertInto("test_partition") //这样会抛出异常 // df.write.partitionBy("year").insertInto("test_partition") spark.stop }相关阅读Hive分区表学习总结Spark连接Hive(spark-shell和Eclipse两种方式)
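One extra caveat worth appending to section 2.3 above: unlike saveAsTable, insertInto resolves columns by position rather than by name, so the DataFrame's column order must match the table schema, with the partition column last. A minimal sketch, not from the original post, assuming the new_test_partition table created earlier already exists:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("InsertIntoOrder")
  .enableHiveSupport()
  .getOrCreate()

val data = Array(("003", "王五", 30, "2019"))
val df = spark.createDataFrame(data).toDF("id", "name", "age", "year")

// insertInto matches columns by position, not by name, so fix the order explicitly;
// the partition column (year) comes last, matching the table definition above.
df.select("id", "name", "age", "year")
  .write
  .insertInto("new_test_partition")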
前言用了这么久的Hive,而没有认真的学习和使用过Hive的分区,现在学习记录一下。分区表一般在数据量比较大,且有明确的分区字段时使用,这样用分区字段作为查询条件查询效率会比较高。Hive分区分为静态分区和动态分区1、建表语句先用一个有分区字段的分区表进行学习,静态分区和动态分区的建表语句是一样的。create table test_partition ( id string comment 'ID', name string comment '名字' comment '测试分区' partitioned by (year int comment '年') ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' ; 2、插入语句静态分区和动态分区的插入数据的语句是不一样的,所以分开2.1 静态分区静态分区是在语句中指定分区字段为某个固定值,多次重复插入数据是为了看看数据如何在hdfs上存储的。2.1.1 INSERT INTOinsert into table test_partition partition(year=2018) values ('001','张三'); insert into table test_partition partition(year=2018) values ('001','张三'); insert into table test_partition partition(year=2018) values ('002','李四');2.1.2 LOAD DATAdata.txt002,李四 003,王五 load data local inpath '/root/dkl/data/data.txt' into table test_partition partition (year =2018); load data local inpath '/root/dkl/data/data.txt' into table test_partition partition (year =2018); load data local inpath '/root/dkl/data/data.txt' into table test_partition partition (year =2017);2.1.3 查询及结果2.1.4 HDFS存储形式分区2018的路径为/apps/hive/warehouse/dkl.db/test_partition/year=2018 /apps/hive/warehouse 为hive的仓库路径dkl.db dkl为数据库名称test_partition为表名year为分区字段名2.2 动态分区2.2.1 INSERT INTOinsert into table test_partition partition(year) values ('001','张三',2016); 动态分区默认不开启,执行上面的语句会报错:insert into table test_partition partition(year) values ('001','张三',2016); FAILED: SemanticException [Error 10096]: Dynamic partition strict mode requires at least one static partition column. To turn this off set hive.exec.dynamic.partition.mode=nonstrict开启: set hive.exec.dynamic.partition.mode=nonstrict; 然后再执行就可以了注:上面的命令是临时生效,退出hive重新进hive需要重新执行上面的命令,才能动态分区2.2.2 load data不能使用load data进行动态分区插入data.txt002,李四,2015 003,王五,2014 load data local inpath '/root/dkl/data/data.txt' into table test_partition partition (year); hive> load data local inpath '/root/dkl/data/data.txt' into table test_partition partition (year); FAILED: NullPointerException null可以使用另一种方法解决首先创建没有分区的表create table test ( id string comment 'ID', name string comment '名字', year int comment '年' comment '测试' ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' ; 先将数据load进test表load data local inpath '/root/dkl/data/data.txt' into table test; 然后从表test,动态分区插入test_partition中insert into table test_partition partition(year) select * from test; 如果后面select具体字段的话,需要保证顺序一致,把分区字段放在最后。insert into table test_partition partition(year) select id,name,year from test; 3、查看分区信息show partitions test_partition; hive> show partitions test_partition; year=2017 year=2018 Time taken: 0.719 seconds, Fetched: 2 row(s) 4、添加分区字段查了一下,不能添加新的分区字段4.1 添加新分区alter table test_partition add partition (year=2012); 这样就会新建对应的hdfs路径下一个year=2012的文件夹当然也可以指定localtion,这样就不会在默认的路径下建立文件夹了 alter table test_partition add partition (year=2010) location '/tmp/dkl'; 这样如果/tmp/dkl文件夹不存在的话就会新建文件夹,如果存在就会把该文件夹下的所有的文件加载到Hive表,有一点需要注意,如果删除该分区的话,对应的文件夹也会删掉,删除语法请参考后面的第6部分。4.2 添加非分区字段alter table test_partition add columns(age int); 这样新加的字段是在非分区字段的最后,在分区字段之前不过这里有一个bug,就是往表里新插入数据后,新增的age字段查询全部显示为NULL(其实数据已经存在):新增加的分区是不存在这个bug的,比如之前没有year=2011这个分区,那么新增的话不会存在bug分区在添加age字段之前已存在(即使该分区下没有任何数据),bug存在解决方法:对已存在的分区执行下面的sql即可,以分区2018为例 alter table test_partition partition(year=2018) add columns(age int); 5、多个分区字段以两个分区字段为例5.1 建表create table test_partition2 ( id string comment 'ID', name string comment '名字' comment '测试两个分区' partitioned by (year int comment '年',month int comment '月') ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' ;5.2 HDFS存储格式看一下多个分区的的表如何在HDFS上存储的,用静态分区的形式插入一条记录:insert into table test_partition2 partition(year=2018,month=12) values ('001','张三'); 
/apps/hive/warehouse/dkl.db/test_partition2/year=2018/month=12 6、删除分区只能删除某个分区,如删除分区2018,而不能删除整个分区year字段。6.1 单分区表alter table test_partition drop partition(year=2018); 6.2 多分区表6.2.1 删除YEAR=2018,MONTH=12alter table test_partition2 drop partition(year=2018,month=12); 6.2.2 删除YEAR=2018year=2018所有的月份都会删除alter table test_partition2 drop partition(year=2018); 6.2.3 删除MONTH=10所有月份等于10的分区都会删除,无论year=2018,还是year=2017…alter table test_partition2 drop partition(month=10);
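Tying this back to Spark: the staging-table workaround from section 2.2.2 (load into a non-partitioned table first, then insert with a dynamic partition) can also be driven entirely through spark.sql. A hedged sketch, assuming the test and test_partition tables defined above; it is not part of the original post:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("DynamicPartitionInsert")
  .enableHiveSupport()
  .getOrCreate()

import spark.sql

// Dynamic partitioning is refused in strict mode; relax it for this session only.
sql("set hive.exec.dynamic.partition.mode=nonstrict")

// Insert from the non-partitioned staging table; the partition column (year) must be
// the last column in the SELECT so Hive can route each row to its partition.
sql("insert into table test_partition partition(year) select id, name, year from test")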
前言最近其实一直在用Echarts写前端,之前也想过总结一下Echarts的用法,但是官网的例子已经很全了。写这篇博客是因为Echarts官网把很多地图的例子都去掉了,且不能下载地图Json的数据,而相关的博客又很少,搜到两个,但是不很符合自己的想法,所以就想自己实现总结一下最基本的地图钻取,代码上传到GitHub,这样便于后面再有相关需求的时候,直接从GitHub上下载下来,在这个基础上修改添加功能就好了。1、演示地址暂时在没有下级地图的时候会直接返回到第一级中国地图,后面可能改为:提示没有下级地图,然后增加一个回到一级地图的按钮http://dongkelun.com/echarts-map2、动图演示一张一张的截图,图片太多了,就先学了一下录制gif3、代码其中地图的json数据是同事之前下载的(可能不全,几个没用的Json没有去掉),在别人的GitHub上也可以找到代码已上传到GitHub:https://github.com/dongkelun/echarts-map4、部署本项目为静态网页,但由于需要获取.json文件的数据,所以不能直接打开index.html(会报js的错误,可以自己试试)将本项目复制到服务器下,如tomcat的webapps目录下,启动tomcat,然后在浏览器里输入http://localhost:8080/echarts-map即可因为自己在学Vue,后面可能会用Vue重新实现一下,并添加一些相对复杂的功能,比如添加数据,使每个省的颜色不一样,更多样化一些。之所以没有先用Vue实现,也没有实现比较复杂的效果,是因为考虑到不是每个人都会Vue,所以用最简单的静态html+css+js实现,这样能使每个人都能看懂,而且可以最基础的地图钻取的模版。后面如果用Vue实现的话,会新建一个项目并上传到GitHub,并及时更新本博客。更新(2019.02.19)Vue项目地址:https://github.com/dongkelun/vue-echarts-map附录附上核心代码:index.html<!DOCTYPE html> <html> <head> <meta charset="utf-8"> <meta name="viewport" content="width=device-width,initial-scale=1.0"> <title>Echarts 地图钻取</title> <link rel="shortcut icon" href=./favicon.ico> <link rel="stylesheet" href="./css/index.css" type="text/css"> <script type="text/javascript" src="./js/echarts.min.js"></script> <script src="https://code.jquery.com/jquery-3.0.0.min.js"></script> <script type="text/javascript" src="./js/index.js"></script> <script type="text/javascript" src="./js/china-main-city-map.js"></script> <script type="text/javascript" src="./js/rem.js"></script> </head> <body> <div class='title'>Echarts中国地图三级钻取</div> <div class="box"> <button class= "backBtn" onclick="back()">返回上级</button> <div id="mapChart" class="chart"></div> </div> </body> </html>index.cssbody{ background-image: url('../asset/images/beijingtupian.png'); .title{ width: 100%; height: 10vh; text-align: center; color:#fff; font-size: 2em; line-height: 10vh; .box { position: absolute; width: 90%; height: 80vh; left:5%; top:10%; background-color:rgba(24,27,52,0.62); .chart{ position: relative; height: 90%; top:10%; .backBtn{ position: absolute; top:0; background-color: #00C298; border: 0; color:#fff; height: 27px; font-family: Microsoft Yahei; font-size: 1em; cursor: pointer; }index.js$(function () {//dom加载后执行 mapChart('mapChart') * 根据Json里的数据构造Echarts地图所需要的数据 * @param {} mapJson function initMapData(mapJson) { var mapData = []; for (var i = 0; i < mapJson.features.length; i++) { mapData.push({ name: mapJson.features[i].properties.name, //id:mapJson.features[i].id return mapData; * 返回上一级地图 function back() { if (mapStack.length != 0) {//如果有上级目录则执行 var map = mapStack.pop(); $.get('./asset/json/map/' + map.mapId + '.json', function (mapJson) { registerAndsetOption(myChart, map.mapId, map.mapName, mapJson, false) //返回上一级后,父级的ID、Name随之改变 parentId = map.mapId; parentName = map.mapName; * Echarts地图 //中国地图(第一级地图)的ID、Name、Json数据 var chinaId = 100000; var chinaName = 'china' var chinaJson = null; //记录父级ID、Name var mapStack = []; var parentId = null; var parentName = null; //Echarts地图全局变量,主要是在返回上级地图的方法中会用到 var myChart = null; function mapChart(divid) { $.get('./asset/json/map/' + chinaId + '.json', function (mapJson) { chinaJson = mapJson; myChart = echarts.init(document.getElementById(divid)); registerAndsetOption(myChart, chinaId, chinaName, mapJson, false) parentId = chinaId; parentName = 'china' myChart.on('click', function (param) { var cityId = cityMap[param.name] if (cityId) {//代表有下级地图 $.get('./asset/json/map/' + cityId + '.json', function (mapJson) { registerAndsetOption(myChart, cityId, param.name, mapJson, true) } else { 
//没有下级地图,回到一级中国地图,并将mapStack清空 registerAndsetOption(myChart, chinaId, chinaName, chinaJson, false) mapStack = [] parentId = chinaId; parentName = chinaName; // $.get('./asset/json/map/'+param.data.id+'.json', function (mapJson,status) { // registerAndsetOption(myChart,param.data.id,param.name,mapJson,true) // }).fail(function () { // registerAndsetOption(myChart,chinaId,'china',chinaJson,false) // }); * @param {*} myChart * @param {*} id 省市县Id * @param {*} name 省市县名称 * @param {*} mapJson 地图Json数据 * @param {*} flag 是否往mapStack里添加parentId,parentName function registerAndsetOption(myChart, id, name, mapJson, flag) { echarts.registerMap(name, mapJson); myChart.setOption({ series: [{ type: 'map', map: name, itemStyle: { normal: { areaColor: 'rgba(23, 27, 57,0)', borderColor: '#1dc199', borderWidth: 1, data: initMapData(mapJson) if (flag) {//往mapStack里添加parentId,parentName,返回上一级使用 mapStack.push({ mapId: parentId, mapName: parentName parentId = id; parentName = name;
前言自己有个需求,如题,需要获取HDFS路径下所有的文件名,然后根据文件名用Spark进行后续操作。想了一下用Spark好像不太容易获取到,还要递归的去获取子目录下的文件名,于是查了一下,最后用Hadoop的API搞定,这里记录下,方便以后会用到。1、数据测试路径:/tmp/dkl,全路径名hdfs://ambari.master.com:8020/tmp/dkl用hadoop的命令查看一下,该路径下都有哪些文件和文件夹 hadoop fs -ls /tmp/dkl 附图:2、完整代码不多做解释了,直接看代码和结果吧(稍微封装了一下,有其它需求可以参考改写)package com.dkl.leanring.spark.hdfs import java.net.URI; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.FileSystem; import org.apache.hadoop.fs.Path; import org.apache.hadoop.fs.FileStatus; import org.apache.hadoop.fs.FileUtil; import scala.collection.mutable.ArrayBuffer * 主要目的是打印某个hdfs目录下所有的文件名,包括子目录下的 * 其他的方法只是顺带示例,以便有其它需求可以参照改写 object FilesList { def main(args: Array[String]): Unit = { val path = "hdfs://ambari.master.com:8020/tmp/dkl" println("打印所有的文件名,包括子目录") listAllFiles(path) println("打印一级文件名") listFiles(path) println("打印一级目录名") listDirs(path) println("打印一级文件名和目录名") listFilesAndDirs(path) // getAllFiles(path).foreach(println) // getFiles(path).foreach(println) // getDirs(path).foreach(println) def getHdfs(path: String) = { val conf = new Configuration() FileSystem.get(URI.create(path), conf) def getFilesAndDirs(path: String): Array[Path] = { val fs = getHdfs(path).listStatus(new Path(path)) FileUtil.stat2Paths(fs) /**************直接打印************/ * 打印所有的文件名,包括子目录 def listAllFiles(path: String) { val hdfs = getHdfs(path) val listPath = getFilesAndDirs(path) listPath.foreach(path => { if (hdfs.getFileStatus(path).isFile()) println(path) else { listAllFiles(path.toString()) * 打印一级文件名 def listFiles(path: String) { getFilesAndDirs(path).filter(getHdfs(path).getFileStatus(_).isFile()).foreach(println) * 打印一级目录名 def listDirs(path: String) { getFilesAndDirs(path).filter(getHdfs(path).getFileStatus(_).isDirectory()).foreach(println) * 打印一级文件名和目录名 def listFilesAndDirs(path: String) { getFilesAndDirs(path).foreach(println) /**************直接打印************/ /**************返回数组************/ def getAllFiles(path: String): ArrayBuffer[Path] = { val arr = ArrayBuffer[Path]() val hdfs = getHdfs(path) val listPath = getFilesAndDirs(path) listPath.foreach(path => { if (hdfs.getFileStatus(path).isFile()) { arr += path } else { arr ++= getAllFiles(path.toString()) def getFiles(path: String): Array[Path] = { getFilesAndDirs(path).filter(getHdfs(path).getFileStatus(_).isFile()) def getDirs(path: String): Array[Path] = { getFilesAndDirs(path).filter(getHdfs(path).getFileStatus(_).isDirectory()) /**************返回数组************/ }3、结果
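Since the stated goal is to hand these file names to Spark afterwards, here is a hedged follow-up sketch (not in the original post) that reuses getAllFiles from the FilesList object above and passes the resulting paths straight to spark.read.text; the path is the same test directory used above:

import org.apache.spark.sql.SparkSession
import com.dkl.leanring.spark.hdfs.FilesList.getAllFiles

object ReadAllFiles {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("ReadAllFiles").master("local").getOrCreate()

    val path = "hdfs://ambari.master.com:8020/tmp/dkl"
    // Collect every file, including those in sub-directories, then let Spark
    // read them all as a single DataFrame of text lines.
    val files = getAllFiles(path).map(_.toString)
    val df = spark.read.text(files: _*)
    println(df.count())

    spark.stop()
  }
}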
前言最近一直在写前端,用的是JSP,但是很多人都说JSP已经过时了。既然做了几个月的前端,那就把前端学的好一点,学点新技术,跟上潮流。感觉Vue挺火的,所以这几天学了一下Vue,开始看的官方文档,然后直接用GitHub上比较火的项目进行学习,本地跑起来,看看效果、源码和代码结构,学习相关的插件等,并部署了其中一个项目到我的二级域名下:vue.dongkelun.com(感觉这个项目里的东西挺全的)。因为一直用的github上别人搭建好的项目进行学习,担心和用Vue CLI创建的项目的代码结构有区别,所以就看了一下Vue CLI的官方文档,进行简单搭建,看看到底有什么区别,做到心中有数。本文的环境:win10Vue CLI官方文档:https://cli.vuejs.org/zh/1、前提首先你要安装好nodejs和yarn,直接在官网下载安装包,一键安装即可,不需要什么环境配置,我安装的是最新版本(node-v10.13.0、yarn-1.12.3)2、安装同时写Vue CLI 3和Vue CLI 2 的原因是官方默认的是3,而自己学习的GitHub上的项目为2~2.1 新版本 Vue CLI 3写这篇文章的时候官网默认的为Vue CLI 3npm install -g @vue/cli yarn global add @vue/cli2.2 旧版本 Vue CLI 2npm install -g @vue/cli-init # `vue init` 的运行效果将会跟 `vue-cli@2.x` 相同 3、创建项目Vue CLI 3:vue create hello-world Vue CLI 2:vue init webpack my-project 一直按回车为默认选项,手动设置请参考官方文档4、运行项目Vue CLI 3:cd hello-world yarn serveVue CLI 2:cd my-project/ npm run dev5、验证在浏览器输入localhost:8080,看见下图图所示的效果即为成功6、构建yarn build npm run build 会生成一个dist文件夹7、部署7.1 本地预览dist 目录需要启动一个 HTTP 服务器来访问 (除非你已经将 baseUrl 配置为了一个相对的值),所以以 file:// 协议直接打开 dist/index.html 是不会工作的。在本地预览生产环境构建最简单的方式就是使用一个 Node.js 静态文件服务器,例如 serve安装serve:npm install -g serve 启动:serve -s dist INFO: Accepting connections at http://localhost:5000 预览:http://localhost:50007.2 部署到tomcat默认的配置直接部署到tomcat,是有错误的(在浏览器检查里发现是找不到对应的静态文件,原因是路径不对),需要修改一下配置,版本2和版本3的配置也不一样Vue CLI 3:在hello-world新建vue.config.js,并添加如下内容module.exports = { baseUrl: process.env.NODE_ENV === 'production' ? '/hello-world/' : '/' }这里参考官方文档:https://cli.vuejs.org/zh/guide/deployment.html#platform-guidesVue CLI 2:找到my-project/config/index.js文件中build对应的assetsPublicPath键值,将其修改为assetsPublicPath: './', 这里参考:https://blog.csdn.net/Dear_Mr/article/details/72871919重新build,将生成的dist文件夹复制到tomcat目录下的webapps文件夹下,可以将dist文件夹改名为hello-world,那么访问路径就为ip/hello-world,也可以不改名那么访问路径就为ip/dist,效果查看vue.dongkelun.com/hello-world、vue.dongkelun.com/dist8、其他问题8.1 npm run dev启动后,用Ctrl+c,关闭不了对应的进程原因是在Git Bash Here里执行的命令,一种方法是在windows 的shell里执行命令(输入cmd进入),另一种办法是如果不想放弃git的命令行的话,每次启动完用tskill node杀死进程。8.2 依然想用 npm run dev 启动上面的hello world程序办法是修改package.json文件,找到scripts下的”serve”: “vue-cli-service serve”,复制这一行到下一行将key(serve)改为dev即可自己可以对比一下,两个版本的package.json文件的差异,多尝试一下就了解了相关阅读Centos7 Tomcat9 安装笔记
前言 Preface: Spark offers several ways to set the log level; this post mainly records how to set it through spark-submit.
1. Motivation
Spark's default log level is INFO (log4j.rootCategory=INFO, console), so a run prints a lot of log output I don't need, which is noisy and buries the important lines; I only want WARN and ERROR. On the test environment or in Eclipse I used the other approaches described below, but on the production cluster I am not allowed to modify the cluster configuration files (not my responsibility), and setting the level in code did not take effect (I have not found out why yet), so in the end I solved it by setting the log level through spark-submit.
2. Setting it via spark-submit
spark-submit --conf "spark.driver.extraJavaOptions=-Dlog4j.configuration=file:log4j.properties"
Here log4j.properties is my local log4j config file, copied to the machine where spark-submit is executed.
Reference: https://blog.csdn.net/xueba207/article/details/50436684
3. Other approaches
3.1 Modify the cluster configuration file
cd $SPARK_HOME/conf
cp log4j.properties.template log4j.properties
vim log4j.properties
Change log4j.rootCategory=INFO, console to log4j.rootCategory=WARN, console
3.2 In Eclipse
Put log4j.properties under the project's src/main/resources.
Spark's default log config file: org/apache/spark/log4j-defaults.properties
3.3 In code (did not take effect)
spark.sparkContext.setLogLevel("WARN")
Setting it in code did not take effect; the reason is still unknown.
4. Summary
1. On your own test cluster, just modify log4j.properties under $SPARK_HOME/conf.
2. In Eclipse, put log4j.properties under the project's src/main/resources.
3. On a production cluster where you cannot modify the configuration files, use the spark-submit --conf approach described above.
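Besides the approaches above, another commonly used trick (not from the original post, so treat it as a hedged sketch) is to set the log4j logger levels directly in code before the session is built; Spark 2.x ships log4j 1.x, so the org.apache.log4j API is on the classpath. This only adjusts driver-side logging and is no substitute for a proper log4j.properties on the cluster:

import org.apache.log4j.{Level, Logger}
import org.apache.spark.sql.SparkSession

object LogLevelDemo {
  def main(args: Array[String]): Unit = {
    // Quiet the noisiest loggers before building the session.
    Logger.getLogger("org.apache.spark").setLevel(Level.WARN)
    Logger.getLogger("org.apache.hadoop").setLevel(Level.WARN)

    val spark = SparkSession.builder().appName("LogLevelDemo").master("local").getOrCreate()
    // Also adjust the root level on the driver through the Spark API.
    spark.sparkContext.setLogLevel("WARN")

    spark.range(10).show()
    spark.stop()
  }
}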
前言(摘自Spark快速大数据分析)基于分区对数据进行操作可以让我们避免为每个数据元素进行重复的配置工作。诸如打开数据库连接或创建随机数生成器等操作,都是我们应当尽量避免为每个元素都配置一次的工作。Spark 提供基于分区的map 和foreach,让你的部分代码只对RDD 的每个分区运行一次,这样可以帮助降低这些操作的代价。当基于分区操作RDD 时,Spark 会为函数提供该分区中的元素的迭代器。返回值方面,也返回一个迭代器。除mapPartitions() 外,Spark 还有一些别的基于分区的操作符,见下表:函数名调用所提供的返回的对于RDD[T]的函数签名mapPartitions()该分区中元素的迭代器返回的元素的迭代器f: (Iterator[T]) → Iterator[U]mapPartitionsWithIndex()分区序号,以及每个分区中的元素的迭代器返回的元素的迭代器f: (Int, Iterator[T]) → Iterator[U]foreachPartitions()元素迭代器无f: (Iterator[T]) → Unit首先给出上面三个算子的具体代码示例。1、mapPartitions与map类似,不同点是map是对RDD的里的每一个元素进行操作,而mapPartitions是对每一个分区的数据(迭代器)进行操作,具体可以看上面的表格。下面同时用map和mapPartitions实现WordCount,看一下mapPartitions的用法以及与map的区别package com.dkl.leanring.spark.test import org.apache.spark.sql.SparkSession object WordCount { def main(args: Array[String]): Unit = { val spark = SparkSession.builder().master("local").appName("WordCount").getOrCreate() val sc = spark.sparkContext val input = sc.parallelize(Seq("Spark Hive Kafka", "Hadoop Kafka Hive Hbase", "Java Scala Spark")) val words = input.flatMap(line => line.split(" ")) val counts = words.map(word => (word, 1)).reduceByKey { (x, y) => x + y } println(counts.collect().mkString(",")) val counts1 = words.mapPartitions(it => it.map(word => (word, 1))).reduceByKey { (x, y) => x + y } println(counts1.collect().mkString(",")) spark.stop() }2、mapPartitionsWithIndex和mapPartitions一样,只是多了一个分区的序号,下面的代码实现了将Rdd的元素数字n变为(分区序号,n*n)val rdd = sc.parallelize(1 to 10, 5) val res = rdd.mapPartitionsWithIndex((index, it) => { it.map(n => (index, n * n)) println(res.collect().mkString(" "))3、foreachPartitionsforeachPartitions和foreach类似,不同点也是foreachPartitions基于分区进行操作的rdd.foreachPartition(it => it.foreach(println)) 4、关于如何避免重复配置下面以打开数据库连接举例,需求是这样的:读取mysql表里的数据,做了一系列数据处理得到结果之后,需要修改我们mysql表里的每一条数据的状态,代表程序已经处理过了,下次不需要处理了。4.1 表以最简单表结构示例字段名注释ID主键、唯一标识ISDEAL程序是否处理过建表语句CREATE TABLE test ( id INTEGER NOT NULL AUTO_INCREMENT, isdeal INTEGER DEFAULT 0 NOT NULL, primary key(id) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_general_ci;4.2 不基于分区操作一共用两种方法4.2.1 第一种package com.dkl.leanring.spark.sql.mysql import org.apache.spark.sql.SparkSession object UpdateMysqlDemo { def main(args: Array[String]): Unit = { val spark = SparkSession.builder().appName("UpdateMysqlDemo").master("local").getOrCreate() val database_url = "jdbc:mysql://192.168.44.128:3306/test?useUnicode=true&characterEncoding=utf-8&useSSL=false" val user = "root" val password = "Root-123456" val df = spark.read .format("jdbc") .option("url", database_url) .option("dbtable", "(select * from test where isDeal=0 limit 5)a") .option("user", user) .option("password", password) .option("driver", "com.mysql.jdbc.Driver") .option("numPartitions", "5") .option("partitionColumn", "ID") .option("lowerBound", "1") .option("upperBound", "10") .load() import java.sql.{ Connection, DriverManager, ResultSet }; df.rdd.foreach(row => { val conn = DriverManager.getConnection(database_url, user, password) try { // Configure to be Read Only val statement = conn.createStatement(ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY) val prep = conn.prepareStatement(s"update test set isDeal=1 where id=?") val id = row.getAs[Int]("id") prep.setInt(1, id) prep.executeUpdate } catch { case e: Exception => e.printStackTrace } finally { conn.close() spark.stop() }上面的代码,取isDeal=0的前五条,因为造的数据量少,所以只取了前五条,然后指定了五个分区,这里只是一个代码示例,实际工作中应该数据量很大,每个分区肯定不止一条数据根据上面的代码,看到用这种方式的缺点是每一个元素都要创建一个数据库连接,这样频繁创建连接、关闭连接,在数据量很大的情况下,势必会对性能产生影响,但是优点是不用担心内存不够。4.2.2 第二种val conn = DriverManager.getConnection(database_url, user, password) try { val 
statement = conn.createStatement(ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY) val prep = conn.prepareStatement(s"update test set isDeal=1 where id=?") df.select("id").collect().foreach(row => { val id = row.getAs[Int]("id") prep.setInt(1, id) prep.executeUpdate } catch { case e: Exception => e.printStackTrace }这种方式的缺点是把要操作的数据全部转成scala数组,仅在Driver端执行,但是如果数据量很大的话,可能因为Driver内存不够大而抛出异常,优点是只建立一次数据库连接,在数据量不是特别大,且确定Driver的内存足够的时候,可以采取这种方式。4.3 基于分区的方式df.rdd.foreachPartition(it => { val conn = DriverManager.getConnection(database_url, user, password) try { val statement = conn.createStatement(ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY) val prep = conn.prepareStatement(s"update test set isDeal=1 where id=?") it.foreach(row => { val id = row.getAs[Int]("id") prep.setInt(1, id) prep.executeUpdate } catch { case e: Exception => e.printStackTrace } finally { conn.close() })这种方式就结合了上面两种方式的优点,基于分区的方式使得创建连接的次数不会那么多,然后每个分区的数据也可以平均分到每个节点的executor上,避免了内存不足产生的异常,当然前提是要合理的分配分区数,既不能让分区数太多,也不能让每个分区的数据太多,还有要注意数据倾斜的问题,因为当数据倾斜造成某个分区数据量太大同样造成OOM(内存溢出)。4.4 其他上面只是列举了一个例子,且只是在foreach这样的action算子里体现的,当然肯定也有需求需是在transformation里进行如数据库的连接这样的操作,大家可类比的使用mapPartitions即可5、其他优点(未证实)网上有很多博客提到mapPartitions还有其他优点,就是mapPartitions比map快,性能高,原因是因为map的function会执行rdd.count次,而mapPartitions的function则执行rdd.numPartitions次。但我并这么认为,因mapPartitions的function和map的function是不一样的,mapPartitions里的迭代器的每个元素还是都要执行一遍的,实际上也是执行rdd.count次。下面以其中一篇博客举例(只列出优点,大部分博客上的写的都一样的,应该出自同一篇博客吧~)博客地址:Spark—算子调优之MapPartitions提升Map类操作性能至于mapPartitions是否真的比map处理速度快,如果我有时间验证得到结果的话,我再更新一下这个地方~
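Coming back to the transformation case mentioned at the end of section 4.4: the same per-partition connection pattern carries over to mapPartitions, with one wrinkle, namely that the iterator must be fully consumed before the connection is closed. A hedged sketch, not from the original post, assuming the df read from MySQL in section 4.2.1 and the same table and credentials; it materializes each partition with toList for simplicity, which assumes a partition fits comfortably in executor memory:

import java.sql.DriverManager

// Same connection details as in the examples above.
val database_url = "jdbc:mysql://192.168.44.128:3306/test?useUnicode=true&characterEncoding=utf-8&useSSL=false"
val user = "root"
val password = "Root-123456"

// One connection per partition; enrich every row, then close the connection.
val enriched = df.rdd.mapPartitions { it =>
  val conn = DriverManager.getConnection(database_url, user, password)
  val prep = conn.prepareStatement("select isdeal from test where id=?")
  try {
    // toList forces the whole partition to be processed inside the try block,
    // so the connection is still open while rows are being looked up.
    it.map { row =>
      val id = row.getAs[Int]("id")
      prep.setInt(1, id)
      val rs = prep.executeQuery()
      val isDeal = if (rs.next()) rs.getInt("isdeal") else -1
      rs.close()
      (id, isDeal)
    }.toList.iterator
  } finally {
    conn.close() // closing the connection also closes the prepared statement
  }
}
enriched.collect().foreach(println)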
java.lang.UnsatisfiedLinkError:org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)