HiveStatement has a field named sessHandle:

```java
public class HiveStatement implements java.sql.Statement {
  // Represents the Hive session. Given the session ID you can find this
  // session's query logs on the Hive server or in the Hadoop directories.
  // In HiveServer2 the session ID is generated by the java.util.UUID class.
  private final TSessionHandle sessHandle;
  // ...
}
```

TSessionHandle has a getSessionId method that returns the session ID:

```java
public class TSessionHandle implements org.apache.thrift.TBase<TSessionHandle, TSessionHandle._Fields>, java.io.Serializable, Cloneable {
  private THandleIdentifier sessionId; // required

  public THandleIdentifier getSessionId() {
    return this.sessionId;
  }
  // ...
}
```

THandleIdentifier holds a guid and a secret:

```java
public class THandleIdentifier implements org.apache.thrift.TBase<THandleIdentifier, THandleIdentifier._Fields>, java.io.Serializable, Cloneable {
  private ByteBuffer guid;   // the 16 bytes of the session UUID
  private ByteBuffer secret; // required
  // ...
}
```

But guid is a ByteBuffer, and it is not obvious how to deserialize it back into a string. Reading the HiveServer2 source later, I found that hive-service.jar contains a HandleIdentifier class that deserializes the bytes of a THandleIdentifier:

```java
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements. Licensed under the Apache License,
 * Version 2.0; see http://www.apache.org/licenses/LICENSE-2.0
 */
package org.apache.hive.service.cli;

import java.nio.ByteBuffer;
import java.util.UUID;

import org.apache.hive.service.cli.thrift.THandleIdentifier;

/**
 * HandleIdentifier.
 */
public class HandleIdentifier {
  private final UUID publicId;
  private final UUID secretId;

  public HandleIdentifier() {
    publicId = UUID.randomUUID();
    secretId = UUID.randomUUID();
  }

  public HandleIdentifier(UUID publicId, UUID secretId) {
    this.publicId = publicId;
    this.secretId = secretId;
  }

  public HandleIdentifier(THandleIdentifier tHandleId) {
    ByteBuffer bb = ByteBuffer.wrap(tHandleId.getGuid());
    this.publicId = new UUID(bb.getLong(), bb.getLong());
    bb = ByteBuffer.wrap(tHandleId.getSecret());
    this.secretId = new UUID(bb.getLong(), bb.getLong());
  }

  public UUID getPublicId() {
    return publicId;
  }

  public UUID getSecretId() {
    return secretId;
  }

  public THandleIdentifier toTHandleIdentifier() {
    byte[] guid = new byte[16];
    byte[] secret = new byte[16];
    ByteBuffer guidBB = ByteBuffer.wrap(guid);
    ByteBuffer secretBB = ByteBuffer.wrap(secret);
    guidBB.putLong(publicId.getMostSignificantBits());
    guidBB.putLong(publicId.getLeastSignificantBits());
    secretBB.putLong(secretId.getMostSignificantBits());
    secretBB.putLong(secretId.getLeastSignificantBits());
    return new THandleIdentifier(ByteBuffer.wrap(guid), ByteBuffer.wrap(secret));
  }

  @Override
  public int hashCode() {
    final int prime = 31;
    int result = 1;
    result = prime * result + ((publicId == null) ? 0 : publicId.hashCode());
    result = prime * result + ((secretId == null) ? 0 : secretId.hashCode());
    return result;
  }

  @Override
  public boolean equals(Object obj) {
    if (this == obj) {
      return true;
    }
    if (obj == null) {
      return false;
    }
    if (!(obj instanceof HandleIdentifier)) {
      return false;
    }
    HandleIdentifier other = (HandleIdentifier) obj;
    if (publicId == null) {
      if (other.publicId != null) {
        return false;
      }
    } else if (!publicId.equals(other.publicId)) {
      return false;
    }
    if (secretId == null) {
      if (other.secretId != null) {
        return false;
      }
    } else if (!secretId.equals(other.secretId)) {
      return false;
    }
    return true;
  }

  @Override
  public String toString() {
    return publicId.toString();
  }
}
```
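The heart of this class is the byte-to-UUID conversion. It can be verified standalone with plain JDK code; the class and method names below are mine, but the ByteBuffer logic mirrors what HandleIdentifier does for the guid field:

```java
import java.nio.ByteBuffer;
import java.util.UUID;

public final class GuidCodec {
  // Serializes a UUID to the 16-byte big-endian layout carried by the guid field.
  public static byte[] toBytes(UUID id) {
    ByteBuffer bb = ByteBuffer.wrap(new byte[16]);
    bb.putLong(id.getMostSignificantBits());
    bb.putLong(id.getLeastSignificantBits());
    return bb.array();
  }

  // Reads the 16 bytes back into a UUID, exactly as HandleIdentifier does.
  public static UUID fromBytes(byte[] raw) {
    ByteBuffer bb = ByteBuffer.wrap(raw);
    return new UUID(bb.getLong(), bb.getLong());
  }

  public static void main(String[] args) {
    UUID original = UUID.randomUUID();
    UUID roundTripped = fromBytes(toBytes(original));
    System.out.println(original.equals(roundTripped)); // true
  }
}
```

ByteBuffer is big-endian by default, which is why the two getLong() calls recover the most- and least-significant halves in order.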

The class has very few dependencies, so it can simply be copied into your own project:

```java
// HandleIdentifier takes a THandleIdentifier, so unwrap the TSessionHandle
// first; getPublicId() returns a UUID, so call toString() for the string form.
String sessionId = new HandleIdentifier(sessHandle.getSessionId()).getPublicId().toString();
```
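Note that sessHandle is a private field of HiveStatement with no public accessor in the Hive versions this post targets, so getting at it from your own code typically means reflection. A minimal sketch, assuming that access; the helper name is mine, not part of any Hive API:

```java
import java.lang.reflect.Field;

public final class FieldPeek {
  // Reads a private instance field by name via reflection. Intended use here:
  //   TSessionHandle h = (TSessionHandle) FieldPeek.read(hiveStatement, "sessHandle");
  // (May require an --add-opens flag on newer JVMs with strong encapsulation.)
  public static Object read(Object target, String fieldName)
      throws ReflectiveOperationException {
    Field f = target.getClass().getDeclaredField(fieldName);
    f.setAccessible(true);
    return f.get(target);
  }
}
```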

Hive then writes each session's logs to a local file (${hive.exec.local.scratchdir}/${user}/${sessionId}) and to an HDFS file (${hive.exec.scratchdir}/${user}/${sessionId}); you can follow those files to track how the SQL is executing.

An even simpler approach is to call HiveStatement.getQueryLog() to fetch the query log.
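In practice the log polling runs on a second thread while the query executes. The sketch below isolates just the polling loop, with the Hive calls stubbed behind suppliers so it can run standalone: hasMoreLogs() and getQueryLog() are real HiveStatement methods (getQueryLog throws SQLException, so a real wrapper must handle it); everything else here is illustrative:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.function.BooleanSupplier;
import java.util.function.Supplier;

public final class QueryLogDrainer {
  // Polls a log source until it reports no more logs, collecting every line.
  // With a real org.apache.hive.jdbc.HiveStatement you would pass
  // stmt::hasMoreLogs and a supplier wrapping stmt.getQueryLog().
  public static List<String> drain(BooleanSupplier hasMore,
                                   Supplier<List<String>> fetch,
                                   long pollMillis) throws InterruptedException {
    List<String> lines = new ArrayList<>();
    while (hasMore.getAsBoolean()) {
      lines.addAll(fetch.get());
      Thread.sleep(pollMillis);
    }
    lines.addAll(fetch.get()); // final fetch after the query completes
    return lines;
  }

  public static void main(String[] args) throws InterruptedException {
    // Stub demo: two batches of log lines, then done.
    ArrayDeque<List<String>> batches = new ArrayDeque<>();
    batches.add(List.of("Query submitted", "Stage-1 running"));
    batches.add(List.of("Stage-1 finished"));
    List<String> got = drain(() -> !batches.isEmpty(),
        () -> batches.isEmpty() ? List.of() : batches.poll(), 1L);
    System.out.println(got.size()); // 3
  }
}
```

The final fetch after the loop matters: logs produced between the last poll and query completion would otherwise be lost.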

Reposted from: https://www.cnblogs.com/lanhj/p/7528347.html
