
My environment: three virtual machines

sparkproject1   192.168.124.110

sparkproject2   192.168.124.111

sparkproject3   192.168.124.112

Reference: https://blog.csdn.net/weixin_38750084/article/details/90650015

First check whether MySQL is running: service mysqld status

(If the mysql service wrapper is not installed, you can also check with /etc/rc.d/init.d/mysqld status.)

If it is not running, start it: service mysqld start
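
A minimal sketch combining the check and the start (SysV service scripts, as used on this CentOS-style box):

# start mysqld only if it is not already running
service mysqld status || service mysqld start
# fallback when the service wrapper is not installed
/etc/rc.d/init.d/mysqld status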

Start MySQL (this is where the Hive metadata is stored):

[root@sparkproject1 hiveTestJar]# service mysql start
mysql: unrecognized service
[root@sparkproject1 hiveTestJar]# service mysqld start
Starting mysqld:                                           [  OK  ]
You have new mail in /var/spool/mail/root
[root@sparkproject1 hiveTestJar]# 

The MySQL installation process done earlier:

MySQL official site: https://dev.mysql.com/downloads/repo/yum/

MySQL installation reference:

https://www.cnblogs.com/zhangwufei/p/6957912.html


root / root (MySQL root account and password)

Create a new MySQL user tang (password tang) and a database tang:

Reference: https://blog.csdn.net/u013216667/article/details/70158452
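
For reference, a minimal sketch of what that boils down to (MySQL 5.x syntax; the root password root is the one listed in the account section below, the rest is an assumption along the lines of the linked article):

mysql -u root -proot <<'SQL'
CREATE DATABASE tang;
-- create user tang with password tang and grant it full rights on the tang database
GRANT ALL PRIVILEGES ON tang.* TO 'tang'@'localhost' IDENTIFIED BY 'tang';
FLUSH PRIVILEGES;
SQL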

Log in: mysql -u tang -p

Exit: quit / exit

Account credentials:

tang/tang

hive/hive

root/root

Allow both tang and root to connect from remote hosts:

GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'root' WITH GRANT OPTION;

(I had not connected to this MySQL instance for a long time and found I could no longer connect from Windows; it reported: Access denied for user 'tang'@'192.168.124.12' (using password: YES). Re-running the statement above fixed it. Reference: https://zhidao.baidu.com/question/420488517.html)
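
As a sketch, the grants for both accounts together (run in the mysql client as root; the passwords are the ones listed above), followed by a privilege flush:

mysql -u root -proot <<'SQL'
-- allow both root and tang to log in from any remote host
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'root' WITH GRANT OPTION;
GRANT ALL PRIVILEGES ON *.* TO 'tang'@'%' IDENTIFIED BY 'tang' WITH GRANT OPTION;
FLUSH PRIVILEGES;
SQL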

Connection successful:

Database changed
mysql> select user,host,password from user;
+------+---------------+-------------------------------------------+
| user | host          | password                                  |
+------+---------------+-------------------------------------------+
| root | localhost     |                                           |
| root | sparkproject1 |                                           |
| root | 127.0.0.1     |                                           |
|      | localhost     |                                           |
|      | sparkproject1 |                                           |
| hive | %             | *4DF1D66463C18D44E3B001A8FB1BBFBEA13E27FC |
| hive | localhost     | *4DF1D66463C18D44E3B001A8FB1BBFBEA13E27FC |
| hive | spark1        | *4DF1D66463C18D44E3B001A8FB1BBFBEA13E27FC |
| hive | sparkproject1 | *4DF1D66463C18D44E3B001A8FB1BBFBEA13E27FC |
| root | %             | *81F5E21E35407D884A6CD4A731AEBFB6AF209E1B |
+------+---------------+-------------------------------------------+
10 rows in set (0.00 sec)
mysql> 

Check hive-site.xml under Hive's conf directory:

<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hive</value>
  <description>username to use against metastore database</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>hive</value>
  <description>password to use against metastore database</description>
</property>
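
A quick sanity check (just a sketch) that the credentials configured above actually work against MySQL:

# the metastore user/password from hive-site.xml should be able to log in
mysql -u hive -phive -e "show databases;"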

Ways to start Hive:

(1) Run hive to start it directly.

(2) Start it as background services, as follows.

You can use ps -ef | grep hive to find the Hive process IDs and then kill the related processes when you need to stop them.

nohup hive --service metastore  2>&1 &  

This starts the metastore service.

nohup  hive --service hiveserver2   2>&1 & 

This starts HiveServer2.

Check the logs to confirm both started correctly.

Note: if HiveServer2 is not running, JDBC clients will not be able to connect.
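
A sketch of starting both services with their output captured to log files and then verifying them (the log paths and the default HiveServer2 port 10000 are assumptions):

# start the metastore and HiveServer2 in the background, keeping their logs
nohup hive --service metastore   > /tmp/hive-metastore.log 2>&1 &
nohup hive --service hiveserver2 > /tmp/hiveserver2.log    2>&1 &

# check that both processes came up
ps -ef | grep hive | grep -v grep

# check that HiveServer2 accepts JDBC connections (default port 10000)
beeline -u jdbc:hive2://sparkproject1:10000 -e "show databases;"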

References:

https://www.jianshu.com/p/7b1b21bf05c2

https://blog.csdn.net/weixin_35353187/article/details/82154151

Test Hive:

First run show functions, then:
hive> desc function sum
sum(x) - Returns the sum of a set of numbers
Time taken: 0.932 seconds, Fetched: 1 row(s)

Creating a database threw an error:

Resources are low on NN. Please add or free up more resources then turn off safe mode manually. NOTE:  If you turn off safe mode before adding resources, the NN will immediately return to safe mode. Use "hdfs dfsadmin -safemode leave" to turn safe mode off.

Probably the disk was running out of space, so I expanded the virtual machine's disk. Reference:

https://blog.csdn.net/weixin_38750084/article/details/90685295

Then I rebooted Linux, started Hadoop, and the statement ran successfully.
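
A sketch of the checks worth running first (standard hdfs dfsadmin commands):

# is the NameNode in safe mode, and how much DFS space is left?
hdfs dfsadmin -safemode get
hdfs dfsadmin -report | head -n 20

# only after adding or freeing space, force the NameNode out of safe mode
hdfs dfsadmin -safemode leave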

Reference: http://wenda.chinahadoop.cn/question/23

Create a table

use db_hive_edu;   -- note the spaces

create table student(id int, name string) row format delimited fields terminated by '\t';

The error and its fix:

> create table student(id int,name string) row format delimited fields terminatedby '\t';
MismatchedTokenException(26!=243)
        at org.antlr.runtime.BaseRecognizer.recoverFromMismatchedToken(BaseRecognizer.java:617) at org.antlr.runtime.BaseRecognizer.match(BaseRecognizer.java:115) at org.apache.hadoop.hive.ql.parse.HiveParser.tableRowFormatFieldIdentifier(HiveParser.java:31755) at org.apache.hadoop.hive.ql.parse.HiveParser.rowFormatDelimited(HiveParser.java:30716) at org.apache.hadoop.hive.ql.parse.HiveParser.tableRowFormat(HiveParser.java:30992) at org.apache.hadoop.hive.ql.parse.HiveParser.createTableStatement(HiveParser.java:4677) at org.apache.hadoop.hive.ql.parse.HiveParser.ddlStatement(HiveParser.java:2138) at org.apache.hadoop.hive.ql.parse.HiveParser.execStatement(HiveParser.java:1392) at org.apache.hadoop.hive.ql.parse.HiveParser.statement(HiveParser.java:1030) at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:199) at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:166) at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:417) at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:335) at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1026) at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1091) at org.apache.hadoop.hive.ql.Driver.run(Driver.java:962) at org.apache.hadoop.hive.ql.Driver.run(Driver.java:952) at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:269) at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:221) at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:431) at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:800) at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:694) at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:633) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
FAILED: ParseException line 1:69 mismatched input 'terminatedby' expecting TERMINATED near 'fields' in table row format's field separator

> create table student(id int,name string) row format delimited fields terminatedby '\\t';
MismatchedTokenException(26!=243)
        at org.antlr.runtime.BaseRecognizer.recoverFromMismatchedToken(BaseRecognizer.java:617) at org.antlr.runtime.BaseRecognizer.match(BaseRecognizer.java:115) at org.apache.hadoop.hive.ql.parse.HiveParser.tableRowFormatFieldIdentifier(HiveParser.java:31755) at org.apache.hadoop.hive.ql.parse.HiveParser.rowFormatDelimited(HiveParser.java:30716) at org.apache.hadoop.hive.ql.parse.HiveParser.tableRowFormat(HiveParser.java:30992) at org.apache.hadoop.hive.ql.parse.HiveParser.createTableStatement(HiveParser.java:4677) at org.apache.hadoop.hive.ql.parse.HiveParser.ddlStatement(HiveParser.java:2138) at org.apache.hadoop.hive.ql.parse.HiveParser.execStatement(HiveParser.java:1392) at org.apache.hadoop.hive.ql.parse.HiveParser.statement(HiveParser.java:1030) at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:199) at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:166) at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:417) at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:335) at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1026) at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1091) at org.apache.hadoop.hive.ql.Driver.run(Driver.java:962) at org.apache.hadoop.hive.ql.Driver.run(Driver.java:952) at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:269) at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:221) at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:431) at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:800) at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:694) at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:633) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
FAILED: ParseException line 1:69 mismatched input 'terminatedby' expecting TERMINATED near 'fields' in table row format's field separator

> create table student(id int,name string) row format delimited fields terminated by '\t';
Time taken: 0.091 seconds

In the end the table was created successfully; the cause of the error was that I had typed terminatedby instead of terminated by.

Load file data into the table

Under /usr/local/hive/hiveTestFile create a file student.txt:

1       zhangsan
2       lisi
3       wangwu 

Then load its contents into HDFS by running this SQL in Hive:

load data local inpath '/usr/local/hive/hiveTestFile/student.txt' into table db_hive_edu.student;

hive> load data local inpath '/usr/local/hive/hiveTestFile/student.txt' into table db_hive_edu.student;
Loading data to table db_hive_edu.student
Table db_hive_edu.student stats: [numFiles=1, numRows=0, totalSize=27, rawDataSize=0]
Time taken: 2.021 seconds
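
For comparison, a sketch of loading data that already sits on HDFS: drop the LOCAL keyword, and note that Hive then moves (rather than copies) the source file into the warehouse. The /tmp path here is only an example.

hive -e "load data inpath '/tmp/student.txt' into table db_hive_edu.student;"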

Check the data; it has been loaded successfully:

hive> show databases;
db_hive_edu
db_hive_edu1
default
Time taken: 0.06 seconds, Fetched: 3 row(s)
hive> use db_hive_edu;
Time taken: 0.013 seconds
hive> show tables;
student
Time taken: 0.033 seconds, Fetched: 1 row(s)
hive> select * from student;
1	zhangsan
2	lisi
3	wangwu
Time taken: 0.68 seconds, Fetched: 3 row(s)

You can also check the directory on HDFS:

/user/hive/warehouse/db_hive_edu.db/student

The file can be downloaded and opened directly.

The metadata can also be checked in MySQL.

Drop a table: hive> drop table student1;

Drop a database: hive> DROP DATABASE IF EXISTS db_hive_edu1;
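
Note that DROP DATABASE fails if the database still contains tables; a sketch of the variant that drops its tables along with it:

hive -e "DROP DATABASE IF EXISTS db_hive_edu1 CASCADE;"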

Default log directory:

hive.log in the /tmp/<user.name> directory; the full path is /tmp/<current user name>/hive.log.

[root@sparkproject1 hive]# cd /tmp/
You have new mail in /var/spool/mail/root
[root@sparkproject1 tmp]# cd root/
[root@sparkproject1 root]# 
[root@sparkproject1 root]# 
[root@sparkproject1 root]# tail -f hive.log
2018-10-23 08:37:19,964 INFO  [HiveServer2-Handler-Pool: Thread-61]: parse.SemanticAnalyzer (SemanticAnalyzer.java:getMetaData(1349)) - Get metadata for subqueries
2018-10-23 08:37:19,965 INFO  [HiveServer2-Handler-Pool: Thread-61]: parse.SemanticAnalyzer (SemanticAnalyzer.java:getMetaData(1373)) - Get metadata for destination tables
2018-10-23 08:37:19,981 INFO  [HiveServer2-Handler-Pool: Thread-61]: parse.SemanticAnalyzer (SemanticAnalyzer.java:analyzeInternal(9419)) - Completed getting MetaData in Semantic Analysis
2018-10-23 08:37:20,009 INFO  [HiveServer2-Handler-Pool: Thread-61]: parse.SemanticAnalyzer (SemanticAnalyzer.java:genFileSinkPlan(6069)) - Set stats collection dir : hdfs://sparkproject1:9000/tmp/hive-root/hive_2018-10-23_08-37-19_936_2568938162445185289-10/-mr-10000/.hive-staging_hive_2018-10-23_08-37-19_936_2568938162445185289-10/-ext-10002
2018-10-23 08:37:20,012 INFO  [HiveServer2-Handler-Pool: Thread-61]: parse.SemanticAnalyzer (SemanticAnalyzer.java:analyzeInternal(9501)) - Completed plan generation
2018-10-23 08:37:20,013 INFO  [HiveServer2-Handler-Pool: Thread-61]: ql.Driver (Driver.java:compile(446)) - Semantic Analysis Completed
2018-10-23 08:37:20,014 INFO  [HiveServer2-Handler-Pool: Thread-61]: ql.Driver (Driver.java:getSchema(245)) - Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:student.id, type:int, comment:null), FieldSchema(name:student.name, type:string, comment:null)], properties:null)
2018-10-23 08:37:20,017 INFO  [HiveServer2-Background-Pool: Thread-141]: ql.Driver (Driver.java:checkConcurrency(165)) - Concurrency mode is disabled, not creating a lock manager
2018-10-23 08:37:20,017 INFO  [HiveServer2-Background-Pool: Thread-141]: ql.Driver (Driver.java:execute(1243)) - Starting command: select * from student
2018-10-23 08:37:20,018 INFO  [HiveServer2-Background-Pool: Thread-141]: ql.Driver (SessionState.java:printInfo(560)) - OK
[root@sparkproject1 root]# 

Hive stores its data on HDFS underneath, and its jobs are managed by YARN:

Run the SQL:

> select count(*) from student;
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1559981808383_0004, Tracking URL = http://sparkproject1:8088/proxy/application_1559981808383_0004/
Kill Command = /usr/local/hadoop/bin/hadoop job  -kill job_1559981808383_0004
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2019-06-08 16:27:33,118 Stage-1 map = 0%,  reduce = 0%
2019-06-08 16:27:41,082 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 2.08 sec
2019-06-08 16:27:52,021 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 3.61 sec
MapReduce Total cumulative CPU time: 3 seconds 610 msec
Ended Job = job_1559981808383_0004
MapReduce Jobs Launched: 
Stage-Stage-1: Map: 1  Reduce: 1   Cumulative CPU: 3.61 sec   HDFS Read: 503 HDFS Write: 2 SUCCESS
Total MapReduce CPU Time Spent: 3 seconds 610 msec
Time taken: 30.097 seconds, Fetched: 1 row(s)
[3]+  Stopped                 hive
You have new mail in /var/spool/mail/root
[root@sparkproject1 local]#

The student table has 3 rows, so why did the count come back as 0? Because I had not specified which database to use:

hive> select count(*) from  db_hive_edu.student;
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1559981808383_0007, Tracking URL = http://sparkproject1:8088/proxy/application_1559981808383_0007/
Kill Command = /usr/local/hadoop/bin/hadoop job  -kill job_1559981808383_0007
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2019-06-08 16:53:28,813 Stage-1 map = 0%,  reduce = 0%
2019-06-08 16:53:35,507 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 1.13 sec
2019-06-08 16:53:44,062 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 2.44 sec
MapReduce Total cumulative CPU time: 2 seconds 440 msec
Ended Job = job_1559981808383_0007
MapReduce Jobs Launched: 
Stage-Stage-1: Map: 1  Reduce: 1   Cumulative CPU: 2.44 sec   HDFS Read: 261 HDFS Write: 2 SUCCESS
Total MapReduce CPU Time Spent: 2 seconds 440 msec
Time taken: 25.149 seconds, Fetched: 1 row(s)
[4]+  Stopped                 hive
You have new mail in /var/spool/mail/root
[root@sparkproject1 local]# 
Hive can also run SQL from a script file with hive -f. The test files live under /usr/local/hive/hiveTestFile:

[root@sparkproject1 hiveTestFile]# pwd
/usr/local/hive/hiveTestFile
[root@sparkproject1 hiveTestFile]# ll
total 20
-rw-r--r-- 1 root root 82 Jun  8 20:00 hivesql.sql
-rw-r--r-- 1 root root 40 Jun  8 17:34 student2.txt
-rw-r--r-- 1 root root 27 Oct 23  2018 student.txt
-rw-r--r-- 1 root root 31 Oct 21  2018 testdata_student.txt
-rw-r--r-- 1 root root 38 Oct 21  2018 users.txt
[root@sparkproject1 hiveTestFile]# 

The script contents are:

select * from  db_hive_edu.student union all select * from  db_hive_edu.student2;
[root@sparkproject1 hiveTestFile]# ll
total 16
-rw-r--r-- 1 root root 40 Jun  8 17:34 student2.txt
-rw-r--r-- 1 root root 27 Oct 23  2018 student.txt
-rw-r--r-- 1 root root 31 Oct 21  2018 testdata_student.txt
-rw-r--r-- 1 root root 38 Oct 21  2018 users.txt
You have new mail in /var/spool/mail/root
[root@sparkproject1 hiveTestFile]# 
[root@sparkproject1 hiveTestFile]# 
[root@sparkproject1 hiveTestFile]# 
[root@sparkproject1 hiveTestFile]# vi hivesql.sql
You have new mail in /var/spool/mail/root
[root@sparkproject1 hiveTestFile]# ll
total 20
-rw-r--r-- 1 root root 56 Jun  8 19:53 hivesql.sql
-rw-r--r-- 1 root root 40 Jun  8 17:34 student2.txt
-rw-r--r-- 1 root root 27 Oct 23  2018 student.txt
-rw-r--r-- 1 root root 31 Oct 21  2018 testdata_student.txt
-rw-r--r-- 1 root root 38 Oct 21  2018 users.txt
[root@sparkproject1 hiveTestFile]# 
[root@sparkproject1 hiveTestFile]# hive -f hivesql.sql 
19/06/08 19:54:03 WARN conf.HiveConf: DEPRECATED: hive.metastore.ds.retry.* no longer has any effect.  Use hive.hmshandler.retry.* instead
Logging initialized using configuration in jar:file:/usr/local/hive/lib/hive-common-0.13.1-cdh5.3.6.jar!/hive-log4j.properties
FAILED: SemanticException [Error 10001]: Line 1:46 Table not found 'student2'
[root@sparkproject1 hiveTestFile]# vi hivesql.sql
You have new mail in /var/spool/mail/root
[root@sparkproject1 hiveTestFile]# hive -f hivesql.sql 
19/06/08 19:55:36 WARN conf.HiveConf: DEPRECATED: hive.metastore.ds.retry.* no longer has any effect.  Use hive.hmshandler.retry.* instead
Logging initialized using configuration in jar:file:/usr/local/hive/lib/hive-common-0.13.1-cdh5.3.6.jar!/hive-log4j.properties
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1559981808383_0010, Tracking URL = http://sparkproject1:8088/proxy/application_1559981808383_0010/
Kill Command = /usr/local/hadoop/bin/hadoop job  -kill job_1559981808383_0010
Hadoop job information for Stage-1: number of mappers: 2; number of reducers: 0
2019-06-08 19:55:49,916 Stage-1 map = 0%,  reduce = 0%
2019-06-08 19:55:59,131 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 3.53 sec
MapReduce Total cumulative CPU time: 3 seconds 530 msec
Ended Job = job_1559981808383_0010
MapReduce Jobs Launched: 
Stage-Stage-1: Map: 2   Cumulative CPU: 3.53 sec   HDFS Read: 537 HDFS Write: 67 SUCCESS
Total MapReduce CPU Time Spent: 3 seconds 530 msec
1	zhangsan2
2	lisi2
3	wangwu2
4	zhaoliu
1	zhangsan
2	lisi
3	wangwu
Time taken: 22.516 seconds, Fetched: 7 row(s)
[root@sparkproject1 hiveTestFile]# 
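
Instead of qualifying every table with the database name inside the script, the Hive CLI can also be pointed at a database up front; a sketch (same script, with unqualified table names):

hive --database db_hive_edu -f hivesql.sql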

To write the result to a file, just append a redirection to a specific file path after the command:

[root@sparkproject1 hiveTestFile]# hive -f hivesql.sql >/usr/local/hive/hiveTestFile/hivesql-result.txt
19/06/08 20:08:42 WARN conf.HiveConf: DEPRECATED: hive.metastore.ds.retry.* no longer has any effect.  Use hive.hmshandler.retry.* instead
Logging initialized using configuration in jar:file:/usr/local/hive/lib/hive-common-0.13.1-cdh5.3.6.jar!/hive-log4j.properties
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1559981808383_0011, Tracking URL = http://sparkproject1:8088/proxy/application_1559981808383_0011/
Kill Command = /usr/local/hadoop/bin/hadoop job  -kill job_1559981808383_0011
Hadoop job information for Stage-1: number of mappers: 2; number of reducers: 0
2019-06-08 20:08:56,892 Stage-1 map = 0%,  reduce = 0%
2019-06-08 20:09:03,631 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 2.37 sec
MapReduce Total cumulative CPU time: 2 seconds 370 msec
Ended Job = job_1559981808383_0011
MapReduce Jobs Launched: 
Stage-Stage-1: Map: 2   Cumulative CPU: 2.37 sec   HDFS Read: 537 HDFS Write: 67 SUCCESS
Total MapReduce CPU Time Spent: 2 seconds 370 msec
Time taken: 21.717 seconds, Fetched: 7 row(s)
You have new mail in /var/spool/mail/root
[root@sparkproject1 hiveTestFile]# ll
total 24
-rw-r--r-- 1 root root 67 Jun  8 20:09 hivesql-result.txt
-rw-r--r-- 1 root root 82 Jun  8 20:00 hivesql.sql
-rw-r--r-- 1 root root 40 Jun  8 17:34 student2.txt
-rw-r--r-- 1 root root 27 Oct 23  2018 student.txt
-rw-r--r-- 1 root root 31 Oct 21  2018 testdata_student.txt
-rw-r--r-- 1 root root 38 Oct 21  2018 users.txt
[root@sparkproject1 hiveTestFile]# cat hivesql-result.txt 
1	zhangsan2
2	lisi2
3	wangwu2
4	zhaoliu
1	zhangsan
2	lisi
3	wangwu
[root@sparkproject1 hiveTestFile]# 
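
Besides redirecting stdout, Hive itself can write a result set to a directory; a sketch using INSERT OVERWRITE LOCAL DIRECTORY (the output directory is only an example, and Hive writes one or more data files into it rather than a single named file):

hive -e "INSERT OVERWRITE LOCAL DIRECTORY '/usr/local/hive/hiveTestFile/union_out'
SELECT * FROM (
  SELECT * FROM db_hive_edu.student
  UNION ALL
  SELECT * FROM db_hive_edu.student2
) t;"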