Introduction: This article walks through installing, configuring, and starting the ELK stack (Elasticsearch, Logstash, Kibana), creating users and roles, and securely sharing selected content such as dashboards. The pitfalls I ran into are all smoothed out here, so one article is enough to get a complete ELK setup running.
ELK Installation, Configuration and Usage Guide
1. Download the packages, extract them, and create a dedicated user
1.1 Download the packages
cd /opt/elk
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.14.0-linux-x86_64.tar.gz
wget https://artifacts.elastic.co/downloads/logstash/logstash-7.14.0.tar.gz
wget https://artifacts.elastic.co/downloads/kibana/kibana-7.14.0-linux-x86_64.tar.gz
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.14.0-linux-x86_64.tar.gz
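Optionally, verify the downloaded archives against the SHA-512 checksums published alongside them (shown here for the Elasticsearch archive; the other packages have matching .sha512 files):
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.14.0-linux-x86_64.tar.gz.sha512
shasum -a 512 -c elasticsearch-7.14.0-linux-x86_64.tar.gz.sha512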
1.2 Extract the packages
tar -xzf elasticsearch-7.14.0-linux-x86_64.tar.gz
tar -xzf logstash-7.14.0.tar.gz
tar -xzf kibana-7.14.0-linux-x86_64.tar.gz
tar -xzf filebeat-7.14.0-linux-x86_64.tar.gz
1.3 Create a dedicated user
Elasticsearch and Kibana must be started by a non-root user, so create one:
groupadd elk
useradd elk -g elk
chown -R elk:elk elasticsearch-7.14.0
chown -R elk:elk kibana-7.14.0-linux-x86_64
2. Configure, install, and start each ELK component
2.1 Elasticsearch
Create the data and log directories and change their ownership to the elk user:
mkdir -pv /data/elk/{data,logs}
chown -R elk:elk /data/elk/
2.1.1 Edit config/elasticsearch.yml (single-node setup with security enabled)
$ vim elasticsearch-7.14.0/config/elasticsearch.yml
path.data: /data/elk/data
path.logs: /data/elk/logs
transport.port: 9301
node.master: true
node.name: node-1 # node name
network.host: 0.0.0.0 # allow access from external IPs
#cluster.initial_master_nodes: ["node-1"] # initial master nodes; not needed with single-node discovery
discovery.seed_hosts: ["127.0.0.1:9301"]
discovery.type: single-node
#http.cors.enabled: true
#http.cors.allow-origin: "*"
#http.cors.allow-headers: Authorization,X-Requested-With,Content-Type,Content-Length
#xpack.security.authc.accept_default_password: true
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
*** Configure the certificates and account passwords as described in Appendix 1 ***
Save the file and restart Elasticsearch.
Then, from the bin directory, run ./elasticsearch-setup-passwords interactive to initialize the passwords.
Three built-in users matter here:
elastic: the built-in superuser
kibana_system: used only by Kibana to connect to and communicate with Elasticsearch; it cannot be used to log in to Kibana
logstash_system: used by Logstash to store monitoring information in Elasticsearch
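To confirm that a password took effect, you can call the authentication API with that user (a quick check; adjust the host and password to your own values):
curl -u elastic:123456 'http://192.168.2.125:9200/_security/_authenticate?pretty'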
2.1.2 Start Elasticsearch
su elk
cd elasticsearch-7.14.0
nohup bin/elasticsearch &
Check the startup log for errors and fix the configuration as the messages suggest:
tail -f nohup.out
Other ways to start it:
./bin/elasticsearch
or in the background:
./bin/elasticsearch -d
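Before the HTTP test below, a quick way to confirm the HTTP and transport ports are listening (ss is assumed here; older systems can use netstat -tlnp instead):
ss -tlnp | grep -E '9200|9301'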
2.1.3 Test
Open http://192.168.2.125:9200/ in a browser and log in with elastic/123456, or query it with curl; a response like the one below means the startup succeeded.
$ curl -u elastic:123456 192.168.2.125:9200
"name" : "node-1",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "MNe414kdQ3ujgZTQh-LNEw",
"version" : {
"number" : "7.14.0",
"build_flavor" : "default",
"build_type" : "tar",
"build_hash" : "dd5a0a2acaa2045ff9624f3729fc8a6f40835aa1",
"build_date" : "2021-07-29T20:49:32.864135063Z",
"build_snapshot" : false,
"lucene_version" : "8.9.0",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
"tagline" : "You Know, for Search"
2.2 Logstash
As root, run a quick smoke test with the sample pipeline (or skip straight to the real configuration below):
cd logstash-7.14.0/
cp config/logstash-sample.conf ./
nohup bin/logstash -f logstash-sample.conf &
Configure and start (as root)
Once Logstash is running, start the Spring Boot service and send test data to its API with Postman; the log entries should then appear in Kibana.
Logstash can also ship events to Redis, with a second Logstash instance reading from Redis and writing them to Elasticsearch; a minimal sketch follows.
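A sketch of that Redis relay, assuming a local Redis on 6379 and a list key named logstash (both placeholders): the shipping instance adds a redis output, and the indexing instance replaces the file input with a redis input.
# shipper pipeline: push events onto a Redis list
output {
  redis {
    host => "127.0.0.1"
    port => 6379
    data_type => "list"
    key => "logstash"
  }
}
# indexer pipeline: pop events from the same list, then output to elasticsearch as in 2.2.1
input {
  redis {
    host => "127.0.0.1"
    port => 6379
    data_type => "list"
    key => "logstash"
  }
}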
2.2.1 Configuration
1. config/logstash.yml
Set the monitoring username and password in logstash.yml:
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.username: "logstash_system"
xpack.monitoring.elasticsearch.password: "123456"
xpack.monitoring.elasticsearch.hosts: ["http://ip:9200"]
2. Pipeline file bin/logstash.conf
input {
  file {
    path => "/opt/elk/test.log"
    start_position => beginning
  }
}
filter {
  json {
    source => "message"
    target => "statinfo"
  }
  date {
    match => [ "[statinfo][t]", "yyyy-MM-dd HH:mm:ss" ]
    target => "@timestamp"
  }
}
output {
  elasticsearch {
    hosts => "192.168.2.125:9200"
    index => "statinfo-%{+YYYY.MM.dd}"
    user => "elastic" # note: not the logstash_system user from config/logstash.yml; this user needs write privileges
    password => "123456"
  }
  stdout {
    codec => rubydebug # also print events to the console
  }
}
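Before starting it for real, the pipeline file can be validated and fed a sample line (a hedged example; the t field matches the date filter above, the other fields are placeholders):
bin/logstash -f bin/logstash.conf --config.test_and_exit
echo '{"t":"2021-08-20 12:00:00","uid":1,"action":"test"}' >> /opt/elk/test.log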
In Kibana, open http://192.168.2.125:5601/app/management/kibana/indexPatterns and create an index pattern [statinfo-*].
2.2.2 Start
Start Logstash as root:
nohup ./bin/logstash -f ./bin/logstash.conf &
2.3 Kibana
1. Configure config/kibana.yml
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://localhost:9200"]
i18n.locale: "zh-CN"
elasticsearch.username: "kibana_system"
elasticsearch.password: "123456"
2. Start Kibana
cd kibana-7.14.0-linux-x86_64
su elk
./bin/kibana
or in the background:
nohup ./bin/kibana &
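To check that Kibana is up before opening a browser, hit its status endpoint (with security enabled it may answer with a redirect or 401, which still shows the service is listening):
curl -I http://192.168.2.125:5601/api/status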
3. Log in and verify
http://192.168.2.125:5601/app/kibana
Log in with the superuser elastic/123456.
The kibana_system account from kibana.yml (kibana_system/123456) is only for Kibana's own connection to Elasticsearch and cannot be used to log in.
4. Create a user and role for sharing a dashboard or other selected content
After logging in to Kibana:
Create a role statinfo restricted to the statinfo-* indices with read-only privileges.
Create a user statinfo and assign it the roles statinfo and kibana_dashboard_only_user.
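If you prefer the API over the UI, the same role and user can be created through the Elasticsearch security API (a sketch matching the settings above; adjust the password):
curl -u elastic:123456 -X PUT 'http://192.168.2.125:9200/_security/role/statinfo' -H 'Content-Type: application/json' -d '{"indices":[{"names":["statinfo-*"],"privileges":["read"]}]}'
curl -u elastic:123456 -X PUT 'http://192.168.2.125:9200/_security/user/statinfo' -H 'Content-Type: application/json' -d '{"password":"123456","roles":["statinfo","kibana_dashboard_only_user"]}'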
2.4 Filebeat (optional; skip it if you do not need it)
As root:
cd filebeat-7.14.0-linux-x86_64
2.4.1 Start
nohup ./filebeat -e -c filebeat.yml &
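If you do use Filebeat, a minimal filebeat.yml sketch looks like this (the path and credentials are placeholders; here Filebeat ships directly to Elasticsearch rather than through Logstash):
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /opt/elk/test.log
output.elasticsearch:
  hosts: ["192.168.2.125:9200"]
  username: "elastic"
  password: "123456"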
3. Other notes
System requirements:
At least 4 GB of RAM.
Raise the process and open-file limits in /etc/security/limits.conf:
# End of file
* soft nproc 262144
* hard nproc 262144
* soft nofile 262144
* hard nofile 262144
root soft nproc 262144
root hard nproc 262144
root soft nofile 262144
root hard nofile 262144
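One more common pitfall: Elasticsearch also expects the kernel setting vm.max_map_count to be at least 262144, otherwise startup can fail its bootstrap checks (run as root; add the same line to /etc/sysctl.conf to make it permanent):
sysctl -w vm.max_map_count=262144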
Appendix 1: Generating certificates for secure login
Reference: https://blog.csdn.net/hhf799954772/article/details/115870012
1. Generate the CA:
./bin/elasticsearch-certutil ca
# press Enter at both prompts (accept the default file name and an empty password)
2. Generate the node certificate:
./bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12
# press Enter at all three prompts
3. Move the certificate into a dedicated directory
# create the directory first
mkdir ./config/certificates
# move the certificate into it
mv ./elastic-certificates.p12 ./config/certificates/
# grant read permissions, otherwise there will be problems
chmod 777 ./config/certificates/elastic-certificates.p12
4. Copy the certificate to every node in the cluster
# move elastic-certificates.p12 to the same path under every node's Elasticsearch installation directory
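For example, assuming a second node reachable as es-node-2 (a placeholder hostname) with the same installation path, the copy can be done with scp:
scp ./config/certificates/elastic-certificates.p12 elk@es-node-2:/opt/elk/elasticsearch-7.14.0/config/certificates/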
5. Edit the configuration file (add this on every Elasticsearch node)
vim ./config/elasticsearch.yml
# add the following settings
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-headers: Authorization,X-Requested-With,Content-Type,Content-Length
xpack.security.enabled: true
xpack.security.authc.accept_default_password: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: /opt/elk/elasticsearch-7.14.0/config/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: /opt/elk/elasticsearch-7.14.0/config/elastic-certificates.p12
6. Add the certificate password to the Elasticsearch keystore (run on every node; needed if the certificate was created with a password, in which case add xpack.security.transport.ssl.truststore.secure_password the same way)
./bin/elasticsearch-keystore add xpack.security.transport.ssl.keystore.secure_password
7. Start the nodes one by one
./bin/elasticsearch -d
8. Set the passwords
./bin/elasticsearch-setup-passwords interactive
# you will be prompted for a number of passwords; remember them, they will be needed later
# passwords are set for the elastic, apm_system, kibana, kibana_system, logstash_system, beats_system and remote_monitoring_user users
Appendix 2: Logstash logging configuration for a Spring Boot project (optional; not needed when Logstash reads from a file as in section 2.2.1)
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE configuration>
<configuration>
<appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
<!--指定logstash ip:监听端口-->
<destination>192.168.2.125:9601</destination>
<encoder charset="UTF-8" class="net.logstash.logback.encoder.LogstashEncoder" />
</appender>
<!--引用springboot默认配置-->
<include resource="org/springframework/boot/logging/logback/base.xml"/>
<root level="INFO">
<!--使用上述订阅logstash数据tcp传输 -->
<appender-ref ref="LOGSTASH" />
<!--使用springboot默认配置 调试窗口输出-->
<appender-ref ref="CONSOLE" />
</root>
</configuration>
Appendix 3: A simple way to embed a Kibana dashboard in a web page
1. Embed the dashboard in a web page with a refresh interval
<!DOCTYPE html>
<style type="text/css">
html, body { margin: 0; padding 0; width: 100%; height: 100%;}
iframe { border: 0; width: 100%; height: 99%; }
</style>
<script language='javascript' type='text/javascript'>
Hello World!
</script>
<iframe src="http://xxxxxxx:5602/app/kibana#/dashboard/c964bfe0-d7f3-11eb-aba4-756408fc1114?embed=true&_g=(refreshInterval%3A(pause%3A!t%2Cvalue%3A0)%2Ctime%3A(from%3Anow-12h%2Cmode%3Aquick%2Cto%3Anow))" height="900" width="1800"></iframe>
</body>
</html>
2. Customization issues when embedding Kibana in a web page
Reference article: "kibana的dashboard内嵌到web中的定制化问题" (customizing a Kibana dashboard embedded in a web page).
Two common requirements:
1. Remove the Add Filter button.
2. Pass custom parameters for filtering or searching.
3. nginx configuration (authorize a specific user through nginx)
Create a dedicated user that can only view the dashboard, since its credentials are placed in the nginx configuration for login-free access.
root@v:/opt/elk# echo -n 'statinfo:123456' | base64
c3RhdGluZm86MTIzNDU2
Add the following to the nginx configuration:
server {
server_name 192.168.2.125;
listen 5602;
#listen 443 ssl;
#ssl on;
#ssl_certificate ssl.cer;
#ssl_certificate_key ssl.pem;
location / {
proxy_set_header Host $proxy_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Authorization "Basic c3RhdGluZm86MTIzNDU2"; # base64-encoded statinfo username:password (generated above) passed as a Basic auth header