- Software install directory (recommended layout)
- I. JDK environment setup (JDK 8 + JDK 11 + JDK 12)
- Starting Elasticsearch 7.x with JDK 12
- II. System configuration (on every machine)
- Allowing remote access (most online guides get this wrong)
- 10. Starting Elasticsearch 7.x with JDK 12
- 11. Starting each Elasticsearch 7.x node
- 12. Checking the cluster status
- 13. Installing plugins
- III. Installing Kibana
- Enabling remote access to Kibana
- Pointing Kibana at the ES address
- Setting the UI language to Chinese
- IV. Installing Logstash
- V. ELK architecture
- VI. Installing and using Filebeat
Cluster overview (VMs). Host requirements: a physical machine running Windows 10 and VMware 15 with 16 GB RAM (8 GB minimum); each VM runs CentOS 7.x with 1 GB RAM and 2 CPU cores.

| Name | IP | Notes | Version |
|---|---|---|---|
| master | 192.168.186.128 | ElasticSearch, plus a hand-built (latest) ElasticSearch-head, Kibana, and assorted plugins | ElasticSearch 7.3.2 |
| node-1 | 192.168.186.129 | ElasticSearch | ElasticSearch 7.3.2 |
| node-2 | 192.168.186.130 | ElasticSearch | ElasticSearch 7.3.2 |
| node-3 | 192.168.186.131 | ElasticSearch | ElasticSearch 7.3.2 |
Software install directory (this layout is recommended):
/usr/local/devops/elk/elasticsearch/elasticsearch-7.3.2
I. JDK environment setup (JDK 8 + JDK 11 + JDK 12)
| Application (e-commerce demo) | ES 7.x | ES 7.3.2 |
|---|---|---|
| JDK 8 | JDK 11+ | JDK 12 |

Can all of these coexist on the same machine?
- Extract JDK 12 to /usr/local/devops/jdk/jdk-12.0.2 using `tar -zxvf <archive>`.

1. Taking JDK 1.8 as the example:

2. Configure the environment variables
```
vi /etc/profile

# java
export JAVA_HOME=/usr/local/devops/jdk/jdk1.8.0_221
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib
```

Apply the configuration:
source /etc/profile
Common problem: version compatibility.
With JDK 1.8 in the development environment, the Elasticsearch 7.3.2 startup log contains:
future versions of Elasticsearch will require Java 11; your Java version from [/opt/jdk1.8.0_211/jre] does not meet this requirement
This is because Elasticsearch depends on the JDK, and each ES release supports specific JDK versions. See:
https://www.elastic.co/cn/support/matrix
https://www.elastic.co/guide/en/elasticsearch/reference/7.2/setup.html

In other words, this Elasticsearch release bundles a JDK, and the bundled JDK is the currently recommended version. If you set JAVA_HOME locally, ES will prefer that configured JDK instead. (Which means you can start ES without installing any JDK at all; I tried it and it works.)
ES recommends an LTS JDK (this is only a recommendation, though JDK 8 is not supported going forward), and it refuses to start on unsupported JDK versions. Which JDK releases are LTS? See https://www.oracle.com/technetwork/java/java-se-support-roadmap.html
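To see which JDK a node actually picks up, the launcher can tell you directly; the version banner includes the JVM it ran on. A quick check, assuming the tarball layout used throughout this guide:

```bash
# with JAVA_HOME unset, ES falls back to its bundled ./jdk
unset JAVA_HOME
/usr/local/devops/elk/elasticsearch/elasticsearch-7.3.2/bin/elasticsearch --version

# with JAVA_HOME exported, that JDK is used instead
export JAVA_HOME=/usr/local/devops/jdk/jdk1.8.0_221
/usr/local/devops/elk/elasticsearch/elasticsearch-7.3.2/bin/elasticsearch --version
```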

The startup message tells us Elasticsearch 7.2 recommends JDK 11, and OpenJDK 11 can be downloaded from the pages referenced above.
Starting Elasticsearch 7.x with a specific JDK 12 is covered in step 10 below.
II. System configuration (perform on every machine)
1. Raise the open-file limits
vi /etc/security/limits.conf
Append:
```
* soft nofile 65536
* hard nofile 65536
* soft nproc 2048
* hard nproc 4096
* soft memlock unlimited
* hard memlock unlimited
```

(ES's bootstrap check reads the soft nofile limit, so set it along with the hard limit.)

2. Raise the allowed process count
vi /etc/security/limits.d/20-nproc.conf

3. Adjust virtual memory & maximum connections
vi /etc/sysctl.conf
Append at the end:
```
vm.max_map_count=655360
fs.file-max=655360
```
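These kernel settings can also be applied and checked immediately, without waiting for the reboot in step 5; a sketch, run as root:

```bash
sysctl -p                             # load the new /etc/sysctl.conf values
sysctl vm.max_map_count fs.file-max   # confirm them
# limits.conf values apply at next login; once the elk user from step 6 exists:
su - elk -c 'ulimit -n -u'
```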
4. Open the ports (while learning, simply disabling the firewall is easier)
```
firewall-cmd --add-port=9200/tcp --permanent
firewall-cmd --add-port=9300/tcp --permanent
firewall-cmd --add-port=9100/tcp --permanent
firewall-cmd --add-port=9000/tcp --permanent
```

Reload the firewall rules:

```
firewall-cmd --reload
```
Disabling the firewall:

CentOS 7 uses firewalld by default.

Check its status:
firewall-cmd --state

Stop it:
systemctl stop firewalld.service

Keep it from starting at boot:
systemctl disable firewalld.service

Also disable SELinux: edit /etc/selinux/config (vi /etc/selinux/config) and change SELINUX=enforcing to SELINUX=disabled.

5. Reboot so everything takes effect
reboot
6. Create a dedicated ELK user (ES officially does not support running as root)
Create the user:
useradd elk



Create the ELK app directory:
mkdir /usr/local/devops/elk/elasticsearch

Create the ELK data directory:
mkdir /usr/local/devops/elk/elasticsearch/data

Create the ELK logs directory:
mkdir /usr/local/devops/elk/elasticsearch/log

- Prerequisite: create the es user's directories on each machine and chown every directory ES needs to that user/group (one-shot sketch below);
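A minimal consolidation of the user and directory steps above, assuming the paths used throughout this guide (run as root):

```bash
useradd elk
mkdir -p /usr/local/devops/elk/elasticsearch/{data,log}
chown -R elk:elk /usr/local/devops/elk
```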
7. Download the package (already fetched from the official site)

```
cd /usr/local/devops/elk/
tar -zxvf elasticsearch-7.3.2-linux-x86_64.tar.gz -C elasticsearch
```

- If a single machine has the resources to run several nodes, copy the directory to node-1, node-2, node-3 instead;
7-1. Elasticsearch 7.x directory layout after extraction (all 4 machines):

bin: scripts, including ES startup & plugin installation
config: elasticsearch.yml (ES config), jvm.options (JVM config), logging config, etc.
jdk: the bundled JDK, JAVA_VERSION="12.0.1"
lib: libraries
logs: log files
modules: all ES modules, including X-Pack
plugins: installed plugins (none by default)
data: created at first startup to store document data; the location is configurable
jvm.options, the JVM configuration file:

```
-Xms1g
-Xmx1g
```
ES installs with a 1 GB heap, which is too small for any real workload. So how much should you set? Recommendation: however much RAM you have, keep each node's heap at or below 32 GB; beyond that you waste memory, hurt CPU performance, and force the GC to manage an enormous heap. If you want a safely reliable setting, 31 GB is a sound choice.
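The heap does not have to be baked into jvm.options: the launcher script honors ES_JAVA_OPTS (its own header documents this), provided the Xms/Xmx lines in jvm.options are commented out first:

```bash
# example: start a node with a 2 GB heap
ES_JAVA_OPTS="-Xms2g -Xmx2g" ./bin/elasticsearch
```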
Startup log notes, two messages deserve attention:
- The local environment is JDK 8; ES warns that future versions will require JDK 11, but this version remains backward compatible.
- [BYSocketdeMacBook-Pro-2.local] started means the local ES node came up successfully.
7-2. Directory permissions:
After extracting, be sure to grant ownership to the elk user again. From the directory containing elasticsearch:

```
chown -R elk:elk elasticsearch
```

As root you can also simply open up read/write for everything: chmod -R 777 <directory>

8. Edit the Elasticsearch 7.x elasticsearch.yml

Settings to change:
cluster.name: one name shared by the whole cluster
node.name: this node's name
path.data: data directories (comma-separate multiple disks)
path.logs: log directory
network.host: this machine's IP
http.port: 9200
discovery.seed_hosts: ["ip1:9300", "ip1:9301", ...]
cluster.initial_master_nodes: ["node1", "node2", ...]
bootstrap.system_call_filter: false
Allowing remote access (most guides online get this wrong):

vi config/elasticsearch.yml

The common advice is to set network.host to 0.0.0.0, which is the wrong way; bind each node to its real IP, as in the configs below.
Edit jvm.options:
- adjust the JVM heap size;
- add any other JVM flags you need.
Switch to the elk user:
su elk

Go to each node's config directory:
cd /usr/local/devops/elk/elasticsearch/elasticsearch-7.3.2/config

Edit the file:
vi elasticsearch.yml
Add to the master's config:

```
# dedicated master settings
node.master: true
node.data: false
node.ingest: false
node.ml: false
cluster.remote.connect: false
# CORS (for elasticsearch-head)
http.cors.enabled: true
http.cors.allow-origin: "*"
```

Add to every data node's config:

```
# data node settings
node.master: false
node.data: true
node.ingest: false
node.ml: false
cluster.remote.connect: false
# CORS
http.cors.enabled: true
http.cors.allow-origin: "*"
```
master: the effective settings in its elasticsearch.yml (everything else keeps the stock commented-out template):

```
cluster.name: array-es-cluster
node.name: master
path.data: /usr/local/devops/elk/elasticsearch/data
path.logs: /usr/local/devops/elk/elasticsearch/log
network.host: 192.168.186.128
http.port: 9200
discovery.seed_hosts: ["192.168.186.128:9300", "192.168.186.130:9300", "192.168.186.131:9300", "192.168.186.129:9300"]
cluster.initial_master_nodes: ["master"]
# dedicated master settings
node.master: true
node.data: false
node.ingest: false
node.ml: false
cluster.remote.connect: false
# CORS
http.cors.enabled: true
http.cors.allow-origin: "*"
gateway.recover_after_nodes: 3
action.destructive_requires_name: true
```
node-1: identical to master except for these lines:

```
node.name: node-1
network.host: 192.168.186.129
# data node settings
node.master: false
node.data: true
```

node-2: the same as node-1 except:

```
node.name: node-2
network.host: 192.168.186.130
```

node-3: the same as node-1 except:

```
node.name: node-3
network.host: 192.168.186.131
```
9. Running Elasticsearch 7.x
Enter the bin directory:
cd /usr/local/devops/elk/elasticsearch/elasticsearch-7.3.2/bin
Troubleshooting: see the common errors below.


10. Starting Elasticsearch 7.x with JDK 12
vi elasticsearch

Add the following lines:
```
# use our own JDK 12
export JAVA_HOME=/usr/local/devops/jdk/jdk-12.0.2
export PATH=$JAVA_HOME/bin:$PATH
# pick the JDK
if [ -x "$JAVA_HOME/bin/java" ]; then
  JAVA="/usr/local/devops/jdk/jdk-12.0.2/bin/java"
else
  JAVA=`which java`
fi
```
The full script:
```
#!/bin/bash

# CONTROLLING STARTUP:
#
# This script relies on a few environment variables to determine startup
# behavior, those variables are:
#
#   ES_PATH_CONF -- Path to config directory
#   ES_JAVA_OPTS -- External Java Opts on top of the defaults set
#
# Optionally, exact memory values can be set using the `ES_JAVA_OPTS`. Note that
# the Xms and Xmx lines in the JVM options file must be commented out. Example
# values are "512m", and "10g".
#
#   ES_JAVA_OPTS="-Xms8g -Xmx8g" ./bin/elasticsearch

# use our own JDK 12
export JAVA_HOME=/usr/local/devops/jdk/jdk-12.0.2
export PATH=$JAVA_HOME/bin:$PATH
# pick the JDK
if [ -x "$JAVA_HOME/bin/java" ]; then
  JAVA="/usr/local/devops/jdk/jdk-12.0.2/bin/java"
else
  JAVA=`which java`
fi

source "`dirname "$0"`"/elasticsearch-env

if [ -z "$ES_TMPDIR" ]; then
  ES_TMPDIR=`"$JAVA" -cp "$ES_CLASSPATH" org.elasticsearch.tools.launchers.TempDirectory`
fi

ES_JVM_OPTIONS="$ES_PATH_CONF"/jvm.options
JVM_OPTIONS=`"$JAVA" -cp "$ES_CLASSPATH" org.elasticsearch.tools.launchers.JvmOptionsParser "$ES_JVM_OPTIONS"`
ES_JAVA_OPTS="${JVM_OPTIONS//\$\{ES_TMPDIR\}/$ES_TMPDIR}"

# manual parsing to find out, if process should be detached
if ! echo $* | grep -E '(^-d |-d$| -d |--daemonize$|--daemonize )' > /dev/null; then
  exec \
    "$JAVA" \
    $ES_JAVA_OPTS \
    -Des.path.home="$ES_HOME" \
    -Des.path.conf="$ES_PATH_CONF" \
    -Des.distribution.flavor="$ES_DISTRIBUTION_FLAVOR" \
    -Des.distribution.type="$ES_DISTRIBUTION_TYPE" \
    -Des.bundled_jdk="$ES_BUNDLED_JDK" \
    -cp "$ES_CLASSPATH" \
    org.elasticsearch.bootstrap.Elasticsearch \
    "$@"
else
  exec \
    "$JAVA" \
    $ES_JAVA_OPTS \
    -Des.path.home="$ES_HOME" \
    -Des.path.conf="$ES_PATH_CONF" \
    -Des.distribution.flavor="$ES_DISTRIBUTION_FLAVOR" \
    -Des.distribution.type="$ES_DISTRIBUTION_TYPE" \
    -Des.bundled_jdk="$ES_BUNDLED_JDK" \
    -cp "$ES_CLASSPATH" \
    org.elasticsearch.bootstrap.Elasticsearch \
    "$@" \
    <&- &
  retval=$?
  pid=$!
  [ $retval -eq 0 ] || exit $retval
  if [ ! -z "$ES_STARTUP_SLEEP_TIME" ]; then
    sleep $ES_STARTUP_SLEEP_TIME
  fi
  if ! ps -p $pid > /dev/null ; then
    exit 1
  fi
  exit 0
fi

exit $?
```
If you see this warning:
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
edit jvm.options:

Comment out: ##-XX:+UseConcMarkSweepGC
and use instead: -XX:+UseG1GC
The heap and GC sections of jvm.options then look like this (the remainder of the file keeps the 7.3.2 defaults):

```
# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space
-Xms1g
-Xmx1g

## GC configuration
##-XX:+UseConcMarkSweepGC
-XX:+UseG1GC
-XX:CMSInitiatingOccupancyFraction=75
-XX:+UseCMSInitiatingOccupancyOnly

## G1GC Configuration
# NOTE: G1GC is only supported on JDK version 10 or later.
# To use G1GC uncomment the lines below.
# 10-:-XX:-UseConcMarkSweepGC
# 10-:-XX:-UseCMSInitiatingOccupancyOnly
# 10-:-XX:+UseG1GC
# 10-:-XX:InitiatingHeapOccupancyPercent=75
```
11. Start each Elasticsearch 7.x node
Enter the bin directory:
cd /usr/local/devops/elk/elasticsearch/elasticsearch-7.3.2/bin/
For the very first start, run in the foreground so the log is easy to watch:
./elasticsearch
If startup fails: ES cannot be started as root.
Switch to the elk user created earlier:
su elk

Startup succeeded:



Open up the directory permissions:

chmod -R 777 elasticsearch
Problem:

Fix: delete everything under the data and log directories, then start again.

From then on, start in the background:
./elasticsearch -d
On success:




12. Check the cluster status
curl -XGET '192.168.186.128:9200/_cluster/health?pretty'
The response:
{"cluster_name" : "array-es-cluster","status" : "green","timed_out" : false,"number_of_nodes" : 4,"number_of_data_nodes" : 0,"active_primary_shards" : 0,"active_shards" : 0,"relocating_shards" : 0,"initializing_shards" : 0,"unassigned_shards" : 0,"delayed_unassigned_shards" : 0,"number_of_pending_tasks" : 0,"number_of_in_flight_fetch" : 0,"task_max_waiting_in_queue_millis" : 0,"active_shards_percent_as_number" : 100.0
}
Inspect each node's process information:
curl -XGET 'http://192.168.186.128:9200/_nodes/process?pretty'

Full response:
{"_nodes" : {"total" : 4,"successful" : 4,"failed" : 0},"cluster_name" : "array-es-cluster","nodes" : {"g9Cauxb_TiusK1sEOqCKHA" : {"name" : "node-1","transport_address" : "192.168.186.129:9300","host" : "192.168.186.129","ip" : "192.168.186.129","version" : "7.3.2","build_flavor" : "default","build_type" : "tar","build_hash" : "1c1faf1","roles" : ["master"],"attributes" : {"xpack.installed" : "true"},"process" : {"refresh_interval_in_millis" : 1000,"id" : 7809,"mlockall" : false}},"vfPW-wmlQjyv23CbJBvQXA" : {"name" : "master","transport_address" : "192.168.186.128:9300","host" : "192.168.186.128","ip" : "192.168.186.128","version" : "7.3.2","build_flavor" : "default","build_type" : "tar","build_hash" : "1c1faf1","roles" : ["master"],"attributes" : {"xpack.installed" : "true"},"process" : {"refresh_interval_in_millis" : 1000,"id" : 8319,"mlockall" : false}},"25goFkZvR4Geakvl2w05Cg" : {"name" : "node-3","transport_address" : "192.168.186.131:9300","host" : "192.168.186.131","ip" : "192.168.186.131","version" : "7.3.2","build_flavor" : "default","build_type" : "tar","build_hash" : "1c1faf1","roles" : ["master"],"attributes" : {"xpack.installed" : "true"},"process" : {"refresh_interval_in_millis" : 1000,"id" : 8112,"mlockall" : false}},"-Az1R8HtRB2hRVP4r0EJjw" : {"name" : "node-2","transport_address" : "192.168.186.130:9300","host" : "192.168.186.130","ip" : "192.168.186.130","version" : "7.3.2","build_flavor" : "default","build_type" : "tar","build_hash" : "1c1faf1","roles" : ["master"],"attributes" : {"xpack.installed" : "true"},"process" : {"refresh_interval_in_millis" : 1000,"id" : 7557,"mlockall" : false}}}
}
Check a single node: curl -XGET 'http://192.168.186.130:9200'
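A compact, human-readable alternative worth knowing is the _cat API (standard in ES 7.x); it prints one line per node with heap, load, roles, and which node is the elected master:

```bash
curl -XGET 'http://192.168.186.128:9200/_cat/nodes?v'
```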
13. Installing plugins
1. Install an ES plugin on all nodes. From the bin directory run:
./elasticsearch-plugin install analysis-icu
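Once the nodes are restarted, a quick smoke test of the new analyzer; this sketch assumes the plugin's icu_analyzer and uses an arbitrary sample text:

```bash
curl -H 'Content-Type: application/json' \
  -XPOST 'http://192.168.186.128:9200/_analyze?pretty' \
  -d '{"analyzer":"icu_analyzer","text":"Elasticsearch 7.3.2"}'
```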


2. Install the IK analyzer (Chinese word segmentation)

You can build it yourself with Maven:

mvn clean package

(mvn clean install works as well.)


Unzip the build output and rename the directory analysis-ik (or elasticsearch-analysis-ik-7.3.2), then place it in the ES plugins directory:
/usr/local/devops/elk/elasticsearch/elasticsearch-7.3.2/plugins
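A sanity check before restarting: ask ES which plugins it can see (run from the bin directory):

```bash
./elasticsearch-plugin list
```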
3. head plugin: https://github.com/mobz/elasticsearch-head
3.1 Download elasticsearch-head-master.zip, unzip it, and upload the directory to all 4 machines; then:
cd /home/elasticsearch-head-master/
elasticsearch-head-master]# curl --silent --location https://rpm.nodesource.com/setup_10.x | bash -
elasticsearch-head-master]# yum install -y nodejs

3.2 Verify the install:

```
elasticsearch-head-master]# node -v
v10.16.0
elasticsearch-head-master]# npm -v
6.9.0
```

3.3 Install grunt:
elasticsearch-head-master]# npm install -g grunt-cli
elasticsearch-head-master]# npm install

If npm install fails, run:

npm install phantomjs-prebuilt@2.1.14 --ignore-scripts

4. Edit Gruntfile.js:
elasticsearch-head-master]# vi Gruntfile.js, and add hostname: '0.0.0.0'

- Edit _site/app.js (elasticsearch-head-master]# vi _site/app.js), changing this.prefs.get("app-base_uri") || "localhost:9200" to the following (only the IP needs to change):

this.base_uri = this.config.base_uri || this.prefs.get("app-base_uri") || "http://192.168.186.128:9200";

6. Start:
elasticsearch-head-master]# npm run start

or in the background:

nohup npm run start &

7. Open it in a browser (head listens on port 9100, opened in the firewall earlier).

14. The cerebro plugin
https://github.com/lmenezes/cerebro/releases
Upload the tarball to the devops directory and extract it:
devops]# tar -zxvf cerebro-0.8.4.tgz

Start it:

The first time, in the foreground: ./cerebro
Afterwards, in the background: nohup ./cerebro &



15. Creating an index from cerebro
1. Create the index

2. Test the IK analyzer.
IK has two segmentation modes:
ik_max_word: splits the text at the finest possible granularity;
ik_smart: splits at the coarsest granularity.
```
POST http://192.168.186.128:9200/_analyze
{
  "analyzer": "ik_max_word",
  "text": "这里是上海地铁站"
}
```
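For comparison, the same sentence through ik_smart, written as a curl version of the request above:

```bash
curl -H 'Content-Type: application/json' \
  -XPOST 'http://192.168.186.128:9200/_analyze?pretty' \
  -d '{"analyzer":"ik_smart","text":"这里是上海地铁站"}'
```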

III. Installing Kibana
1. Create the directory:
elasticsearch/kibana
2. Extract:
tar -zxvf kibana-7.3.2-linux-x86_64.tar.gz -C elasticsearch/kibana/
3. Edit the configuration file
cd /usr/local/devops/elk/elasticsearch/kibana/kibana-7.3.2-linux-x86_64/config
Change the following.

Enable remote access to Kibana:
server.host: 0.0.0.0
Point Kibana at the ES address:
elasticsearch.hosts: ["http://192.168.186.128:9200"]
Set the UI language to Chinese:

i18n.locale: "zh-CN"
Full version: the effective lines in kibana.yml (all other settings keep their stock commented defaults):

```
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://192.168.186.128:9200"]
i18n.locale: "zh-CN"
```

4. Start (switch to the elk user).
As root, grant ownership and read/write permissions:

```
chown -R elk:elk kibana/
chmod -R 777 kibana/
```

Switch to the elk user (su elk) and enter the bin directory:

cd bin/
Foreground start: ./kibana

Background start: nohup ./kibana &

Browse to: http://192.168.186.128:5601


IV. Installing Logstash
1. Download the package (already fetched from the official site)

2. Create a logstash folder under the elk directory

3. Grant it 777 permissions

4. Extract:

tar -zxvf logstash-7.3.2.tar.gz -C logstash

5. In the config directory, copy logstash-7.3.2/config/logstash-sample.conf to logstash.conf and edit it:
cp logstash-sample.conf logstash.conf
vi logstash.conf

After editing (it tails the plain-text log files) the file reads:
```
# Sample Logstash configuration for creating a simple
# Beats -> Logstash -> Elasticsearch pipeline.

input {
  #beats {
  #  port => 5044
  #}
  file {
    path => "/usr/local/devops/elk/array_data/*.log"
    start_position => beginning
    sincedb_path => "/dev/null"
  }
}

filter {
}

output {
  elasticsearch {
    hosts => ["http://192.168.186.128:9200"]
    index => "%{[@metadata][logstash]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    #user => "elastic"
    #password => "changeme"
  }
}
```
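A quick way to exercise the file input once the pipeline is running, assuming the watched path above exists:

```bash
# append a test line, then see which index received it
echo "$(date) hello elk" >> /usr/local/devops/elk/array_data/test.log
curl 'http://192.168.186.128:9200/_cat/indices?v'
```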
Change permissions on the outermost folder so every folder inside inherits them.

Run it. To smoke-test that Logstash starts at all, from the bin directory:

./logstash -e 'input { stdin{} } output { stdout{} }'

Then start with the real config:
./logstash -f config/logstash.conf
If it dies with the message above, the cause is insufficient memory: reduce what running programs need (for example, close some), or add RAM.
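One way to trim Logstash's own footprint is its heap; a sketch assuming the stock 7.3.2 config/jvm.options (which defaults to 1 GB):

```bash
cd /usr/local/devops/elk/logstash/logstash-7.3.2
sed -i 's/^-Xms1g/-Xms512m/; s/^-Xmx1g/-Xmx512m/' config/jvm.options
```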


V. ELK architecture
How the pieces cooperate:
L (Logstash) is the collector: it gathers, parses, and filters logs and supports a large number of input methods. It typically runs in a client/server layout: agents installed on the hosts whose logs you want, and a server that filters and transforms what arrives before forwarding everything to Elasticsearch.
E (Elasticsearch) is the store: it keeps the system log data that Logstash collects.
K (Kibana) is the presenter: it visualizes the data held in ES as web pages, including query languages and plugins for charting metrics.
The ELK tool family
ELK gained Filebeat, a lightweight log-shipping agent. Filebeat uses few resources, which makes it well suited to collecting logs on each server and forwarding them to Logstash; it is also the officially recommended approach.
Filebeat belongs to the Beats family, which currently includes four tools:
1. Packetbeat (network traffic data)
2. Topbeat (system-, process-, and filesystem-level CPU and memory usage)
3. Filebeat (file data)
4. Winlogbeat (Windows event log data)
VI. Installing and using Filebeat
1. Create a folder

2. Extract:
tar -zxvf filebeat-7.3.2-linux-x86_64.tar.gz -C filebeat

3. Start it.
You must switch to root first, otherwise it errors out:

./filebeat -e -c filebeat.yml
-c: config file location
-path.logs: log path
-path.data: data path
-path.home: home path
-e: log to stderr and disable file/syslog output
-d <selectors>: enable debugging for the given selectors; pass a comma-separated list of components, or -d "*" to debug everything (e.g. -d "publish" shows all publish-related messages)
Start Filebeat in the background:

nohup ./filebeat -e -c filebeat.yml >/dev/null 2>&1 &

(discards all stdout and stderr to /dev/null, i.e. no output at all; the variant below keeps a log file)
nohup ./filebeat -e -c filebeat.yml > filebeat.log &
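The guide does not reproduce filebeat.yml itself; a minimal sketch (hypothetical paths, shipping straight to this cluster's ES rather than through Logstash) would look like:

```bash
cat > filebeat.yml <<'EOF'
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /usr/local/devops/elk/array_data/*.log
output.elasticsearch:
  hosts: ["192.168.186.128:9200"]
EOF
./filebeat -e -c filebeat.yml
```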
Syncing MySQL (via the Logstash JDBC input)
mysql.conf:
```
input {
  stdin {}
  jdbc {
    jdbc_connection_string => "jdbc:mysql://192.168.199.170:3306/ugaoxindb?useUnicode=true&characterEncoding=UTF-8"
    jdbc_user => "root"
    jdbc_password => "1111"
    jdbc_driver_library => "/usr/local/devops/elk/array_data/mysql/mysql-connector-java-5.1.7-bin.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_paging_enabled => "true"
    jdbc_page_size => "50000"
    statement => "select * from t_product"
    schedule => "* * * * *"
    # a second statement key would silently override the first one:
    # statement => 'SELECT t1 FROM test_timestamp'
    jdbc_default_timezone => "Asia/Shanghai"
  }
}

output {
  stdout { codec => json_lines }
  elasticsearch {
    hosts => "http://192.168.186.128:9200"
    index => "ugaoxindb_product_%{+YYYY-MM}"
    document_id => "%{id}"
  }
}
```

If you hit this kind of error:
[2019-09-17T11:38:00,100][ERROR][logstash.inputs.jdbc ] Unable to connect to database. Tried 1 times {:error_message=>"Java::JavaSql::SQLException: null, message from server: "Host 'Mr.lan' is not allowed to connect to this MySQL server""}
enable remote connections on the MySQL side:

```
use mysql;
grant all privileges on *.* to root@'%' identified by "password";
flush privileges;
select * from user;
```
A fuller variant with timestamp-based incremental tracking and per-type outputs:

```
# Sample Logstash configuration for creating a simple
# Beats -> Logstash -> Elasticsearch pipeline.

input {
  jdbc {
    # MySQL connection
    jdbc_connection_string => "jdbc:mysql://192.168.199.170:3306/ugaoxindb?useUnicode=true&characterEncoding=utf-8&useSSL=false"
    # credentials
    jdbc_user => "root"
    jdbc_password => "1111"
    # driver
    jdbc_driver_library => "/usr/local/devops/elk/array_data/mysql/mysql-connector-java-5.1.7-bin.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_paging_enabled => "true"
    jdbc_page_size => "50000"
    jdbc_default_timezone => "Asia/Shanghai"
    statement => "select * from t_product"
    # cron-like schedule (minute hour day month weekday); here: sync every minute
    schedule => "* * * * *"
    type => "t_product"  # must match the output conditionals below
    # if true, remember where the last run ended: the tracking_column value is written to last_run_metadata_path
    record_last_run => true
    # if true, track a column of our choosing rather than the run timestamp
    use_column_value => true
    # the tracked column; it must be monotonically increasing, typically the primary key or an update timestamp
    tracking_column => "lastmodifiedTime"
    tracking_column_type => "timestamp"
    last_run_metadata_path => "/usr/local/devops/elk/array_data/mysql/my_info"
    # if true, ignore last_run_metadata_path and re-read everything on every run
    clean_run => false
    # whether to lowercase column names
    lowercase_column_names => false
  }
}

output {
  if [type] == "t_product" {
    elasticsearch {
      hosts => "http://192.168.186.128:9200"
      index => "ugaoxindb_product_%{+YYYY-MM}"
      document_id => "%{id}"
    }
  }
  if [type] == "article" {
    elasticsearch {
      hosts => "http://192.168.186.128:9200"
      index => "article"
      document_id => "%{id}"
    }
  }
}
```
An id-based incremental jdbc input, for tables that only ever grow by primary key:

```
jdbc {
  jdbc_connection_string => "jdbc:mysql://192.168.199.170:3306/test"
  jdbc_user => "root"
  jdbc_password => "1111"
  jdbc_driver_library => "/usr/local/devops/elk/array_data/mysql/mysql-connector-java-5.1.7-bin.jar"
  jdbc_driver_class => "com.mysql.jdbc.Driver"
  codec => plain { charset => "UTF-8" }
  jdbc_paging_enabled => true
  jdbc_page_size => 300
  use_column_value => true
  tracking_column => "id"
  jdbc_default_timezone => "Asia/Shanghai"
  last_run_metadata_path => "/usr/local/devops/elk/array_data/mysql/my_info"
  statement => "select * from t_product where id > :sql_last_value"
  type => "t_product"
}
```
Pointing the environment at the JDK 12 install:

```
export JAVA_CMD="/usr/local/devops/jdk/jdk-12.0.2/bin"
export JAVA_HOME="/usr/local/devops/jdk/jdk-12.0.2"
```
Another error:

```
Error: com.mysql.jdbc.Driver not loaded. Are you sure you've included the correct jdbc driver in :jdbc_driver_library?
Exception: LogStash::PluginLoadingError
Stack: /usr/local/devops/elk/logstash/logstash-7.3.2/vendor/bundle/jruby/2.5.0/gems/logstash-input-jdbc-4.3.16/lib/logstash/plugin_mixins/jdbc/jdbc.rb:190:in `open_jdbc_connection'
/usr/local/devops/elk/logstash/logstash-7.3.2/vendor/bundle/jruby/2.5.0/gems/logstash-input-jdbc-4.3.16/lib/logstash/plugin_mixins/jdbc/jdbc.rb:253:in `execute_statement'
/usr/local/devops/elk/logstash/logstash-7.3.2/vendor/bundle/jruby/2.5.0/gems/logstash-input-jdbc-4.3.16/lib/logstash/inputs/jdbc.rb:309:in `execute_query'
/usr/local/devops/elk/logstash/logstash-7.3.2/vendor/bundle/jruby/2.5.0/gems/logstash-input-jdbc-4.3.16/lib/logstash/inputs/jdbc.rb:281:in `run'
/usr/local/devops/elk/logstash/logstash-7.3.2/logstash-core/lib/logstash/java_pipeline.rb:309:in `inputworker'
/usr/local/devops/elk/logstash/logstash-7.3.2/logstash-core/lib/logstash/java_pipeline.rb:302:in `block in start_input'
[2019-09-17T19:53:02,569][ERROR][logstash.javapipeline    ] A plugin had an unrecoverable error. Will restart this plugin.
Pipeline_id:main
Plugin: <LogStash::Inputs::Jdbc jdbc_user=>"root", jdbc_paging_enabled=>true, jdbc_password=><password>, jdbc_page_size=>50000, statement=>"select id,product_name from t_product", jdbc_driver_library=>"/usr/local/devops/elk/array_data/mysql/mysql-connector-java-5.1.7-bin.jar", jdbc_default_timezone=>"Asia/Shanghai", jdbc_connection_string=>"jdbc:mysql://192.168.186.1:3306/ugaoxin_db?characterEncoding=utf8&useSSL=false&serverTimezone=UTC&rewriteBatchedStatements=true", id=>"79829f5711e386bf06504fdb41f32588d772eb6d592af3ba8e221e3e66c3e58a", jdbc_driver_class=>"com.mysql.jdbc.Driver", type=>"t_product", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_70c7c1fa-712f-4ca7-8c43-5d0113903a77", enable_metric=>true, charset=>"UTF-8">, jdbc_validate_connection=>false, jdbc_validation_timeout=>3600, jdbc_pool_timeout=>5, sql_log_level=>"info", connection_retry_attempts=>1, connection_retry_attempts_wait_time=>0.5, plugin_timezone=>"utc", last_run_metadata_path=>"/root/.logstash_jdbc_last_run", use_column_value=>false, tracking_column_type=>"numeric", clean_run=>false, record_last_run=>true, lowercase_column_names=>true, use_prepared_statements=>false>
Error: com.mysql.jdbc.Driver not loaded. Are you sure you've included the correct jdbc driver in :jdbc_driver_library?
Exception: LogStash::PluginLoadingError
```

Gotcha: besides setting jdbc_driver_library, make sure the MySQL JAR is placed into
/usr/local/devops/elk/logstash/logstash-7.3.2/logstash-core/lib/jars
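In other words (same paths as above):

```bash
cp /usr/local/devops/elk/array_data/mysql/mysql-connector-java-5.1.7-bin.jar \
   /usr/local/devops/elk/logstash/logstash-7.3.2/logstash-core/lib/jars/
```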
The working configuration:

```
input {
  stdin {}
  jdbc {
    jdbc_connection_string => "jdbc:mysql://192.168.186.1:3306/ugaoxin_db?characterEncoding=utf8&useSSL=false&serverTimezone=UTC&rewriteBatchedStatements=true"
    jdbc_user => "root"
    jdbc_password => "1111"
    jdbc_driver_library => "/usr/local/devops/elk/array_data/mysql/mysql-connector-java-5.1.7-bin.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_default_timezone => "Asia/Shanghai"
    jdbc_paging_enabled => "true"
    jdbc_page_size => "50000"
    statement => "select id,product_name from t_product"
    type => "t_product"
  }
}

output {
  stdout { codec => json_lines }
  elasticsearch {
    hosts => "http://192.168.186.128:9200"
    index => "t_product"
    document_id => "%{id}"
  }
}
```
The log output:

```
[2019-09-17T19:55:04,032][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2019-09-17T19:55:04,054][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.3.2"}
[2019-09-17T19:55:06,288][INFO ][org.reflections.Reflections] Reflections took 43 ms to scan 1 urls, producing 19 keys and 39 values
[2019-09-17T19:55:07,562][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://192.168.186.128:9200/]}}
[2019-09-17T19:55:07,870][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://192.168.186.128:9200/"}
[2019-09-17T19:55:08,271][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>7}
[2019-09-17T19:55:08,274][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-09-17T19:55:08,407][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://192.168.186.128:9200"]}
[2019-09-17T19:55:08,789][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge] A gauge metric of an unknown type (org.jruby.RubyArray) has been create for key: cluster_uuids. This may result in invalid serialization. It is recommended to log an issue to the responsible developer/development team.
[2019-09-17T19:55:08,812][INFO ][logstash.javapipeline    ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>250, :thread=>"#<Thread:0x66b75283 run>"}
[2019-09-17T19:55:08,954][INFO ][logstash.outputs.elasticsearch] Using default mapping template
[2019-09-17T19:55:09,102][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
[2019-09-17T19:55:10,695][INFO ][logstash.javapipeline    ] Pipeline started {"pipeline.id"=>"main"}
The stdin plugin is now waiting for input:
[2019-09-17T19:55:10,907][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2019-09-17T19:55:11,892][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2019-09-17T19:55:13,408][INFO ][logstash.inputs.jdbc     ] (0.020172s) select id,product_name from t_product
{"product_name":"中国移动N1","@version":"1","type":"t_product","id":1005,"@timestamp":"2019-09-17T11:55:13.466Z"}
{"product_name":"中国移动N2","@version":"1","type":"t_product","id":1006,"@timestamp":"2019-09-17T11:55:13.482Z"}
{"product_name":"中国移动N66","@version":"1","type":"t_product","id":1007,"@timestamp":"2019-09-17T11:55:13.483Z"}
{"product_name":"中国联通L101","@version":"1","type":"t_product","id":1008,"@timestamp":"2019-09-17T11:55:13.483Z"}
{"product_name":"中国联通L209","@version":"1","type":"t_product","id":1009,"@timestamp":"2019-09-17T11:55:13.483Z"}
{"product_name":"中国联通L9","@version":"1","type":"t_product","id":1010,"@timestamp":"2019-09-17T11:55:13.483Z"}
{"product_name":"中国电信D1","@version":"1","type":"t_product","id":1011,"@timestamp":"2019-09-17T11:55:13.483Z"}
{"product_name":"中国联通D3","@version":"1","type":"t_product","id":1012,"@timestamp":"2019-09-17T11:55:13.483Z"}
{"product_name":"黑米手机","@version":"1","type":"t_product","id":1172540069139988482,"@timestamp":"2019-09-17T11:55:13.484Z"}
{"product_name":"黑米Note","@version":"1","type":"t_product","id":1172550889496498178,"@timestamp":"2019-09-17T11:55:13.484Z"}
{"product_name":"中国电信001","@version":"1","type":"t_product","id":1172675362308489217,"@timestamp":"2019-09-17T11:55:13.484Z"}
{"product_name":"zhongguoyidong","@version":"1","type":"t_product","id":1172676267686756354,"@timestamp":"2019-09-17T11:55:13.484Z"}
```
Incremental updates:

```
input {
  stdin {}
  jdbc {
    jdbc_connection_string => "jdbc:mysql://192.168.186.1:3306/ugaoxin_db?characterEncoding=utf8&useSSL=false&serverTimezone=UTC&rewriteBatchedStatements=true"
    jdbc_user => "root"
    jdbc_password => "1111"
    jdbc_driver_library => "/usr/local/devops/elk/array_data/mysql/mysql-connector-java-5.1.7-bin.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_default_timezone => "Asia/Shanghai"
    jdbc_paging_enabled => "true"
    jdbc_page_size => "50000"
    statement => "select * from t_product where update_time >= :sql_last_value"
    schedule => "* * * * *"
    record_last_run => true
    tracking_column => "update_time"
    tracking_column_type => "timestamp"
    last_run_metadata_path => "/usr/local/devops/elk/array_data/mysql/last_run.log"
    clean_run => false
    lowercase_column_names => false
    type => "t_product"
  }
}

output {
  stdout { codec => json_lines }
  elasticsearch {
    hosts => "http://192.168.186.128:9200"
    index => "t_product"
    document_id => "%{id}"
  }
}
```
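To see what the input remembered between runs (:sql_last_value is read back from the metadata file configured above):

```bash
cat /usr/local/devops/elk/array_data/mysql/last_run.log
```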
