Logserver: Log Server Architecture (Java Logging)
A logging service architecture consists of three main parts: log collection agents, log storage, and log browsing.
This solution is built on the logstash + elasticsearch + kibana stack: logstash handles log collection and ingestion, elasticsearch handles log storage and indexing, and kibana handles log search and the front-end display.
For the actual deployment, the following points matter for keeping the service available and saving resources:
1. logstash ships many socket-based remote input and output plugins, but to avoid interfering with the main business flow it is better to read the log files asynchronously and write them to ES. log4j, for example, has a SocketAppender, but using it means changing the configuration of existing applications, and once the server side fails the application logs have no local file and are easily lost.
2. A logstash agent can write to ES directly, but to reduce the load on ES, redis and a logstash indexer are inserted as intermediate roles; the delivery pipeline gets longer and the pressure on each stage goes down. A minimal agent-side sketch is given right after this list.
3. logstash is Java-based, so to save system resources each physical machine, whether it runs an agent or an indexer, uses a single configuration file and a single daemon process for the service.
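To illustrate points 1 and 2, here is a minimal sketch of what an agent config for one of the log4j applications could look like: it tails the local log file and pushes events into the redis list that the allinone indexer below reads from. The log path /data/logs/account/*.log and the file name log4j_account_agent.conf are assumptions for the example, not taken from the original deployment.

input {
  file {
    # tail the application's local log4j files; no SocketAppender or code change needed
    type => "log4j-account"
    path => ["/data/logs/account/*.log"]
  }
}
output {
  redis {
    # push events into the redis list that allinone_indexer.conf drains
    host => "10.241.223.112"
    data_type => "list"
    key => "log4j-account"
    type => "log4j-account"
  }
}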
cat /usr/local/logserver/logstash/conf/allinone_indexer.conf
input {
  redis {
    host => "10.241.223.112"
    type => "log4j-account"
    data_type => "list"
    key => "log4j-account"
    format => "json_event"
  }
  redis {
    host => "10.241.223.112"
    type => "log4j-activity"
    data_type => "list"
    key => "log4j-activity"
    format => "json_event"
  }
  redis {
    host => "10.241.223.112"
    type => "log4j-databag"
    data_type => "list"
    key => "log4j-databag"
    format => "json_event"
  }
  redis {
    host => "10.241.223.112"
    type => "log4j-document"
    data_type => "list"
    key => "log4j-document"
    format => "json_event"
  }
  redis {
    host => "10.241.223.112"
    type => "log4j-job"
    data_type => "list"
    key => "log4j-job"
    format => "json_event"
  }
  redis {
    host => "10.241.223.112"
    type => "log4j-monitor"
    data_type => "list"
    key => "log4j-monitor"
    format => "json_event"
  }
  redis {
    host => "10.241.223.112"
    type => "log4j-notification"
    data_type => "list"
    key => "log4j-notification"
    format => "json_event"
  }
  redis {
    host => "10.241.223.112"
    type => "log4j-qto"
    data_type => "list"
    key => "log4j-qto"
    format => "json_event"
  }
  redis {
    host => "10.241.223.112"
    type => "log4j-search"
    data_type => "list"
    key => "log4j-search"
    format => "json_event"
  }
  redis {
    host => "10.241.223.112"
    type => "log4j-social"
    data_type => "list"
    key => "log4j-social"
    format => "json_event"
  }
  redis {
    host => "10.241.223.112"
    type => "log4j-storage"
    data_type => "list"
    key => "log4j-storage"
    format => "json_event"
  }
  redis {
    host => "10.241.223.112"
    type => "tomcat-account"
    data_type => "list"
    key => "tomcat-account"
    format => "json_event"
  }
  redis {
    host => "10.241.223.112"
    type => "tomcat-databag"
    data_type => "list"
    key => "tomcat-databag"
    format => "json_event"
  }
  redis {
    host => "10.241.223.112"
    type => "tomcat-monitor"
    data_type => "list"
    key => "tomcat-monitor"
    format => "json_event"
  }
}
filter {
  multiline {
    type => "log4j-account"
    pattern => "^\s"
    what => "previous"
  }
  multiline {
    type => "log4j-activity"
    pattern => "^\s"
    what => "previous"
  }
  multiline {
    type => "log4j-databag"
    pattern => "^\s"
    what => "previous"
  }
  multiline {
    type => "log4j-document"
    pattern => "^\s"
    what => "previous"
  }
  multiline {
    type => "log4j-job"
    pattern => "^\s"
    what => "previous"
  }
  multiline {
    type => "log4j-monitor"
    pattern => "^\s"
    what => "previous"
  }
  multiline {
    type => "log4j-notification"
    pattern => "^\s"
    what => "previous"
  }
  multiline {
    type => "log4j-qto"
    pattern => "^\s"
    what => "previous"
  }
  multiline {
    type => "log4j-search"
    pattern => "^\s"
    what => "previous"
  }
  multiline {
    type => "log4j-social"
    pattern => "^\s"
    what => "previous"
  }
  multiline {
    type => "log4j-storage"
    pattern => "^\s"
    what => "previous"
  }
  multiline {
    type => "tomcat-account"
    pattern => "^\s"
    what => "previous"
  }
  multiline {
    type => "tomcat-databag"
    pattern => "^\s"
    what => "previous"
  }
  multiline {
    type => "tomcat-monitor"
    pattern => "^\s"
    what => "previous"
  }
}
output {
  elasticsearch {
    host => "10.241.223.112"
    cluster => "logstashelasticsearch"
    port => 9300
  }
}
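Two details of this indexer are worth noting. The multiline filter, with pattern "^\s" and what => "previous", appends every line that starts with whitespace to the previous event, which keeps multi-line Java stack traces together as a single log entry. The elasticsearch output talks to 10.241.223.112 on port 9300, the ES transport port; with this output the cluster setting has to match the cluster.name of the target elasticsearch cluster, since the indexer joins the cluster directly rather than going through the HTTP API.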
cat /usr/local/logserver/logstash/conf/nginx_agent.conf
input {
  file {
    type => "nginx"
    path => ["/data/logs/nginx/*.log"]
  }
}
output {
  redis {
    host => "10.241.223.112"
    data_type => "list"
    key => "nginx"
    type => "nginx"
  }
}
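The allinone indexer shown earlier only drains the log4j-* and tomcat-* redis keys, so the nginx events pushed by this agent still need an indexer of their own (or one more redis block in the allinone config). A minimal sketch, reusing the same redis host and ES cluster; the file name nginx_indexer.conf is an assumption:

input {
  redis {
    host => "10.241.223.112"
    type => "nginx"
    data_type => "list"
    key => "nginx"
    format => "json_event"
  }
}
output {
  elasticsearch {
    host => "10.241.223.112"
    cluster => "logstashelasticsearch"
    port => 9300
  }
}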
cat /usr/local/logserver/logstash/startup.sh
#!/bin/bash
workdir=$(cd $(dirname $0);pwd)
#echo $workdir
usage(){
cat <<EOF1
This script starts, stops and restarts the logstash agent and indexer of apps
Usage: $(basename $0) [ agent | indexer | all ] app1 app2 app3 ...
EOF1
}
[ $# -lt 2 ] && usage && exit
# first argument is the start mode; the remaining arguments are app names
start_mode=$1
for appname in "$@";do
    case "$appname" in
        # skip the mode keywords themselves when looping over all arguments
        "agent" )
            continue
            ;;
        "indexer" )
            continue
            ;;
        "all" )
            continue
            ;;
        *)
            # "all" starts both the indexer and the agent config for the app
            if [ "$start_mode" = "all" ]; then
                java -jar $workdir/logstash.jar agent -f $workdir/conf/"$appname"_indexer.conf &
                java -jar $workdir/logstash.jar agent -f $workdir/conf/"$appname"_agent.conf &
            else
                java -jar $workdir/logstash.jar agent -f $workdir/conf/"$appname"_"$start_mode".conf &
            fi
            ;;
    esac
    echo
done
cat /usr/local/logserver/logstash/indexer_startup.sh
#!/bin/bash
#/usr/local/logserver/logstash/startup.sh indexer account activity databag
/usr/local/logserver/logstash/startup.sh indexer allinone
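On the application hosts, the same startup.sh can launch the agent side. A hypothetical agent_startup.sh counterpart (the file name and the app list are assumptions; nginx is the only agent config shown above):

#!/bin/bash
# start the file-tailing logstash agents for the apps deployed on this host
/usr/local/logserver/logstash/startup.sh agent nginx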