Sending Logback Logs to Kafka

1. Sending logs to Kafka with Logback
1.1 Adding the dependencies

Skip this step if the dependencies are already present. In pom.xml:
```xml
<dependency>
    <groupId>com.github.danielwegener</groupId>
    <artifactId>logback-kafka-appender</artifactId>
    <version>0.2.0</version>
    <scope>runtime</scope>
</dependency>
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
    <version>1.2.3</version>
    <scope>runtime</scope>
</dependency>
```

1.2 A simple logback.xml demo
```xml
<configuration>
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>

    <!-- This is the kafkaAppender -->
    <appender name="kafkaAppender" class="com.github.danielwegener.logback.kafka.KafkaAppender">
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
        <topic>logs</topic>
        <keyingStrategy class="com.github.danielwegener.logback.kafka.keying.NoKeyKeyingStrategy" />
        <deliveryStrategy class="com.github.danielwegener.logback.kafka.delivery.AsynchronousDeliveryStrategy" />

        <!-- Optional parameter to use a fixed partition -->
        <!-- <partition>0</partition> -->

        <!-- Optional parameter to include log timestamps into the kafka message -->
        <!-- <appendTimestamp>true</appendTimestamp> -->

        <!-- each <producerConfig> translates to regular kafka-client config (format: key=value) -->
        <!-- producer configs are documented here: https://kafka.apache.org/documentation.html#newproducerconfigs -->
        <!-- bootstrap.servers is the only mandatory producerConfig -->
        <producerConfig>bootstrap.servers=localhost:9092</producerConfig>

        <!-- this is the fallback appender if kafka is not available. -->
        <appender-ref ref="STDOUT" />
    </appender>

    <root level="info">
        <appender-ref ref="kafkaAppender" />
    </root>
</configuration>
```
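With this configuration in place, any ordinary SLF4J logging call ends up on the `logs` topic. A minimal sketch (the class name and messages are placeholders, and it assumes slf4j-api is available on the compile classpath):

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hypothetical demo class: any SLF4J call routed through logback
// is shipped to the "logs" topic by the kafkaAppender above.
public class KafkaLoggingDemo {
    private static final Logger log = LoggerFactory.getLogger(KafkaLoggingDemo.class);

    public static void main(String[] args) {
        log.info("application started");         // sent to Kafka asynchronously
        log.warn("something looks suspicious");  // falls back to STDOUT if Kafka is unavailable
    }
}
```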
1.3 Compatibility
logback-kafka-appender depends on org.apache.kafka:kafka-clients:1.0.0. It can append logs to Kafka brokers of version 0.9.0.0 or later.
The kafka-clients dependency is not shaded, so it can be upgraded to a newer, API-compatible version through a dependency override.
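In Maven, a dependency declared directly in your own pom.xml takes precedence over the transitive one, so pinning a newer client could look like this (a sketch; the version shown is an assumption, check compatibility with your broker first):

```xml
<!-- Overrides the kafka-clients version pulled in transitively by
     logback-kafka-appender; any API-compatible version should work. -->
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>2.0.0</version>
</dependency>
```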
1.4 A complete example
```xml
<configuration>
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <target>System.out</target>
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>
    <appender name="STDERR" class="ch.qos.logback.core.ConsoleAppender">
        <target>System.err</target>
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>

    <!-- This example configuration is probably most unreliable under
         failure conditions but won't block your application at all -->
    <appender name="very-relaxed-and-fast-kafka-appender" class="com.github.danielwegener.logback.kafka.KafkaAppender">
        <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
        <topic>boring-logs</topic>
        <!-- we don't care how the log messages will be partitioned -->
        <keyingStrategy class="com.github.danielwegener.logback.kafka.keying.NoKeyKeyingStrategy" />
        <!-- use async delivery. the application threads are not blocked by logging -->
        <deliveryStrategy class="com.github.danielwegener.logback.kafka.delivery.AsynchronousDeliveryStrategy" />

        <!-- each <producerConfig> translates to regular kafka-client config (format: key=value) -->
        <!-- producer configs are documented here: https://kafka.apache.org/documentation.html#newproducerconfigs -->
        <!-- bootstrap.servers is the only mandatory producerConfig -->
        <producerConfig>bootstrap.servers=localhost:9092</producerConfig>
        <!-- don't wait for a broker to ack the reception of a batch. -->
        <producerConfig>acks=0</producerConfig>
        <!-- wait up to 1000ms and collect log messages before sending them as a batch -->
        <producerConfig>linger.ms=1000</producerConfig>
        <!-- even if the producer buffer runs full, do not block the application but start to drop messages -->
        <producerConfig>max.block.ms=0</producerConfig>
        <!-- define a client-id that you use to identify yourself against the kafka broker -->
        <producerConfig>client.id=${HOSTNAME}-${CONTEXT_NAME}-logback-relaxed</producerConfig>

        <!-- there is no fallback <appender-ref>. If this appender cannot deliver, it will drop its messages. -->
    </appender>

    <!-- This example configuration is more restrictive and will try to ensure that every message
         is eventually delivered in an ordered fashion (as long as the logging application stays alive) -->
    <appender name="very-restrictive-kafka-appender" class="com.github.danielwegener.logback.kafka.KafkaAppender">
        <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
        <topic>important-logs</topic>
        <!-- ensure that every message sent by the executing host is partitioned to the same partition -->
        <keyingStrategy class="com.github.danielwegener.logback.kafka.keying.HostNameKeyingStrategy" />
        <!-- block the logging application thread if the kafka appender cannot keep up with sending the log messages -->
        <deliveryStrategy class="com.github.danielwegener.logback.kafka.delivery.BlockingDeliveryStrategy">
            <!-- wait indefinitely until the kafka producer was able to send the message -->
            <timeout>0</timeout>
        </deliveryStrategy>

        <!-- each <producerConfig> translates to regular kafka-client config (format: key=value) -->
        <!-- producer configs are documented here: https://kafka.apache.org/documentation.html#newproducerconfigs -->
        <!-- bootstrap.servers is the only mandatory producerConfig -->
        <producerConfig>bootstrap.servers=localhost:9092</producerConfig>
        <!-- restrict the size of the buffered batches to 8MB (default is 32MB) -->
        <producerConfig>buffer.memory=8388608</producerConfig>
        <!-- If the kafka broker is not online when we try to log, just block until it becomes available -->
        <producerConfig>metadata.fetch.timeout.ms=99999999999</producerConfig>
        <!-- define a client-id that you use to identify yourself against the kafka broker -->
        <producerConfig>client.id=${HOSTNAME}-${CONTEXT_NAME}-logback-restrictive</producerConfig>
        <!-- use gzip to compress each batch of log messages. valid values: none, gzip, snappy -->
        <producerConfig>compression.type=gzip</producerConfig>

        <!-- Log every log message that could not be sent to kafka to STDERR -->
        <appender-ref ref="STDERR"/>
    </appender>

    <root level="info">
        <appender-ref ref="very-relaxed-and-fast-kafka-appender" />
        <appender-ref ref="very-restrictive-kafka-appender" />
    </root>
</configuration>
```
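Note that the root logger sends every message at info and above to both appenders. If only certain packages deserve the restrictive delivery guarantees, a standard logback <logger> block can route them selectively (a sketch; com.example.billing is a placeholder package name):

```xml
<!-- Route one package's logs only to the restrictive appender;
     additivity="false" keeps them from also reaching the root appenders. -->
<logger name="com.example.billing" level="info" additivity="false">
    <appender-ref ref="very-restrictive-kafka-appender" />
</logger>
```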
1.5 Starting the application to collect logs
Create the topic that will receive the logs, then start the application; its log output will be sent to that Kafka topic.
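To verify the pipeline end to end, any Kafka consumer subscribed to the topic will print the formatted log lines. A minimal sketch using the kafka-clients consumer API (the group id is a placeholder; the topic name matches the simple demo above, and poll(long) is used because the appender pulls in kafka-clients 1.0.0, which predates poll(Duration)):

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class LogTopicConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "log-viewer");         // hypothetical consumer group
        props.put("auto.offset.reset", "earliest");  // read the topic from the beginning
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("logs")); // topic from the demo config
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(1000);
                for (ConsumerRecord<String, String> record : records) {
                    System.out.println(record.value()); // each value is one formatted log line
                }
            }
        }
    }
}
```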
1.6 Project Git repository
https://github.com/danielwegener/logback-kafka-appender