Sending Logback Logs to Kafka

Contents
1. Sending logs to Kafka with logback
    1.1 Add the dependencies
    1.2 Simple logback.xml demo
    1.3 Compatibility
    1.4 Complete example
    1.5 Start the application and collect logs
    1.6 Project Git repository

1. Sending logs to Kafka with logback

1.1 Add the dependencies

Skip this step if the dependencies are already present in your pom.xml:

<dependency>
    <groupId>com.github.danielwegener</groupId>
    <artifactId>logback-kafka-appender</artifactId>
    <version>0.2.0</version>
    <scope>runtime</scope>
</dependency>
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
    <version>1.2.3</version>
    <scope>runtime</scope>
</dependency>

1.2 Simple logback.xml demo

<configuration>

    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>

    <!-- This is the kafkaAppender -->
    <appender name="kafkaAppender" class="com.github.danielwegener.logback.kafka.KafkaAppender">
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
        <topic>logs</topic>
        <keyingStrategy class="com.github.danielwegener.logback.kafka.keying.NoKeyKeyingStrategy" />
        <deliveryStrategy class="com.github.danielwegener.logback.kafka.delivery.AsynchronousDeliveryStrategy" />

        <!-- Optional parameter to use a fixed partition -->
        <!-- <partition>0</partition> -->

        <!-- Optional parameter to include log timestamps into the kafka message -->
        <!-- <appendTimestamp>true</appendTimestamp> -->

        <!-- each <producerConfig> translates to a regular kafka-client config (format: key=value) -->
        <!-- producer configs are documented here: https://kafka.apache.org/documentation.html#newproducerconfigs -->
        <!-- bootstrap.servers is the only mandatory producerConfig -->
        <producerConfig>bootstrap.servers=localhost:9092</producerConfig>

        <!-- this is the fallback appender if kafka is not available. -->
        <appender-ref ref="STDOUT" />
    </appender>

    <root level="info">
        <appender-ref ref="kafkaAppender" />
    </root>

</configuration>
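With this configuration in place, application code does not change at all: the appender plugs into the normal SLF4J/Logback pipeline, and the root logger routes every event to kafkaAppender. Below is a minimal sketch of a producing application; the class name and log messages are illustrative only, and it assumes slf4j-api is on the compile classpath (logback-classic normally pulls it in transitively).

// Minimal application that produces log events for the kafkaAppender above.
// Nothing Kafka-specific appears here: routing to Kafka is decided entirely by logback.xml.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class KafkaLoggingDemo {
    private static final Logger log = LoggerFactory.getLogger(KafkaLoggingDemo.class);

    public static void main(String[] args) {
        log.info("application started");             // ends up on the "logs" topic
        log.warn("low disk space: {} MB left", 512); // parameterized messages work as usual
        try {
            throw new IllegalStateException("boom");
        } catch (IllegalStateException e) {
            log.error("something failed", e);        // stack trace is rendered by the encoder pattern
        }
    }
}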
1.3 Compatibility

logback-kafka-appender depends on org.apache.kafka:kafka-clients:1.0.0. It can append logs to brokers running Kafka 0.9.0.0 or later. The kafka-clients dependency is not shaded, so it can be upgraded to any newer, API-compatible version through a dependency override.

1.4 Complete example

<configuration>

    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <target>System.out</target>
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>

    <appender name="STDERR" class="ch.qos.logback.core.ConsoleAppender">
        <target>System.err</target>
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>

    <!-- This example configuration is probably most unreliable under
         failure conditions but won't block your application at all -->
    <appender name="very-relaxed-and-fast-kafka-appender" class="com.github.danielwegener.logback.kafka.KafkaAppender">
        <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
        <topic>boring-logs</topic>
        <!-- we don't care how the log messages will be partitioned -->
        <keyingStrategy class="com.github.danielwegener.logback.kafka.keying.NoKeyKeyingStrategy" />
        <!-- use async delivery. the application threads are not blocked by logging -->
        <deliveryStrategy class="com.github.danielwegener.logback.kafka.delivery.AsynchronousDeliveryStrategy" />

        <!-- each <producerConfig> translates to a regular kafka-client config (format: key=value) -->
        <!-- producer configs are documented here: https://kafka.apache.org/documentation.html#newproducerconfigs -->
        <!-- bootstrap.servers is the only mandatory producerConfig -->
        <producerConfig>bootstrap.servers=localhost:9092</producerConfig>
        <!-- don't wait for a broker to ack the reception of a batch. -->
        <producerConfig>acks=0</producerConfig>
        <!-- wait up to 1000ms and collect log messages before sending them as a batch -->
        <producerConfig>linger.ms=1000</producerConfig>
        <!-- even if the producer buffer runs full, do not block the application but start to drop messages -->
        <producerConfig>max.block.ms=0</producerConfig>
        <!-- define a client-id that you use to identify yourself against the kafka broker -->
        <producerConfig>client.id=${HOSTNAME}-${CONTEXT_NAME}-logback-relaxed</producerConfig>

        <!-- there is no fallback <appender-ref>. If this appender cannot deliver, it will drop its messages. -->
    </appender>

    <!-- This example configuration is more restrictive and will try to ensure that every message
         is eventually delivered in an ordered fashion (as long as the logging application stays alive) -->
    <appender name="very-restrictive-kafka-appender" class="com.github.danielwegener.logback.kafka.KafkaAppender">
        <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
        <topic>important-logs</topic>
        <!-- ensure that every message sent by the executing host goes to the same partition -->
        <keyingStrategy class="com.github.danielwegener.logback.kafka.keying.HostNameKeyingStrategy" />
        <!-- block the logging application thread if the kafka appender cannot keep up with sending the log messages -->
        <deliveryStrategy class="com.github.danielwegener.logback.kafka.delivery.BlockingDeliveryStrategy">
            <!-- wait indefinitely until the kafka producer was able to send the message -->
            <timeout>0</timeout>
        </deliveryStrategy>

        <!-- each <producerConfig> translates to a regular kafka-client config (format: key=value) -->
        <!-- producer configs are documented here: https://kafka.apache.org/documentation.html#newproducerconfigs -->
        <!-- bootstrap.servers is the only mandatory producerConfig -->
        <producerConfig>bootstrap.servers=localhost:9092</producerConfig>
        <!-- restrict the size of the buffered batches to 8MB (default is 32MB) -->
        <producerConfig>buffer.memory=8388608</producerConfig>
        <!-- If the kafka broker is not online when we try to log, just block until it becomes available -->
        <producerConfig>metadata.fetch.timeout.ms=99999999999</producerConfig>
        <!-- define a client-id that you use to identify yourself against the kafka broker -->
        <producerConfig>client.id=${HOSTNAME}-${CONTEXT_NAME}-logback-restrictive</producerConfig>
        <!-- use gzip to compress each batch of log messages. valid values: none, gzip, snappy -->
        <producerConfig>compression.type=gzip</producerConfig>

        <!-- Log every log message that could not be sent to kafka to STDERR -->
        <appender-ref ref="STDERR"/>
    </appender>

    <root level="info">
        <appender-ref ref="very-relaxed-and-fast-kafka-appender" />
        <appender-ref ref="very-restrictive-kafka-appender" />
    </root>

</configuration>

1.5 Start the application and collect logs

Create the topic(s) that will receive the logs, then start the application; the log messages are sent to the configured topic.
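To verify that events actually arrive, you can attach a plain Kafka consumer to the topic. The sketch below is not part of the original project: it assumes a broker at localhost:9092 and the "logs" topic from the simple demo, the group id is arbitrary, and it uses the poll(long) signature that ships with kafka-clients 1.0.0 (the version the appender depends on).

// Verification helper: prints every formatted log line delivered to the "logs" topic.
// Stop it with Ctrl+C.
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class LogTopicVerifier {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // same broker as in logback.xml
        props.put("group.id", "log-verifier");            // arbitrary consumer group
        props.put("auto.offset.reset", "earliest");       // also read messages sent before we attached
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("logs"));
            while (true) {
                // poll(long) is the signature shipped with kafka-clients 1.0.0
                ConsumerRecords<String, String> records = consumer.poll(1000L);
                for (ConsumerRecord<String, String> record : records) {
                    System.out.println(record.value()); // each value is one formatted log line
                }
            }
        }
    }
}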
1.6 Project Git repository

https://github.com/danielwegener/logback-kafka-appender
