
Deploying Spark Big Data Components with Docker: Configuring log4j Logging

In the previous post, "Deploying Spark Big Data Components with Docker", logs went only to the console. If you also need them written to a file, some further configuration is required.

Sending logs to both the console and a file

1. Stop the Spark cluster

docker-compose down -v
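Note that the -v flag also removes any named volumes declared in the compose file; omit it if those volumes hold data you want to keep.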

2. Create a configuration from the bundled log4j template (in Spark's conf directory)

cp -f log4j2.properties.template log4j2.properties

Edit log4j2.properties as shown below. Note, however, that with this approach the log never rotates: Log4j2's plain File appender has no rollover support, so everything keeps being appended to spark.log indefinitely.

# Set everything to be logged to the console and file

...

rootLogger.appenderRef.file.ref = file

# File appender
appender.file.type = File
appender.file.name = file
appender.file.fileName = spark.log
appender.file.layout.type = PatternLayout
appender.file.layout.pattern = %d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n%ex

3. Enable log rotation

Change

rootLogger.appenderRef.file.ref = file

to

rootLogger.appenderRef.rolling.ref = rolling

Then delete the lines under # File appender and add the following in their place:

# RollingFile appender
appender.rolling.type = RollingFile
appender.rolling.name = rolling
appender.rolling.fileName = logs/spark.log
appender.rolling.filePattern = logs/spark-%d{yyyy-MM-dd}.log
appender.rolling.layout.type = PatternLayout
appender.rolling.layout.pattern = %d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n%ex
appender.rolling.policies.type = Policies
appender.rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.rolling.policies.time.interval = 1
appender.rolling.policies.time.modulate = true
appender.rolling.strategy.type = DefaultRolloverStrategy
appender.rolling.strategy.max = 30
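One caveat worth knowing: with a purely date-based filePattern like the one above, the max setting of DefaultRolloverStrategy has no effect, because max only caps the %i index counter used in size-based patterns. Old daily files therefore accumulate until something removes them. If you want Log4j2 itself to prune them, a Delete action can be attached to the strategy. A minimal sketch, assuming the same logs/ directory and file naming as above:

# prune rotated files older than 30 days at rollover time
appender.rolling.strategy.type = DefaultRolloverStrategy
appender.rolling.strategy.delete.type = Delete
appender.rolling.strategy.delete.basePath = logs
appender.rolling.strategy.delete.ifFileName.type = IfFileName
appender.rolling.strategy.delete.ifFileName.glob = spark-*.log
appender.rolling.strategy.delete.ifLastModified.type = IfLastModified
appender.rolling.strategy.delete.ifLastModified.age = 30d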

Alternatively, you can use the complete configuration template below directly:

cat >log4j2.properties <<'EOF'
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

# Set everything to be logged to the console and rolling file
rootLogger.level = info
rootLogger.appenderRef.stdout.ref = console
rootLogger.appenderRef.rolling.ref = rolling

# Console appender
appender.console.type = Console
appender.console.name = console
appender.console.target = SYSTEM_ERR
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = %d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n%ex

# RollingFile appender
appender.rolling.type = RollingFile
appender.rolling.name = rolling
appender.rolling.fileName = logs/spark.log
appender.rolling.filePattern = logs/spark-%d{yyyy-MM-dd}.log
appender.rolling.layout.type = PatternLayout
appender.rolling.layout.pattern = %d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n%ex
appender.rolling.policies.type = Policies
appender.rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.rolling.policies.time.interval = 1
appender.rolling.policies.time.modulate = true
appender.rolling.strategy.type = DefaultRolloverStrategy
appender.rolling.strategy.max = 30

# Set the default spark-shell/spark-sql log level to WARN. When running the
# spark-shell/spark-sql, the log level for these classes is used to overwrite
# the root logger's log level, so that the user can have different defaults
# for the shell and regular Spark apps.
logger.repl.name = org.apache.spark.repl.Main
logger.repl.level = warn

logger.thriftserver.name = org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver
logger.thriftserver.level = warn

# Settings to quiet third party logs that are too verbose
logger.jetty1.name = org.sparkproject.jetty
logger.jetty1.level = warn
logger.jetty2.name = org.sparkproject.jetty.util.component.AbstractLifeCycle
logger.jetty2.level = error
logger.replexprTyper.name = org.apache.spark.repl.SparkIMain$exprTyper
logger.replexprTyper.level = info
logger.replSparkILoopInterpreter.name = org.apache.spark.repl.SparkILoop$SparkILoopInterpreter
logger.replSparkILoopInterpreter.level = info
logger.parquet1.name = org.apache.parquet
logger.parquet1.level = error
logger.parquet2.name = parquet
logger.parquet2.level = error

# SPARK-9183: Settings to avoid annoying messages when looking up nonexistent UDFs in SparkSQL with Hive support
logger.RetryingHMSHandler.name = org.apache.hadoop.hive.metastore.RetryingHMSHandler
logger.RetryingHMSHandler.level = fatal
logger.FunctionRegistry.name = org.apache.hadoop.hive.ql.exec.FunctionRegistry
logger.FunctionRegistry.level = error

# For deploying Spark ThriftServer
# SPARK-34128: Suppress undesirable TTransportException warnings involved in THRIFT-4805
appender.console.filter.1.type = RegexFilter
appender.console.filter.1.regex = .*Thrift error occurred during processing of message.*
appender.console.filter.1.onMatch = deny
appender.console.filter.1.onMismatch = neutral
EOF
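For the new configuration to reach the containers, log4j2.properties has to end up in Spark's conf directory inside each container, and it helps to persist the logs directory on the host. A sketch of the relevant docker-compose fragment, assuming a service named spark-master and Spark installed at /opt/spark in the image (both hypothetical; match them to the compose file from the previous post):

services:
  spark-master:
    # ... image, ports, etc. as in the previous post
    volumes:
      # overlay the edited config onto the container's default
      - ./log4j2.properties:/opt/spark/conf/log4j2.properties:ro
      # keep the rotated log files on the host
      - ./logs:/opt/spark/logs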

Verify that the configuration takes effect

1. Start the Spark cluster
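Assuming the same compose file as the previous post:

docker-compose up -d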

2. Check the log files
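The file location follows from appender.rolling.fileName above; since the path is relative (logs/spark.log), it resolves against the Spark process's working directory, typically the Spark installation directory. A quick check from the host, assuming a container named spark-master and Spark installed under /opt/spark (both hypothetical; adjust to your compose file):

# follow the current log file inside the container
docker exec -it spark-master tail -f /opt/spark/logs/spark.log

# list rotated files after a day boundary has passed
docker exec -it spark-master ls -l /opt/spark/logs/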
