1. Overview of Collection Methods
Our workloads fall into two categories, frontend and backend, which produce Nginx logs and Java logs respectively. We currently collect both with Filebeat, using its two input types: log and container.
1. container input for Java logs: the Pods must write their logs to stdout/stderr. Filebeat can then read /var/log/containers/*.log directly on each node and ship the events on to Kafka, Logstash, or Elasticsearch.
2. log input for Nginx logs: the Nginx logs are written to a directory on each node, and Filebeat mounts that directory into its own container via hostPath in order to collect them.
The data flow is largely the same for both inputs; the container input has the added benefit of enriching events with Kubernetes metadata.

Data transfer flows:

1. Pod -> /var/log/containers/*.log -> Filebeat -> Logstash -> Elasticsearch -> Kibana

Logstash is the bottleneck in this flow, but it is sufficient for companies with modest log volumes.

2. Pod -> /var/log/containers/*.log -> Filebeat -> Kafka cluster -> Logstash -> Elasticsearch -> Kibana

This is the more common flow in practice. Kafka absorbs the burst of logs produced during traffic peaks and keeps Filebeat from backing up.
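For flow 2, Filebeat would ship to Kafka instead of directly to Logstash. A minimal sketch of what that output section could look like, assuming hypothetical broker addresses and one Kafka topic per log_topics value (the config in this article uses output.logstash, i.e. flow 1):

```yaml
# Hedged sketch: would replace the output.logstash section when a Kafka
# cluster sits between Filebeat and Logstash. Broker IPs are placeholders.
output.kafka:
  hosts: ['192.168.10.131:9092', '192.168.10.132:9092', '192.168.10.133:9092']
  # Route each event to a topic named after its log_topics field
  # (web1, web2, java), so Logstash can consume them separately.
  topic: '%{[fields.log_topics]}'
  partition.round_robin:
    reachable_only: false
  required_acks: 1
  compression: gzip
```

Logstash would then subscribe to those topics with its kafka input plugin instead of the beats input.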
2. RBAC Configuration
[root@k8s-master01 filebeat]# vim rbac.yaml
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
- apiGroups: [""]
  resources:
  - namespaces
  - pods
  verbs:
  - get
  - watch
  - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat

3. ConfigMap
[root@k8s-master01 filebeat]# vim configMap.yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    filebeat.inputs:
    - type: log
      enabled: true
      tags: ["nginx"]
      paths:
      - /data/logs/web1/*.log
      fields:
        log_topics: web1
      exclude_lines: ['index.html']
    - type: log
      enabled: true
      tags: ["nginx"]
      paths:
      - /data/logs/web2/*.log
      fields:
        log_topics: web2      # the log_topics field lets the Logstash rules define a matching index
      exclude_lines: ['index.html']
    - type: container
      enabled: true
      tags: ["java"]
      paths:
      - /var/log/containers/*.log
      fields:
        log_topics: java
      multiline:
        pattern: '^\d{4}-\d{1,2}-\d{1,2}\s\d{1,2}:\d{1,2}:\d{1,2}'   # merge lines, starting each event at a timestamp
        negate: true
        match: after
        max_lines: 500
    processors:
    - add_kubernetes_metadata:
        default_matchers.enabled: true
        host: ${NODE_NAME}
        matchers:
        - logs_path:
            logs_path: "/var/log/containers/"
    - drop_event.when.regexp:
        or:
        - kubernetes.pod.name: "filebeat.*"
        - kubernetes.pod.name: "external-dns.*"
        - kubernetes.pod.name: "coredns.*"
    - script:
        lang: javascript
        id: format_k8s
        tag: enable
        source: >
          function process(event) {
            var k8s = event.Get("kubernetes");
            var newK8s = {
              podName: k8s.pod.name,
              nameSpace: k8s.namespace,
              imageAddr: k8s.container.name,
              hostName: k8s.node.hostname
            };
            event.Put("k8s", newK8s);
          }
    - drop_fields:            # drop Kubernetes fields we no longer need
        ignore_missing: false
        fields:
        - host
        - ecs
        - log
        - prospector
        - agent
        - input
        - beat
        - offset
        - stream
        - container
        - kubernetes
    setup.ilm:
      policy_file: /etc/indice-lifecycle.json   # index lifecycle policy for the Filebeat indices
    output.logstash:
      hosts: ['192.168.10.130:5044']            # ship events to Logstash
---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: kube-system
  name: filebeat-indice-lifecycle
  labels:
    app: filebeat
data:
  indice-lifecycle.json: |-
    {
      "policy": {
        "phases": {
          "hot": {
            "actions": {
              "rollover": {
                "max_size": "5GB",
                "max_age": "1d"
              }
            }
          },
          "delete": {
            "min_age": "10d",
            "actions": {
              "delete": {}
            }
          }
        }
      }
    }

The hot phase rolls an index over at 5 GB or after one day; the delete phase removes indices once they are 10 days old.

4. DaemonSet Deployment
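On the Logstash side, the log_topics field added above can drive per-application indices. A minimal pipeline sketch of that idea, assuming Logstash listens for Beats on port 5044 as configured above and with a placeholder Elasticsearch address (no filtering or grok is shown):

```conf
# Hedged sketch of a Logstash pipeline, e.g. /etc/logstash/conf.d/filebeat.conf.
# The Elasticsearch host is a placeholder.
input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["http://192.168.10.140:9200"]
    # One index per log_topics value (web1, web2, java), rolled daily.
    index => "%{[fields][log_topics]}-%{+YYYY.MM.dd}"
  }
}
```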
[root@k8s-master01 filebeat]# vim filebeat-daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
spec:
  selector:
    matchLabels:
      k8s-app: filebeat
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:7.8.0
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        securityContext:
          runAsUser: 0
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: filebeat-indice-lifecycle
          mountPath: /etc/indice-lifecycle.json
          readOnly: true
          subPath: indice-lifecycle.json
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlog
          mountPath: /var/log
          readOnly: true
        - name: datalog
          mountPath: /data/logs                   # nginx logs mounted in via hostPath
          readOnly: true
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers   # logs for the container input
          readOnly: true
        - name: dockersock
          mountPath: /var/run/docker.sock
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-config
      - name: filebeat-indice-lifecycle
        configMap:
          defaultMode: 0600
          name: filebeat-indice-lifecycle
      - name: varlog
        hostPath:
          path: /var/log
      - name: datalog
        hostPath:
          path: /data/logs
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: dockersock
        hostPath:
          path: /var/run/docker.sock
      - name: data
        hostPath:
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate
If Docker on any node was deployed with a non-default data root, the varlibdockercontainers hostPath must be updated to match.
For example, if the data root was changed to /data/docker, replace /var/lib/docker/containers with /data/docker/containers.
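That adjustment touches both the volume and its mount. A sketch of the two affected stanzas of the DaemonSet, assuming a /data/docker data root; the mountPath mirrors the hostPath so that the symlinks under /var/log/containers still resolve inside the Filebeat container:

```yaml
# Only the paths change; the volume name stays varlibdockercontainers
# so the volumeMounts entry keeps matching its volume.
        volumeMounts:
        - name: varlibdockercontainers
          mountPath: /data/docker/containers   # same path inside and outside the pod
          readOnly: true
      volumes:
      - name: varlibdockercontainers
        hostPath:
          path: /data/docker/containers
```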
#Author : mayi
#wechat : a403182580
Tags: #nginx-log-collection