Deploying a PHP Guestbook Application with Redis on Kubernetes
Introduction
I wanted to try the following hands-on examples from the Tutorials section of the Kubernetes documentation.
Example: Deploying PHP Guestbook application with Redis
Example: Adding logging and metrics to the PHP / Redis Guestbook example
Architecture of the guestbook application
This tutorial walks through building and deploying a simple multi-tier web application with Kubernetes and Docker. The example consists of the following components:
– A single Redis master to store guestbook entries
– Multiple replicated Redis instances to serve reads
– Multiple web frontend instances
Drawing the architecture of the guestbook application to be built as a diagram gives something like the following (as I understand it).

- redis master
- redis slave
- frontend
I will build them in this order.
Deploying the Redis master
Pod
The guestbook application uses Redis to store its data. Guestbook entries are written to the Redis master instance.
I will apply the following manifest, which uses a Deployment to run a single-replica Pod in the cluster.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
  labels:
    app: redis
spec:
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: k8s.gcr.io/redis:e2e
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
$ kubectl apply -f redis-master-deployment.yaml
deployment.apps/redis-master created
$ kubectl get pod
NAME                            READY   STATUS    RESTARTS   AGE
redis-master-6b54579d85-l8hsn   1/1     Running   0          80s
Checking the logs, the Redis logo shows up as ASCII art.
$ kubectl logs redis-master-6b54579d85-l8hsn
                _._
           _.-``__ ''-._
      _.-``    `.  `_.  ''-._           Redis 2.8.19 (00000000/0) 64 bit
  .-`` .-```.  ```\/    _.,_ ''-._
 (    '      ,       .-`  | `,    )     Running in stand alone mode
 |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379
 |    `-._   `._    /     _.-'    |     PID: 1
  `-._    `-._  `-./  _.-'    _.-'
 |`-._`-._    `-.__.-'    _.-'_.-'|
 |    `-._`-._        _.-'_.-'    |           http://redis.io
  `-._    `-._`-.__.-'_.-'    _.-'
 |`-._`-._    `-.__.-'    _.-'_.-'|
 |    `-._`-._        _.-'_.-'    |
  `-._    `-._`-.__.-'_.-'    _.-'
      `-._    `-.__.-'    _.-'
          `-._        _.-'
              `-.__.-'
[1] 23 Aug 12:56:53.934 # Server started, Redis version 2.8.19
[1] 23 Aug 12:56:53.935 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
[1] 23 Aug 12:56:53.935 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
[1] 23 Aug 12:56:53.935 * The server is now ready to accept connections on port 6379
Service
A Service is used to route traffic to the Redis master. Since this traffic stays inside the cluster, a ClusterIP Service is deployed.
apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend
$ kubectl apply -f redis-master-service.yaml
service/redis-master created
$ kubectl get svc
NAME           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
kubernetes     ClusterIP   10.96.0.1       <none>        443/TCP    31d
redis-master   ClusterIP   10.101.227.29   <none>        6379/TCP   10s
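To sanity-check the Service from inside the cluster, a throwaway Pod with redis-cli can be used. This is just a sketch (the redis:5 image tag is my assumption; any image that ships redis-cli will do):
$ kubectl run redis-test --rm -it --restart=Never --image=redis:5 -- redis-cli -h redis-master ping
A PONG reply confirms that the Service name resolves via cluster DNS and that the master answers on port 6379.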
Deploying the Redis slaves
Pod
In the guestbook application, data is read from the slave instances. I am not that familiar with Redis, but the data should be replicated from the master to the slaves. Surely that is how it works.
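For reference, each slave conceptually just points a Redis replica at the master Service. A sketch of what the image does at startup (this is not the actual entrypoint of gb-redisslave:v3):
# Discover the master through the redis-master Service DNS name and replicate from it
redis-server --slaveof redis-master 6379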
A Deployment is used to run a Pod with two replicas.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
  labels:
    app: redis
spec:
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google_samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
        ports:
        - containerPort: 6379
$ kubectl apply -f redis-slave-deployment.yaml
deployment.apps/redis-slave created
$ kubectl get pod
NAME                            READY   STATUS    RESTARTS   AGE
redis-master-6b54579d85-l8hsn   1/1     Running   0          7m10s
redis-slave-799788557c-599w2    1/1     Running   0          31s
redis-slave-799788557c-f2lc6    1/1     Running   0          32s
Service
As with the master, a ClusterIP Service is deployed for the slaves.
apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend
$ kubectl apply -f redis-slave-service.yaml
service/redis-slave created
$ kubectl get svc
NAME           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
kubernetes     ClusterIP   10.96.0.1       <none>        443/TCP    31d
redis-master   ClusterIP   10.101.227.29   <none>        6379/TCP   4m59s
redis-slave    ClusterIP   10.109.67.11    <none>        6379/TCP   9s
Frontend
The guestbook application has a web frontend written in PHP that serves HTTP requests. It is configured to send write requests to the redis-master Service and read requests to the redis-slave Service.
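With GET_HOSTS_FROM set to dns (see the manifest below), the frontend resolves those Service names through cluster DNS. The same lookup can be reproduced from a debug Pod, as a sketch (the busybox image is my assumption):
$ kubectl run dns-test --rm -it --restart=Never --image=busybox -- nslookup redis-master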
Pod
A Deployment is used to run a Pod with three replicas.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  labels:
    app: guestbook
spec:
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v4
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
        ports:
        - containerPort: 80
$ kubectl apply -f frontend-deployment.yaml
deployment.apps/frontend created
$ kubectl get pods -l app=guestbook -l tier=frontend
NAME                        READY   STATUS    RESTARTS   AGE
frontend-56fc5b6b47-fx4gv   1/1     Running   0          3m3s
frontend-56fc5b6b47-qblhv   1/1     Running   0          3m3s
frontend-56fc5b6b47-qlph7   1/1     Running   0          3m3s
Service
To expose the guestbook application to external users, a LoadBalancer Service is deployed.
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
$ kubectl apply -f frontend-service.yaml
service/frontend created
$ kubectl get svc
NAME           TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)        AGE
frontend       LoadBalancer   10.99.203.0     10.20.30.150   80:31257/TCP   4s
kubernetes     ClusterIP      10.96.0.1       <none>         443/TCP        31d
redis-master   ClusterIP      10.101.227.29   <none>         6379/TCP       16m
redis-slave    ClusterIP      10.109.67.11    <none>         6379/TCP       11m
Verifying the application
Connect with a browser from outside the cluster and confirm that the guestbook application works.
As shown below, I was able to connect and to read and write guestbook entries.
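If a cluster has no LoadBalancer integration, port-forwarding the Service is a quick way to test instead; a sketch:
$ kubectl port-forward svc/frontend 8080:80
Then browse to http://localhost:8080.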

Building this yourself is easy because you only need to apply the published manifests, but drawing each role into a diagram and checking it as I went taught me the basics of microservices.
Adding logging and metrics
As the follow-on tutorial, logging and metrics are added to the guestbook application created above.
This tutorial builds on the guestbook tutorial with Redis and PHP. In the same Kubernetes cluster as the guestbook, we deploy the Beats, Elastic's open-source, lightweight data shippers for logs, metrics, and network data. The Beats collect, parse, and index the data into Elasticsearch, and the results can then be viewed and analyzed in Kibana. This example consists of the following components:
– A running instance of the guestbook built with Redis and PHP
– Elasticsearch and Kibana
– Filebeat
– Metricbeat
– Packetbeat
Adding a ClusterRoleBinding
Add a ClusterRoleBinding so that kube-state-metrics and the Beats, which are about to be created, can be deployed into the kube-system namespace.
$ kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin --user=kosuke
clusterrolebinding.rbac.authorization.k8s.io/cluster-admin-binding created
$ kubectl describe clusterrolebindings.rbac.authorization.k8s.io cluster-admin-binding
Name:         cluster-admin-binding
Labels:       <none>
Annotations:  <none>
Role:
  Kind:  ClusterRole
  Name:  cluster-admin
Subjects:
  Kind  Name    Namespace
  ----  ----    ---------
  User  kosuke
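Whether the binding grants the expected rights can be checked with kubectl auth can-i; a sketch:
$ kubectl auth can-i create daemonsets -n kube-system --as kosuke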
Installing kube-state-metrics
kube-state-metrics is a simple service that listens to the Kubernetes API server and generates metrics about the state of the objects.
Incidentally, Red Hat OpenShift uses Prometheus for cluster monitoring.
Clone it from GitHub and apply it.
$ git clone https://github.com/kubernetes/kube-state-metrics.git kube-state-metrics
Cloning into 'kube-state-metrics'...
remote: Enumerating objects: 38, done.
remote: Counting objects: 100% (38/38), done.
remote: Compressing objects: 100% (22/22), done.
remote: Total 19889 (delta 15), reused 24 (delta 10), pack-reused 19851
Receiving objects: 100% (19889/19889), 16.64 MiB | 3.81 MiB/s, done.
Resolving deltas: 100% (12629/12629), done.
$ kubectl apply -f kube-state-metrics/examples/standard
clusterrolebinding.rbac.authorization.k8s.io/kube-state-metrics created
clusterrole.rbac.authorization.k8s.io/kube-state-metrics created
deployment.apps/kube-state-metrics created
serviceaccount/kube-state-metrics created
service/kube-state-metrics created
$ kubectl get pod -n kube-system kube-state-metrics-5c5cb55b4-pf5r5
NAME                                 READY   STATUS    RESTARTS   AGE
kube-state-metrics-5c5cb55b4-pf5r5   1/1     Running   0          31s
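To confirm that kube-state-metrics is actually producing metrics, its Service can be port-forwarded and a few lines of the Prometheus-format output fetched; a sketch (port 8080 matches the standard example manifests):
$ kubectl port-forward -n kube-system svc/kube-state-metrics 8080:8080 &
$ curl -s http://localhost:8080/metrics | head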
Preparing Elasticsearch
Here, the Elasticsearch cloud service (Elastic Cloud) is used.
https://cloud.elastic.co/
I signed in with Google on the login page.

Deploy Elasticsearch.
Choose a cloud platform and region. Here I chose GCP's Tokyo region.

I forgot to take a screenshot, but scroll down, set a name, and click the button labeled something like "Create".
After the following screen appears, note down the password or download it.
It will be used later when creating the Secret.

Wait a while for the deployment to finish.

Once the deployment completes, the following screen appears; note down the Cloud ID.

With that, the Elasticsearch side is ready.
Downloading the Beats packages
I cloned the project from GitHub. Incidentally, it is around 400 MB; in my environment the disk filled up, so I had to start over.
$ git clone https://github.com/elastic/examples.git
Cloning into 'examples'...
remote: Enumerating objects: 63, done.
remote: Counting objects: 100% (63/63), done.
remote: Compressing objects: 100% (55/55), done.
remote: Total 6738 (delta 22), reused 20 (delta 6), pack-reused 6675
Receiving objects: 100% (6738/6738), 124.89 MiB | 4.09 MiB/s, done.
Resolving deltas: 100% (3258/3258), done.
Checking out files: 100% (804/804), done.
$ cd examples/beats-k8s-send-anywhere/
Creating a Secret
The Secret is created from the password and Cloud ID noted down when Elasticsearch was created.
Edit the files in the downloaded package. To be safe, copy the originals first.
$ cp ELASTIC_CLOUD_AUTH ELASTIC_CLOUD_AUTH.org
$ cp ELASTIC_CLOUD_ID ELASTIC_CLOUD_ID.org
After copying, edit them with vi or a similar editor.
Create the Secret from the edited files.
$ kubectl create secret generic dynamic-logging --from-file=./ELASTIC_CLOUD_ID --from-file=./ELASTIC_CLOUD_AUTH --namespace=kube-system
secret/dynamic-logging created
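kubectl describe lists the Secret's keys without decoding the values, which makes it a safe way to confirm what went in:
$ kubectl describe secret dynamic-logging -n kube-system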
Deploying the Beats
A manifest file is provided for each Beat. The manifests use the Secret created above to configure the Beats to connect to the Elasticsearch and Kibana servers.
Deploying Filebeat
Filebeat collects logs from the Kubernetes nodes and from the containers of every Pod running on those nodes. Filebeat is deployed as a DaemonSet.
Apply the manifest.
$ kubectl apply -f filebeat-kubernetes.yaml
configmap/filebeat-dynamic-config created
clusterrolebinding.rbac.authorization.k8s.io/filebeat-dynamic created
clusterrole.rbac.authorization.k8s.io/filebeat-dynamic created
serviceaccount/filebeat-dynamic created
error: unable to recognize "filebeat-kubernetes.yaml": no matches for kind "DaemonSet" in version "extensions/v1beta1"
The DaemonSet fails with an error. The API version appears to be too old.
Let's check the DaemonSet API version on Kubernetes v1.18.
$ kubectl explain DaemonSet | head
KIND: DaemonSet
VERSION: apps/v1
DESCRIPTION:
DaemonSet represents the configuration of a daemon set.
FIELDS:
apiVersion <string>
APIVersion defines the versioned schema of this representation of an
object. Servers should convert recognized schemas to the latest internal
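The group/version the cluster actually serves can also be listed directly:
$ kubectl api-versions | grep apps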
I edited the manifest as follows.
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-dynamic-config
  namespace: kube-system
  labels:
    k8s-app: filebeat-dynamic
    kubernetes.io/cluster-service: "true"
data:
  filebeat.yml: |-
    setup.dashboards.enabled: true
    setup.template.enabled: true
    setup.template.settings:
      index.number_of_shards: 1
    filebeat.modules:
      - module: system
        syslog:
          enabled: true
          #var.paths: ["/var/log/syslog"]
        auth:
          enabled: true
          #var.paths: ["/var/log/authlog"]
    filebeat.autodiscover:
      providers:
        - type: kubernetes
          templates:
            - condition.equals:
                kubernetes.labels.app: redis
              config:
                - module: redis
                  log:
                    input:
                      type: docker
                      containers.ids:
                        - ${data.kubernetes.container.id}
                  slowlog:
                    enabled: true
                    var.hosts: ["${data.host}:${data.port}"]
            - condition.contains:
                kubernetes.labels.tier: frontend
              config:
                - module: apache2
                  access:
                    input:
                      type: docker
                      containers.ids:
                        - ${data.kubernetes.container.id}
                  error:
                    input:
                      type: docker
                      containers.ids:
                        - ${data.kubernetes.container.id}
            - condition.equals:
                kubernetes.labels.app: mysql
              config:
                - module: mysql
                  error:
                    input:
                      type: docker
                      containers.ids:
                        - ${data.kubernetes.container.id}
                  slowlog:
                    input:
                      type: docker
                      containers.ids:
                        - ${data.kubernetes.container.id}
    processors:
      - drop_event:
          when.or:
            - and:
                - regexp:
                    message: '^\d+\.\d+\.\d+\.\d+ '
                - equals:
                    fileset.name: error
            - and:
                - not:
                    regexp:
                      message: '^\d+\.\d+\.\d+\.\d+ '
                - equals:
                    fileset.name: access
      - add_cloud_metadata:
      - add_kubernetes_metadata:
      - add_docker_metadata:
    cloud.auth: ${ELASTIC_CLOUD_AUTH}
    cloud.id: ${ELASTIC_CLOUD_ID}
    output.elasticsearch:
      hosts: ${ELASTICSEARCH_HOSTS}
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}
    setup.kibana:
      host: ${KIBANA_HOST}
---
apiVersion: apps/v1 #changed
kind: DaemonSet
metadata:
  name: filebeat-dynamic
  namespace: kube-system
  labels:
    k8s-app: filebeat-dynamic
    kubernetes.io/cluster-service: "true"
spec:
  selector: #added
    matchLabels: #added
      k8s-app: filebeat-dynamic #added
  template:
    metadata:
      labels:
        k8s-app: filebeat-dynamic
        kubernetes.io/cluster-service: "true"
    spec:
      serviceAccountName: filebeat-dynamic
      terminationGracePeriodSeconds: 30
      containers:
      - name: filebeat-dynamic
        image: docker.elastic.co/beats/filebeat:7.6.2
        imagePullPolicy: Always
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        env:
        - name: ELASTIC_CLOUD_ID
          valueFrom:
            secretKeyRef:
              name: dynamic-logging
              key: ELASTIC_CLOUD_ID
              optional: true
        - name: ELASTIC_CLOUD_AUTH
          valueFrom:
            secretKeyRef:
              name: dynamic-logging
              key: ELASTIC_CLOUD_AUTH
              optional: true
        - name: ELASTICSEARCH_HOSTS
          valueFrom:
            secretKeyRef:
              name: dynamic-logging
              key: ELASTICSEARCH_HOSTS
              optional: true
        - name: KIBANA_HOST
          valueFrom:
            secretKeyRef:
              name: dynamic-logging
              key: KIBANA_HOST
              optional: true
        - name: ELASTICSEARCH_USERNAME
          valueFrom:
            secretKeyRef:
              name: dynamic-logging
              key: ELASTICSEARCH_USERNAME
              optional: true
        - name: ELASTICSEARCH_PASSWORD
          valueFrom:
            secretKeyRef:
              name: dynamic-logging
              key: ELASTICSEARCH_PASSWORD
              optional: true
        securityContext:
          runAsUser: 0
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlog
          mountPath: /var/log
          readOnly: true
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: dockersock
          mountPath: /var/run/docker.sock
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-dynamic-config
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: dockersock
        hostPath:
          path: /var/run/docker.sock
      - name: data
        emptyDir: {}
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: filebeat-dynamic
subjects:
- kind: ServiceAccount
  name: filebeat-dynamic
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: filebeat-dynamic
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: filebeat-dynamic
  labels:
    k8s-app: filebeat-dynamic
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - pods
  verbs:
  - get
  - watch
  - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat-dynamic
  namespace: kube-system
  labels:
    k8s-app: filebeat-dynamic
Apply the edited manifest again.
$ kubectl apply -f filebeat-kubernetes.yaml
configmap/filebeat-dynamic-config unchanged
daemonset.apps/filebeat-dynamic created
clusterrolebinding.rbac.authorization.k8s.io/filebeat-dynamic unchanged
clusterrole.rbac.authorization.k8s.io/filebeat-dynamic unchanged
serviceaccount/filebeat-dynamic unchanged
$ kubectl get pod -n kube-system -l k8s-app=filebeat-dynamic
NAME                     READY   STATUS    RESTARTS   AGE
filebeat-dynamic-2bxv7   1/1     Running   0          119s
filebeat-dynamic-bvs22   1/1     Running   0          119s
This time it worked.
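Tailing the Filebeat logs is a quick way to confirm it is actually shipping to Elastic Cloud rather than just starting up; a sketch:
$ kubectl logs -n kube-system -l k8s-app=filebeat-dynamic --tail=20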
Deploying Metricbeat
Metricbeat uses the same autodiscover setup as Filebeat.
Its API versions are also old, so the manifest needs the same edits as Filebeat's, this time to both the DaemonSet and the Deployment:
- Change the API version
- Add a selector
$ kubectl apply -f metricbeat-kubernetes.yaml
configmap/metricbeat-setup-config created
job.batch/metricbeat-setup created
configmap/metricbeat-daemonset-config created
configmap/metricbeat-daemonset-modules created
daemonset.apps/metricbeat created
configmap/metricbeat-deployment-config created
configmap/metricbeat-deployment-modules created
deployment.apps/metricbeat created
clusterrolebinding.rbac.authorization.k8s.io/metricbeat created
clusterrole.rbac.authorization.k8s.io/metricbeat created
serviceaccount/metricbeat created
$ kubectl get pods -n kube-system -l k8s-app=metricbeat
NAME                          READY   STATUS    RESTARTS   AGE
metricbeat-5b9dfd9f8d-t7j7t   1/1     Running   0          2m
metricbeat-vs7rf              1/1     Running   0          2m
metricbeat-wznq4              1/1     Running   0          2m
Deploying Packetbeat
Packetbeat is configured differently from Filebeat and Metricbeat: instead of matching patterns against container labels, it is configured in terms of the protocols and port numbers of interest.
I applied it after making the same DaemonSet API version change and selector addition.
kubectl apply -f packetbeat-kubernetes.yaml
configmap/packetbeat-dynamic-config created
daemonset.apps/packetbeat-dynamic created
clusterrolebinding.rbac.authorization.k8s.io/packetbeat-dynamic created
clusterrole.rbac.authorization.k8s.io/packetbeat-dynamic created
serviceaccount/packetbeat-dynamic created
That completes the deployment.
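As with the other Beats, the DaemonSet Pods can be listed; the label below is my assumption, following the same naming pattern as filebeat-dynamic:
$ kubectl get pods -n kube-system -l k8s-app=packetbeat-dynamic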
Checking in Kibana
I have hardly used Elasticsearch or Kibana before, so I am not sure this is entirely right, but logs do seem to be coming in.
It was easy to get this far.

Summary
This time I worked through the tutorials hands-on.
Building things yourself is simple if you just follow the manual, but sorting out the actual architecture as I went deepened my understanding.
As for Elasticsearch and Kibana, I can use them now but do not fully understand them yet, so I want to experiment more during the free trial. Ten days left.