The Problem Service Discovery Solves
As the diagram shows, the problem service discovery solves is how to find a specified service by its name.
Background on Service Calls
Under the network model we use today, a caller needs the target's IP and PORT to reach a service. If the target's IP and PORT never changed, service discovery would be unnecessary: the address could simply be hard-coded into the calling code. In reality, though, no server bound to an IP is 100% reliable; machines restart and services get redeployed, so a service typically exposes several IP:PORT pairs. Service discovery can therefore be understood as the following model:
On the public internet, many ports have widely accepted default assignments, for example:
HTTPS: 443
HTTP: 80
Inside a LAN, however, service-to-service calls do not necessarily use the default ports; calls between microservices, for example, often go over ports agreed by convention:
serviceA: 8080
serviceB: 8090
serviceC: 9090
That is the background. From it we can see that service discovery comes down to finding the IP:PORT of the corresponding service through an agreed name.
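To make that concrete, here is a minimal sketch of resolving a name into an IP:PORT endpoint; the service names, addresses, and the in-memory registry are hypothetical stand-ins for a real configuration center:

import random

# Hypothetical registry: service name -> list of (ip, port) endpoints.
REGISTRY = {
    "serviceA": [("10.0.0.11", 8080), ("10.0.0.12", 8080)],
    "serviceB": [("10.0.0.21", 8090)],
    "serviceC": [("10.0.0.31", 9090)],
}

def discover(service_name):
    """Return one (ip, port) endpoint for the given service name."""
    endpoints = REGISTRY.get(service_name)
    if not endpoints:
        raise LookupError("no endpoints registered for %s" % service_name)
    # Naive load balancing: pick one of the registered endpoints at random.
    return random.choice(endpoints)

ip, port = discover("serviceA")
print("calling serviceA at %s:%d" % (ip, port))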
Common Service Discovery Models
Client-Side Service Discovery
Client-side service discovery usually takes the SDK route: the client obtains the configuration of the nodes behind a service through the SDK's interface. Without an SDK, a generic protocol, namely DNS, can be used instead: resolving a domain name yields the backend nodes behind it. Ordinary domain-name resolution, however, only maps a name to IPs; it does not return IP:PORT pairs.
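A sketch of the non-SDK path using the standard library: DNS gives us the backend IPs, while the port still has to come from convention. The domain name below is purely an example of an internal name:

import socket

def resolve_backends(hostname, port):
    """Resolve a service domain name to a list of (ip, port) backends.
    DNS only returns IP addresses, so the port is supplied by convention."""
    infos = socket.getaddrinfo(hostname, port, socket.AF_INET, socket.SOCK_STREAM)
    return sorted({(addr[0], addr[1]) for _, _, _, _, addr in infos})

# "serviceA.internal.example" is a hypothetical internal domain name.
print(resolve_backends("serviceA.internal.example", 8080))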
Server-Side Service Discovery
Server-side service discovery is generally built on a proxy: the proxy layer handles the discovery, and callers simply send their requests through the proxy.
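A rough sketch of the idea, with hypothetical backend addresses (a real proxy such as Nginx or Envoy does this far more robustly): the caller always talks to the proxy, and the proxy picks a live backend and forwards the request:

import random
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical backends the proxy has discovered for "serviceA".
BACKENDS = ["http://10.0.0.11:8080", "http://10.0.0.12:8080"]

class DiscoveryProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        # Pick a backend per request; the caller never needs to know the list.
        backend = random.choice(BACKENDS)
        with urllib.request.urlopen(backend + self.path) as resp:
            body = resp.read()
        self.send_response(200)
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Callers send requests to the proxy on port 8000 instead of any backend directly.
    HTTPServer(("0.0.0.0", 8000), DiscoveryProxy).serve_forever()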
The K8S Service Discovery Model
Kubernetes is the most commonly used container orchestration and scheduling platform today. It has a service discovery model built in, which lets application teams skip implementing this part of the workflow and logic themselves. Because it is so popular and bakes in many standard service discovery design ideas, I will use K8S for a brief walkthrough.
Client-Side Service Discovery in K8S
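In Kubernetes, client-side discovery is mostly invisible to the application: the cluster DNS resolves a Service name to its ClusterIP, and the kubelet also injects environment variables for Services that existed when the Pod started. A minimal sketch, assuming a Service named my-service in the default namespace:

import os
import socket

# Option 1: cluster DNS, following the standard
# <service>.<namespace>.svc.<cluster-domain> naming convention.
ip = socket.gethostbyname("my-service.default.svc.cluster.local")

# Option 2: environment variables such as MY_SERVICE_SERVICE_HOST / _PORT,
# injected for Services that already existed when this Pod started.
host = os.environ.get("MY_SERVICE_SERVICE_HOST", ip)
port = int(os.environ.get("MY_SERVICE_SERVICE_PORT", "80"))

print("my-service reachable via %s:%d" % (host, port))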
Server-Side Service Discovery in K8S (implemented by kube-proxy)
Designing Service Discovery Based on the Container Model
Docker was designed from the start to carry labels and environment variables as a standardized way of describing containers. Inspecting a container with docker inspect <containerid> shows its labels and environment variables. The inspect output looks like this:
[ { "Id": "9af7624e5f06d47c82b700660c9a9f4446d530ee1caa2d5fdf3f6811462df24f", "Created": "2021-03-09T09:17:53.048284414Z", "Path": "/usr/local/bin/dumb-init", "Args": [ "/bin/bash", "-c", "while true;do sleep 5;date;done" ], "State": { "Status": "running", "Running": true, "Paused": false, "Restarting": false, "OOMKilled": false, "Dead": false, "Pid": 7631, "ExitCode": 0, "Error": "", "StartedAt": "2021-03-09T09:17:53.380484326Z", "FinishedAt": "0001-01-01T00:00:00Z" }, "Image": "sha256:a81e90ee27fca06cbdb512c13331b6a5dd59eec549bb9976e0baadea777cb38a", "ResolvConfPath": "/data/docker/containers/9af7624e5f06d47c82b700660c9a9f4446d530ee1caa2d5fdf3f6811462df24f/resolv.conf", "HostnamePath": "/data/docker/containers/9af7624e5f06d47c82b700660c9a9f4446d530ee1caa2d5fdf3f6811462df24f/hostname", "HostsPath": "/data/docker/containers/9af7624e5f06d47c82b700660c9a9f4446d530ee1caa2d5fdf3f6811462df24f/hosts", "LogPath": "/data/docker/containers/9af7624e5f06d47c82b700660c9a9f4446d530ee1caa2d5fdf3f6811462df24f/9af7624e5f06d47c82b700660c9a9f4446d530ee1caa2d5fdf3f6811462df24f-json.log", "Name": "/arch.demo.task.1", "RestartCount": 0, "Driver": "overlay2", "MountLabel": "", "ProcessLabel": "", "AppArmorProfile": "", "ExecIDs": null, "HostConfig": { "Binds": [ "/etc/localtime:/etc/localtime:ro", "/data/logs/archapi:/data/logs/archapi:rw", "/tmp:/tmp:rw" ], "ContainerIDFile": "", "LogConfig": { "Type": "json-file", "Config": {} }, "NetworkMode": "host", "PortBindings": {}, "RestartPolicy": { "Name": "on-failure", "MaximumRetryCount": 10 }, "AutoRemove": false, "VolumeDriver": "", "VolumesFrom": [], "CapAdd": null, "CapDrop": null, "Dns": null, "DnsOptions": null, "DnsSearch": null, "ExtraHosts": null, "GroupAdd": null, "IpcMode": "host", "Cgroup": "", "Links": null, "OomScoreAdj": 0, "PidMode": "", "Privileged": false, "PublishAllPorts": false, "ReadonlyRootfs": false, "SecurityOpt": [ "label=disable" ], "UTSMode": "", "UsernsMode": "", "ShmSize": 67108864, "Runtime": "runc", "ConsoleSize": [ 0, 0 ], "Isolation": "", "CpuShares": 512, "Memory": 1073741824, "CgroupParent": "", "BlkioWeight": 0, "BlkioWeightDevice": null, "BlkioDeviceReadBps": null, "BlkioDeviceWriteBps": null, "BlkioDeviceReadIOps": null, "BlkioDeviceWriteIOps": null, "CpuPeriod": 0, "CpuQuota": 0, "CpusetCpus": "", "CpusetMems": "", "Devices": null, "DiskQuota": 0, "KernelMemory": 0, "MemoryReservation": 0, "MemorySwap": 2147483648, "MemorySwappiness": -1, "OomKillDisable": false, "PidsLimit": 0, "Ulimits": [ { "Name": "nofile", "Hard": 65535, "Soft": 65535 } ], "CpuCount": 0, "CpuPercent": 0, "IOMaximumIOps": 0, "IOMaximumBandwidth": 0 }, "GraphDriver": { "Name": "overlay2", "Data": { "LowerDir": 
"/data/docker/overlay2/b82932468fd5eb10b077c77f9398d64fd5f547f97986c2768cb34429e7a8d7eb-init/diff:/data/docker/overlay2/24756727fea57cd26c3baf4a7c1329006bb91fe72dc048a9e668ac595e446c03/diff:/data/docker/overlay2/3930f8f25b0ec503c719c3a37400d1ac10a23df9dcd1da791daaec595d4f6caf/diff:/data/docker/overlay2/9f05677f9c03a2768bc544083cb67f78e5eccb7c92029eee53e3866f1ff59748/diff:/data/docker/overlay2/ba448992e53b0aef341f62eea58e8ea672c33f34c4fd60ea333f9c167eec1764/diff:/data/docker/overlay2/3e0697e95db4b02138967e3a8dce2c9f866c973c48335fe088b23608f0b687ef/diff:/data/docker/overlay2/9f982949ea768260494ddb46fd94de78165013bd63f7bebef3ba70b7fcd51f38/diff:/data/docker/overlay2/e554b45667fa1b0f900e6db4f280d8d8d5b7c7282375f7f8dd2980848ea54518/diff:/data/docker/overlay2/4bc7e6e8409decf7080e62d6ddad70a2696ce8ca6921ff83c1c7103e94ea73ef/diff:/data/docker/overlay2/9f1fa68da01a9a6ccecf409f4b97babb7dab15c80240cb7fa7941dcd082295fe/diff:/data/docker/overlay2/76bd7d26d6e74efaa08a97adc9f9b8a86a6a7dbe40676eb3964e1fb8114a9ad4/diff:/data/docker/overlay2/dee3e49be43be9d1c7992553e405f4b960d77998d270258cde1915d80a388b16/diff:/data/docker/overlay2/e32c32c0aeb1ce95f5bf2c1304e4388d584dbcb2e9bb76e6dde2bbb80814460a/diff:/data/docker/overlay2/2148da1f1c31317b02e705943606e802151f66ea331ce72787666348de8a93d3/diff:/data/docker/overlay2/f9717cddc3d1cc17f378fcba725784332a2d1ef0e9338b0454834c6031a03542/diff", "MergedDir": "/data/docker/overlay2/b82932468fd5eb10b077c77f9398d64fd5f547f97986c2768cb34429e7a8d7eb/merged", "UpperDir": "/data/docker/overlay2/b82932468fd5eb10b077c77f9398d64fd5f547f97986c2768cb34429e7a8d7eb/diff", "WorkDir": "/data/docker/overlay2/b82932468fd5eb10b077c77f9398d64fd5f547f97986c2768cb34429e7a8d7eb/work" } }, "Mounts": [ { "Source": "/etc/localtime", "Destination": "/etc/localtime", "Mode": "ro", "RW": false, "Propagation": "rprivate" }, { "Source": "/data/logs/archapi", "Destination": "/data/logs/archapi", "Mode": "rw", "RW": true, "Propagation": "rprivate" }, { "Source": "/tmp", "Destination": "/tmp", "Mode": "rw", "RW": true, "Propagation": "rprivate" } ], "Config": { "Hostname": "localhost.localdomain", "Domainname": "", "User": "", "AttachStdin": false, "AttachStdout": false, "AttachStderr": false, "Tty": false, "OpenStdin": false, "StdinOnce": false, "Env": [ "ZYAGENT_HTTPPORT=1", "NAMESPACE=arch.task.http", "ARCHAPI_ROUTE=demo", "ZYAGENT_LOCAL_IP=10.0.2.15", "ZYAGENT_CURRENT_IDC=m5", "ZYAGENT_CURRENT_QCONF_CLUSTER=qconf_online", "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", "TERM=xterm", "DOWNLOAD_URL=;, "PYTHON_VERSION=2.7.18" ], "Cmd": [ "/bin/bash", "-c", "while true;do sleep 5;date;done" ], "Image": "192.168.6.88:5000/python:v2.7.18", "Volumes": { "/data/logs/archapi": {}, "/etc/localtime": {}, "/tmp": {} }, "WorkingDir": "/root", "Entrypoint": [ "/usr/local/bin/dumb-init" ], "OnBuild": null, "Labels": { "com.docker.compose.config-hash": "aef76130d77c8e42a5cf1686279d754be2121aaee336f39511eaf451a2ccae90", "com.docker.compose.container-number": "1", "com.docker.compose.oneoff": "False", "com.docker.compose.project": "demo", "com.docker.compose.service": "task.1", "com.docker.compose.version": "1.6.2", "group": "arch", "org.label-schema.build-date": "20181204", "org.label-schema.license": "GPLv2", "org.label-schema.name": "CentOS Base Image", "org.label-schema.schema-version": "1.0", "org.label-schema.vendor": "CentOS", "service": "task" } }, "NetworkSettings": { "Bridge": "", "SandboxID": "fc6427258fdac18a956577716e67e1f3c390cd66771342372812ccb07880eace", "HairpinMode": false, 
"LinkLocalIPv6Address": "", "LinkLocalIPv6PrefixLen": 0, "Ports": {}, "SandboxKey": "/var/run/docker/netns/default", "SecondaryIPAddresses": null, "SecondaryIPv6Addresses": null, "EndpointID": "", "Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "IPAddress": "", "IPPrefixLen": 0, "IPv6Gateway": "", "MacAddress": "", "Networks": { "host": { "IPAMConfig": null, "Links": null, "Aliases": null, "NetworkID": "b87cedffcbded55524330350cec581cf3f86b4684fe7ab1f54cafa6aef1b17a6", "EndpointID": "b73d603ec2bf5b5577937e2d2d59f81e47bf7bb36111c4e910ad1390e59828d5", "Gateway": "", "IPAddress": "", "IPPrefixLen": 0, "IPv6Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "MacAddress": "" } } } }]
The entries under Env and Labels are the configuration of the containerized service, and container-based service discovery schemes generally build on labels and env. For example, we can put the service name and service port into labels, then run an external process that periodically scans the containers' configuration, extracts those entries, and writes them into the configuration center. That completes the first step of service discovery: service registration.
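A minimal sketch of such an agent using the official docker Python SDK; the label names (service, service_port) and the register_endpoint() function are assumptions standing in for whatever labeling convention and configuration center you actually use:

import time
import docker  # pip install docker

def register_endpoint(name, ip, port):
    """Placeholder for writing the endpoint into a configuration center
    (etcd, ZooKeeper, Consul, ...); here we just print it."""
    print("register %s -> %s:%s" % (name, ip, port))

def scan_once(client, host_ip):
    for container in client.containers.list():
        labels = container.labels  # same data as "Labels" in docker inspect
        name = labels.get("service")
        port = labels.get("service_port")
        if name and port:
            register_endpoint(name, host_ip, port)

if __name__ == "__main__":
    client = docker.from_env()
    while True:
        # Periodically rescan running containers, as the agent described above would.
        scan_once(client, host_ip="10.0.2.15")  # host IP as seen in ZYAGENT_LOCAL_IP above
        time.sleep(10)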
Hard Problems in Service Discovery
The models above only describe how a service gets registered into the configuration center through a standardized convention. A service discovery system, however, has to solve far more than registration. Some examples:
1. After a service is registered in the configuration center, how do clients become aware of configuration changes?
2. How does a server-side (proxy-based) discovery model pick up those changes?
3. When a service becomes unhealthy, should it be removed from the registry automatically? After removal, are there DNS caching problems, does traffic need to be rebalanced, and does the service need to be restarted?
4. When a service is being released, how does it cooperate with the discovery system to schedule traffic dynamically?
Building a sound service discovery system involves far more than the few problems above; plenty of distributed-systems theory sits inside these models. In follow-up articles I will introduce solutions to these problems step by step, along with the architectures involved.