<dependency>
    <groupId>com.alibaba.cloud</groupId>
    <artifactId>spring-cloud-starter-alibaba-nacos-discovery</artifactId>
    <version>${spring-cloud-alibaba.version}</version>
</dependency>
<dependency>
    <groupId>com.alibaba.cloud</groupId>
    <artifactId>spring-cloud-starter-alibaba-nacos-config</artifactId>
    <version>${spring-cloud-alibaba.version}</version>
</dependency>
What a configuration center is: a typical Spring Boot project manages its settings through configuration files such as application.yml created under the local resources directory.
But once the number of deployed microservice instances grows to dozens or hundreds, editing each instance's configuration one by one becomes unmanageable and error-prone.
We therefore need a unified configuration management scheme that can manage the configuration of all instances centrally.
A dynamic configuration service lets you manage application and service configuration for every environment in a centralized, externalized and dynamic way.
Dynamic configuration removes the need to redeploy applications and services when configuration changes, making configuration management more efficient and agile.
Centralized configuration management also makes it easier to implement stateless services and to scale services elastically on demand.
Nacos provides a simple, easy-to-use UI (console demo) to help you manage the configuration of all your services and applications.
Nacos also ships a set of out-of-the-box configuration management features, including configuration version tracking, canary release, one-click rollback and client-side update status tracking, which help you manage configuration changes in production more safely and reduce the risk those changes introduce.
Multi-format configuration editor
Nacos supports online editing, syntax highlighting and format validation for common configuration formats such as YAML, Properties, TEXT, JSON, XML and HTML, helping users edit efficiently while greatly reducing the risk of format errors.
Nacos also supports configuration labels, enabling more flexible, label-based classification and management of configurations, and lets users attach descriptions to configurations and their changes, which makes it easier for several people or teams to manage configuration together.
How a Spring Cloud microservice pulls configuration at startup
A microservice has to pull the configuration managed in Nacos and merge it with the local application.yml before the project can finish starting. But if application.yml has not been read yet, how does the service know the Nacos address?
For this purpose, Spring Cloud introduces a new kind of configuration file, bootstrap.yaml/yml/properties, which is read before the local application.yml.
bootstrap.yml is processed while the program bootstraps and is meant for configuration that must be read very early. Think of it as system-level parameters that rarely change; once bootstrap.yml has been loaded, its contents are not overridden.
application.yml defines application-level, application-specific configuration, for example common parameters used by the individual modules later on.
When the context starts, Spring Cloud creates a Bootstrap Context as the parent of the Spring application's Application Context.
Bootstrap properties have high priority; by default they are not overridden by local configuration.
This requires the spring-cloud-starter-bootstrap dependency:
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-bootstrap</artifactId>
</dependency>
Spring Cloud loads configuration files in a fixed order of precedence: bootstrap takes priority over application.
Data ID
Default naming formula for a configuration file:
${spring.application.name}-${spring.profiles.active}.${spring.cloud.nacos.config.file-extension}
prefix defaults to the value of spring.application.name.
spring.profiles.active is the profile of the current environment, set via the spring.profiles.active property.
file-extension is the data format of the configuration content, set via the spring.cloud.nacos.config.file-extension property.
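As a quick illustration, the sketch below (plain Java, with hypothetical property values; user-service, dev and yaml are assumptions, not values from this article) shows how the three properties combine into the data id that the client asks Nacos for:

public class NacosDataIdExample {

    public static void main(String[] args) {
        // Hypothetical values standing in for the Spring properties:
        String applicationName = "user-service"; // spring.application.name (the prefix)
        String activeProfile   = "dev";          // spring.profiles.active
        String fileExtension   = "yaml";         // spring.cloud.nacos.config.file-extension

        // ${spring.application.name}-${spring.profiles.active}.${spring.cloud.nacos.config.file-extension}
        String dataId = applicationName + "-" + activeProfile + "." + fileExtension;
        System.out.println(dataId); // prints: user-service-dev.yaml
    }
}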
Spring Cloud uses the @RefreshScope annotation to reload configuration dynamically.
@RefreshScope is a Spring Cloud annotation (from spring-cloud-context, usually introduced together with Spring Cloud Config). It allows a Spring Boot application to refresh configuration at runtime, without restarting, so configuration can be changed while the application keeps running.
Spring Cloud Config is a tool for centralized configuration management: it can store configuration in Git, SVN or the local file system and serve it to multiple applications; with Nacos, the Nacos config client plays the same role.
When @RefreshScope is used, the configuration source is monitored for changes. When the configuration changes, the new values are loaded and the affected beans are re-initialized, so configuration can be modified dynamically while the application runs.
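A minimal sketch of a bean that picks up refreshed values; the property key demo.greeting and the controller itself are hypothetical, not taken from this article:

import org.springframework.beans.factory.annotation.Value;
import org.springframework.cloud.context.config.annotation.RefreshScope;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// When the value of demo.greeting changes in the Nacos config center,
// this bean is rebuilt on the next access and returns the new value,
// without restarting the application.
@RestController
@RefreshScope
public class GreetingController {

    @Value("${demo.greeting:hello}") // hypothetical property key, with a local default
    private String greeting;

    @GetMapping("/greeting")
    public String greeting() {
        return greeting;
    }
}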
Priority of shared configuration
When the same property appears both in Nacos and in the service's local configuration, there is an order of precedence.
It can be changed with the spring.cloud.nacos.config.remote-first property:
remote-first: true # true means the Nacos config center wins; otherwise local configuration wins. Default: false
Namespace
Usage 1: environment isolation
A large distributed microservice system contains many microservice sub-projects, and each of them has its own development, test, staging and production environments. How should all of those configurations be managed?
Nacos introduces the namespace (Namespace) concept to manage and isolate multi-environment configuration and services.
For example, with a local development environment dev, a test environment test and a production environment prod, you can create three namespaces to keep the environments apart.
Once a new namespace has been created, configure its id in the Spring Boot configuration file to switch to it and read the configuration files under it.
If no namespace is configured, configuration lives in the default public namespace.
Usage 2: isolation between business domains or business teams
Group
Usage 1: isolation between business domains or business teams
A Group can also provide environment isolation, but Groups are mainly meant to group different services within the same environment, i.e. to put the configuration files of related microservices into one group.
If no Group is specified, Nacos uses the default group DEFAULT_GROUP.
spring:
  cloud:
    nacos:
      config:
        group: xxxx
Usage 2: isolation between sub-teams or individuals
Complete demo
spring : 5.2.15.RELEASE
spring-boot : 2.3.12.RELEASE
spring-cloud-starter : 2.2.9.RELEASE
spring-cloud-starter-nacos-config : 2.2.7.RELEASE
spring-cloud-starter-nacos-discovery : 2.2.7.RELEASE
discovery:
  enabled: true # (defaults to true, may be omitted)
  # server-addr: http://127.0.0.1:8848 # nacos-local | server address (ip and port the Nacos Server listens on) | default: none | depends on the active spring.profiles
  server-addr: ${NACOS_PROTOCOL:http}://${NACOS_ENDPOINT:nacos.nacos:8848}
  # server-addr: ${NACOS_PROTOCOL:https}://${NACOS_ENDPOINT:config-dev.xxx.com:443}
  # server-addr: ${NACOS_SERVER_ADDR:https://config-dev.xxx.com} # huawei-cloud:cn-dev
  # server-addr: http://mse-xxxxxx-nacos-ans.mse.aliyuncs.com:8848 # aliyun-cloud:cn-dev
  # server-addr: http://mse-yyyyyy-nacos-ans.mse.aliyuncs.com:8848 # aliyun-cloud:cn-test
  # server-addr: http://mse-zzzzzz-nacos-ans.mse.aliyuncs.com:8848 # aliyun-cloud:cn-prod
  # server-addr: https://config-dev.xxx.com # huawei-cloud:cn-dev
  # server-addr: https://config-test.xxx.com # huawei-cloud:cn-test
  # server-addr: https://config-pt.xxx.com # huawei-cloud:cn-pt
  service: ${spring.application.name} # service name (names the current service) | default: ${spring.application.name}
  namespace: ${NACOS_SERVER_NAMESPACE:xxxx_office} # namespace (mainly used to isolate runtime environments) | default: none
  group: ${NACOS_SERVER_GROUP:xxx_GROUP} # service group (the group this service belongs to) | default: DEFAULT_GROUP
  username: ${NACOS_USER:nacos}
  password: ${NACOS_PASSWORD:nacos}
  # logName: # log file name | default: none
  # clusterName: DEFAULT
  # weight
  # metadata:
  # secure: false
  # accessKey: xx
  # secretKey: yy
config: # the Nacos config-center settings must live in bootstrap.yml, because it is loaded earlier / with higher priority than application.yml; otherwise a series of Spring IoC loading problems follow
  # reference-doc https://blog.csdn.net/m0_57752520/article/details/124169746
  # reference-doc https://github.com/nacos-group/nacos-spring-boot-project/wiki/spring-boot-0.2.2-%E4%BB%A5%E5%8F%8A-0.1.2%E7%89%88%E6%9C%AC%E6%96%B0%E5%8A%9F%E8%83%BD%E4%BD%BF%E7%94%A8%E6%89%8B%E5%86%8C
  # Spring Boot resolves the namespace of the configuration from `namespace`, then locates the matching Nacos configuration file via ${prefix}-${spring.profiles.active}.${file-extension}
  enabled: true
  bootstrap:
    enable: true # enable configuration preloading
    log-enable: false
  server-addr: ${spring.cloud.nacos.discovery.server-addr} # main configuration: server address # depends on the active spring.profiles
  namespace: ${spring.cloud.nacos.discovery.namespace} # main configuration: namespace
  group: ${spring.cloud.nacos.discovery.group} # main configuration: group-id; by convention carries the environment parameter, e.g. ${spring.profiles.active}
  name: ${spring.application.name} # points at the registry name of the microservice whose configuration is to be changed
  username: ${spring.cloud.nacos.discovery.username}
  password: ${spring.cloud.nacos.discovery.password}
  # prefix: # e.g. user-config
  # type: yaml # main configuration: file type
  file-extension: yaml # yml or yaml / properties / ...
  data-id: application-${spring.application.name}.yml # main configuration: data-id
  remote-first: true # true means the Nacos config center has higher priority; otherwise local configuration wins. Default: false
  auto-refresh: true # main configuration: enable auto refresh
  refresh-enabled: true
  enable-remote-sync-config: true # main configuration: register a listener to preload configuration (not recommended unless a special business need requires it)
  max-retry: 2 # main configuration: maximum number of retries
  config-retry-time: 3000 # main configuration: retry interval
  config-long-poll-timeout: 6000 # main configuration: long-polling timeout for configuration listening
  extension-configs:
    - dataId: application-env.yml
      group: DEFAULT_GROUP
      refresh: true
    - dataId: application-${spring.application.name}.yml
      group: ${spring.cloud.nacos.discovery.group}
      refresh: true
  # extension-configs: # extended configurations: an array, may contain several entries
  # it is best to include the file-name suffix when configuring dataId in the Nacos config center
  # without a suffix, two problems can occur when extended or shared configurations are read:
  # 1. if the dataId below has no suffix, it is read as properties by default; that is fine for a properties file, but any other format will simply not be read.
  # 2. if the dataId below has a suffix but the entry in the Nacos config center does not, the matching dataId will not be found.
  # Note: keep the dataId in the Nacos config center and the dataId in the code identical, suffix included, to avoid this pitfall.
  # - dataId: xx-yy-zz.yaml
  #   group: USER_GROUP
  #   refresh: true
  # shared-configs: # shared configurations: an array, may contain several entries, configured exactly like extension-configs | priority: shared-configs < extension-configs < main configuration
  # - dataId: xx-yy-zz.yml
  #   group: USER_GROUP
  #   refresh: true
  # shared-dataids: xx-yy-zz.yml
X References
SpringCloud之Nacos配置中心解读 - CSDN
Scenario: specify the IP that a Spring Cloud microservice registers with in the Nacos registry
spring:
  cloud:
    nacos:
      discovery:
        ip: 127.0.0.1
After making the change and restarting the service, the address shown in Nacos looks like this:
Scenario: calling common Nacos API endpoints with curl
nacos-client : 2.0.3
nacos-server : 2.1.2
https://nacos.io/zh-cn/docs/open-api.html
Log in / obtain an accessToken
# obtain an accessToken
curl -X POST '127.0.0.1:8848/nacos/v1/auth/login' -d 'username=nacos&password=nacos'
{"accessToken":"xxx.xxx.xxxx-xxx-xxx","tokenTtl":86400000,"globalAdmin":false,"username":"read_bdp"}
# Fetch a configuration entry | verified against NACOS-SERVER 2.1.2
# Note: specify the namespaceId manually when creating a namespace; otherwise Nacos generates a random, unreadable id (e.g. xx833-434323sd-453243-w-d453)
curl -X GET 'https://config.xx.com/nacos/v1/cs/configs?show=all&dataId=application-xx-service.yml&group=XXX_GROUP&tenant=bdp_office&namespaceId=bdp_office&accessToken=xxxxxxx'
# Register a service instance | verified against NACOS-SERVER 2.1.2
accessToken="xxxxxxx"
echo "https://nacos-config.xx.com/nacos/v1/ns/instance?port=8848&healthy=true&ip=11.11.11.11&weight=1.0&serviceName=nacos.test.3&encoding=GBK&namespaceId=bigdata_office&accessToken=${accessToken}"
curl -X POST "https://nacos-config.xx.com/nacos/v1/ns/instance?port=8848&healthy=true&ip=11.11.11.11&weight=1.0&serviceName=nacos.test.3&encoding=GBK&namespaceId=bdp_office&accessToken=$accessToken"
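The same registration can also be done from Java with the nacos-client API; a minimal sketch, where the server address, namespace, credentials and service name are illustrative assumptions:

import java.util.Properties;

import com.alibaba.nacos.api.NacosFactory;
import com.alibaba.nacos.api.PropertyKeyConst;
import com.alibaba.nacos.api.exception.NacosException;
import com.alibaba.nacos.api.naming.NamingService;

public class RegisterInstanceExample {

    public static void main(String[] args) throws NacosException {
        Properties properties = new Properties();
        properties.put(PropertyKeyConst.SERVER_ADDR, "https://nacos-config.xx.com"); // hypothetical address
        properties.put(PropertyKeyConst.NAMESPACE, "bdp_office");                    // hypothetical namespace
        properties.put(PropertyKeyConst.USERNAME, "nacos");
        properties.put(PropertyKeyConst.PASSWORD, "nacos");

        NamingService namingService = NacosFactory.createNamingService(properties);
        // Registers an ephemeral instance, roughly equivalent to the curl call above.
        namingService.registerInstance("nacos.test.3", "11.11.11.11", 8848);
    }
}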
nacos-client : 2.0.3
nacos-server : 2.1.2
com.alibaba.nacos.client.naming.remote.NamingClientProxyDelegate#registerService
--> com.alibaba.nacos.client.naming.remote.NamingClientProxy#registerService
--> com.alibaba.nacos.client.naming.remote.gprc.NamingGrpcClientProxy#registerService
@Override
public void registerService(String serviceName, String groupName, Instance instance) throws NacosException {
    NAMING_LOGGER.info("[REGISTER-SERVICE] {} registering service {} with instance {}", namespaceId, serviceName,
            instance);
    redoService.cacheInstanceForRedo(serviceName, groupName, instance);
    doRegisterService(serviceName, groupName, instance);
}
--> com.alibaba.nacos.client.naming.remote.gprc.NamingGrpcClientProxy#doRegisterService
public void doRegisterService(String serviceName, String groupName, Instance instance) throws NacosException {
    InstanceRequest request = new InstanceRequest(namespaceId, serviceName, groupName,
            NamingRemoteConstants.REGISTER_INSTANCE, instance);
    requestToServer(request, Response.class); // calls NamingGrpcClientProxy#requestToServer
    redoService.instanceRegistered(serviceName, groupName);
}
--> com.alibaba.nacos.client.naming.remote.gprc.NamingGrpcClientProxy#requestToServer
private <T extends Response> T requestToServer(AbstractNamingRequest request, Class<T> responseClass)
        throws NacosException {
    try {
        request.putAllHeader(getSecurityHeaders());
        request.putAllHeader(getSpasHeaders(
                NamingUtils.getGroupedNameOptional(request.getServiceName(), request.getGroupName())));
        Response response =
                requestTimeout < 0 ? rpcClient.request(request) : rpcClient.request(request, requestTimeout);
        if (ResponseCode.SUCCESS.getCode() != response.getResultCode()) {
            throw new NacosException(response.getErrorCode(), response.getMessage());
        }
        if (responseClass.isAssignableFrom(response.getClass())) {
            return (T) response;
        }
        NAMING_LOGGER.error("Server return unexpected response '{}', expected response should be '{}'",
                response.getClass().getName(), responseClass.getName());
    } catch (Exception e) {
        throw new NacosException(NacosException.SERVER_ERROR, "Request nacos server failed: ", e);
    }
    throw new NacosException(NacosException.SERVER_ERROR, "Server return invalid response");
}
Response on successful registration:
https://nacos-config.xx.com/nacos/v1/ns/instance?port=8848&healthy=true&ip=11.11.11.11&weight=1.0&serviceName=nacos.test.3&encoding=GBK&namespaceId=bdp_office&accessToken=xxxxxxx
[TID: N/A] [my-xxl-job-executor] [system] [2024/09/06 11:18:21.248] [INFO ] [main] [NamingGrpcClientProxy] registerService:112__||__[REGISTER-SERVICE] bdp_office registering service my-xxl-job-executor with instance Instance{instanceId='null', ip='192.168.19.181', port=9527, weight=1.0, healthy=true, enabled=true, ephemeral=true, clusterName='DEFAULT', serviceName='null', metadata={management.endpoints.web.base-path=/actuator, preserved.register.source=SPRING_CLOUD}}
[TID: N/A] [my-xxl-job-executor] [system] [2024/09/06 11:18:21.260] [INFO ] [main] [NacosServiceRegistry] register:75__||__nacos registry, BDP_GROUP my-xxl-job-executor-data-distribute 192.168.19.181:9527 register finished
Response when the current user has no permission to register a service (i.e. no write permission):
Using NACOS SERVER 2.1.2 as an example, this error appears after authentication has been enforced.
{"timestamp":"2024-09-06T10:42:11.602+08:00","status":403,"error":"Forbidden","message":"authorization failed!","path":"/nacos/v1/ns/instance"}
Caused by: java.lang.reflect.UndeclaredThrowableException
at org.springframework.util.ReflectionUtils.rethrowRuntimeException(ReflectionUtils.java:147) ~[spring-core-5.2.15.RELEASE.jar!/:5.2.15.RELEASE]
at com.alibaba.cloud.nacos.registry.NacosServiceRegistry.register(NacosServiceRegistry.java:82) ~[spring-cloud-starter-alibaba-nacos-discovery-2.2.7.RELEASE.jar!/:2.2.7.RELEASE]
at org.springframework.cloud.client.serviceregistry.AbstractAutoServiceRegistration.register(AbstractAutoServiceRegistration.java:239) ~[spring-cloud-commons-2.2.9.RELEASE.jar!/:2.2.9.RELEASE]
at com.alibaba.cloud.nacos.registry.NacosAutoServiceRegistration.register(NacosAutoServiceRegistration.java:78) ~[spring-cloud-starter-alibaba-nacos-discovery-2.2.7.RELEASE.
Caused by: com.alibaba.nacos.api.exception.NacosException: Request nacos server failed:
at com.alibaba.nacos.client.naming.remote.gprc.NamingGrpcClientProxy.requestToServer(NamingGrpcClientProxy.java:279) ~[nacos-client-2.0.3.jar!/:?]
at com.alibaba.nacos.client.naming.remote.gprc.NamingGrpcClientProxy.doRegisterService(NamingGrpcClientProxy.java:129) ~[nacos-client-2.0.3.jar!/:?]
at com.alibaba.nacos.client.naming.remote.gprc.NamingGrpcClientProxy.registerService(NamingGrpcClientProxy.java:115) ~[nacos-client-2.0.3.jar!/:?]
at com.alibaba.nacos.client.naming.remote.NamingClientProxyDelegate.registerService(NamingClientProxyDelegate.java:95) ~[nacos-client-2.0.3.jar!/:?]
at com.alibaba.nacos.client.naming.NacosNamingService.registerInstance(NacosNamingService.java:145) ~[nacos-client-2.0.3.jar!/:?]
Caused by: com.alibaba.nacos.api.exception.NacosException: authorization failed!
at com.alibaba.nacos.client.naming.remote.gprc.NamingGrpcClientProxy.requestToServer(NamingGrpcClientProxy.java:271) ~[nacos-client-2.0.3.jar!/:?]
at com.alibaba.nacos.client.naming.remote.gprc.NamingGrpcClientProxy.doRegisterService(NamingGrpcClientProxy.java:129) ~[nacos-client-2.0.3.jar!/:?]
at com.alibaba.nacos.client.naming.remote.gprc.NamingGrpcClientProxy.registerService(NamingGrpcClientProxy.java:115) ~[nacos-client-2.0.3.jar!/:?]
at com.alibaba.nacos.client.naming.remote.NamingClientProxyDelegate.registerService(NamingClientProxyDelegate.java:95) ~[nacos-client-2.0.3.jar!/:?]
Fetch the list of registered services
protocol="https"
endpoint="config.xxxx.com:443"
username=nacos
password=xxxxxx
curl -X GET "$protocol://$endpoint/nacos/v1/ns/service/list?namespaceId=bigdata_office&pageNo=1&pageSize=5&accessToken=$accessToken"
response
{"count":0,"doms":[]}
Scenario: Nacos 2.x — how to adjust the gRPC SDK client port (GrpcSdkClient) and the gRPC cluster client port (GrpcClusterClient)?
Port requirements of NACOS 2.x:
One of the biggest changes in 2.x is the ports.
Besides MainPort/8848 (the default main port), three additional ports are derived automatically from the configured main port (server.port) by a fixed offset:
SdkGrpcPort/9848 (main port + 1000): the port the NACOS SDK client (nacos-client) uses for gRPC requests, i.e. for client connections and requests to the server
ClusterGrpcPort/9849 (main port + 1001): the port used for server-to-server gRPC requests, e.g. synchronization between nodes
JRaftPort/7848 (main port - 1000): the port for JRaft requests, handling Raft-related requests between server nodes
Note: this fixed calculation strategy comes mainly from:
com.alibaba.nacos.common.remote.client.grpc.GrpcClient#connectToServer
com.alibaba.nacos.common.remote.client.grpc.GrpcClient#serverCheck
com.alibaba.nacos.common.remote.client.RpcClient#rpcPortOffset
com.alibaba.nacos.shaded.com.google.common.util.concurrent.ListenableFuture
com.alibaba.nacos.common.remote.client.grpc.GrpcUtils#parse
Waited 3000 milliseconds (plus 1 milliseconds, 465000 nanoseconds delay) for com.alibaba.nacos.shaded.io.grpc.stub.ClientCalls$GrpcFuture@3ea75b05[status=PENDING, info=[GrpcFuture{clientCall={delegate={delegate=ClientCallImpl{method=MethodDescriptor{fullMethodName=Request/request, type=UNARY, idempotent=false, safe=false, sampledToLocalTracing=true, requestMarshaller=com.alibaba.nacos.shaded.io.grpc.protobuf.lite.ProtoLiteUtils$MessageMarshaller@33a47707, responseMarshaller=com.alibaba.nacos.shaded.io.grpc.protobuf.lite.ProtoLiteUtils$MessageMarshaller@4d290757, schemaDescriptor=com.alibaba.nacos.api.grpc.auto.RequestGrpc$RequestMethodDescriptorSupplier@720a1fd0}}}}}]]
When, for security or centralized-management reasons, Nginx fronts the Nacos 2.x 8848 port and its services behind, say, port 80 (http) or 443 (https), a problem appears:
1. The NACOS SDK client port, following the fixed SdkGrpcPort strategy, must be 1080 (80 + 1000) or 1443 (443 + 1000).
2. The NACOS server-to-server port, following the fixed ClusterGrpcPort strategy, must be 1081 (80 + 1001) or 1444 (443 + 1001).
Because of 1 and 2, the network policy becomes very rigid: only those fixed ports can be used, which also leaves attackers something predictable to probe.
If gRPC communication between the client and the Nacos server fails, error logs like the following appear:
Key information:
Main port is 443: ... server main port=443
Communication with port 1443 times out: Server check fail, please check server config-center.com ,port 1443 is available , error =java.util.concurrent.TimeoutException: Waited 3000 milliseconds (plus 93564 nanoseconds delay) ...
[TID: N/A] [xxxx-service] [system] [2024/09/06 17:46:09.581] [INFO ] [main] [RpcClientFactory] lambda$createClient$0:77__||__[RpcClientFactory] create a new rpc client of a5c24981-4ae2-420b-9dc2-ab989fb4c728
[TID: N/A] [xxxx-service] [system] [2024/09/06 17:46:09.581] [INFO ] [main] [LoggerUtils] printIfInfoEnabled:60__||__[a5c24981-4ae2-420b-9dc2-ab989fb4c728]RpcClient init label, labels={module=naming, source=sdk}
[TID: N/A] [xxxx-service] [system] [2024/09/06 17:46:09.584] [INFO ] [main] [LoggerUtils] printIfInfoEnabled:60__||__[a5c24981-4ae2-420b-9dc2-ab989fb4c728]RpcClient init, ServerListFactory =com.alibaba.nacos.client.naming.core.ServerListManager
[TID: N/A] [xxxx-service] [system] [2024/09/06 17:46:09.584] [INFO ] [main] [LoggerUtils] printIfInfoEnabled:60__||__[a5c24981-4ae2-420b-9dc2-ab989fb4c728]Registry connection listener to current client:com.alibaba.nacos.client.naming.remote.gprc.redo.NamingGrpcRedoService
[TID: N/A] [xxxx-service] [system] [2024/09/06 17:46:09.585] [INFO ] [main] [LoggerUtils] printIfInfoEnabled:60__||__[a5c24981-4ae2-420b-9dc2-ab989fb4c728]Register server push request handler:com.alibaba.nacos.client.naming.remote.gprc.NamingPushRequestHandler
[TID: N/A] [xxxx-service] [system] [2024/09/06 17:46:09.586] [INFO ] [main] [LoggerUtils] printIfInfoEnabled:60__||__[a5c24981-4ae2-420b-9dc2-ab989fb4c728] Try to connect to server on start up, server: {serverIp='config-center.com', server main port=443}
[TID: N/A] [xxxx-service] [system] [2024/09/06 17:46:10.048] [ERROR] [com.alibaba.nacos.client.remote.worker] [LoggerUtils] printIfErrorEnabled:99__||__Server check fail, please check server config-center.com ,port 1443 is available , error =java.util.concurrent.TimeoutException: Waited 3000 milliseconds (plus 103219 nanoseconds delay) for com.alibaba.nacos.shaded.io.grpc.stub.ClientCalls$GrpcFuture@1edb7320[status=PENDING, info=[GrpcFuture{clientCall={delegate={delegate=ClientCallImpl{method=MethodDescriptor{fullMethodName=Request/request, type=UNARY, idempotent=false, safe=false, sampledToLocalTracing=true, requestMarshaller=com.alibaba.nacos.shaded.io.grpc.protobuf.lite.ProtoLiteUtils$MessageMarshaller@2053d869, responseMarshaller=com.alibaba.nacos.shaded.io.grpc.protobuf.lite.ProtoLiteUtils$MessageMarshaller@7a419da4, schemaDescriptor=com.alibaba.nacos.api.grpc.auto.RequestGrpc$RequestMethodDescriptorSupplier@14555e0a}}}}}]]
[TID: N/A] [xxxx-service] [system] [2024/09/06 17:46:10.049] [INFO ] [com.alibaba.nacos.client.remote.worker] [LoggerUtils] printIfInfoEnabled:60__||__[d24573f5-2525-4cdc-9285-129ebe83ac52_config-0] fail to connect server,after trying 1 times, last try server is {serverIp='config-center.com', server main port=443},error=unknown
[TID: N/A] [xxxx-service] [system] [2024/09/06 17:46:12.584] [WARN ] [com.alibaba.nacos.client.naming.grpc.redo.0] [RedoScheduledTask] run:46__||__Grpc Connection is disconnect, skip current redo task
[TID: N/A] [xxxx-service] [system] [2024/09/06 17:46:12.589] [ERROR] [main] [LoggerUtils] printIfErrorEnabled:99__||__Server check fail, please check server config-center.com ,port 1443 is available , error =java.util.concurrent.TimeoutException: Waited 3000 milliseconds (plus 84026 nanoseconds delay) for com.alibaba.nacos.shaded.io.grpc.stub.ClientCalls$GrpcFuture@7caa550[status=PENDING, info=[GrpcFuture{clientCall={delegate={delegate=ClientCallImpl{method=MethodDescriptor{fullMethodName=Request/request, type=UNARY, idempotent=false, safe=false, sampledToLocalTracing=true, requestMarshaller=com.alibaba.nacos.shaded.io.grpc.protobuf.lite.ProtoLiteUtils$MessageMarshaller@2053d869, responseMarshaller=com.alibaba.nacos.shaded.io.grpc.protobuf.lite.ProtoLiteUtils$MessageMarshaller@7a419da4, schemaDescriptor=com.alibaba.nacos.api.grpc.auto.RequestGrpc$RequestMethodDescriptorSupplier@14555e0a}}}}}]]
[TID: N/A] [xxxx-service] [system] [2024/09/06 17:46:12.590] [INFO ] [main] [LoggerUtils] printIfInfoEnabled:60__||__[a5c24981-4ae2-420b-9dc2-ab989fb4c728] Try to connect to server on start up, server: {serverIp='config-center.com', server main port=443}
[TID: N/A] [xxxx-service] [system] [2024/09/06 17:46:13.252] [ERROR] [com.alibaba.nacos.client.remote.worker] [LoggerUtils] printIfErrorEnabled:99__||__Server check fail, please check server config-center.com ,port 1443 is available , error =java.util.concurrent.TimeoutException: Waited 3000 milliseconds (plus 92253 nanoseconds delay) for com.alibaba.nacos.shaded.io.grpc.stub.ClientCalls$GrpcFuture@50f72a78[status=PENDING, info=[GrpcFuture{clientCall={delegate={delegate=ClientCallImpl{method=MethodDescriptor{fullMethodName=Request/request, type=UNARY, idempotent=false, safe=false, sampledToLocalTracing=true, requestMarshaller=com.alibaba.nacos.shaded.io.grpc.protobuf.lite.ProtoLiteUtils$MessageMarshaller@2053d869, responseMarshaller=com.alibaba.nacos.shaded.io.grpc.protobuf.lite.ProtoLiteUtils$MessageMarshaller@7a419da4, schemaDescriptor=com.alibaba.nacos.api.grpc.auto.RequestGrpc$RequestMethodDescriptorSupplier@14555e0a}}}}}]]
[TID: N/A] [xxxx-service] [system] [2024/09/06 17:46:13.253] [INFO ] [com.alibaba.nacos.client.remote.worker] [LoggerUtils] printIfInfoEnabled:60__||__[d24573f5-2525-4cdc-9285-129ebe83ac52_config-0] fail to connect server,after trying 2 times, last try server is {serverIp='config-center.com', server main port=443},error=unknown
[TID: N/A] [xxxx-service] [system] [2024/09/06 17:46:15.584] [WARN ] [com.alibaba.nacos.client.naming.grpc.redo.0] [RedoScheduledTask] run:46__||__Grpc Connection is disconnect, skip current redo task
[TID: N/A] [xxxx-service] [system] [2024/09/06 17:46:15.592] [ERROR] [main] [LoggerUtils] printIfErrorEnabled:99__||__Server check fail, please check server config-center.com ,port 1443 is available , error =java.util.concurrent.TimeoutException: Waited 3000 milliseconds (plus 83386 nanoseconds delay) for com.alibaba.nacos.shaded.io.grpc.stub.ClientCalls$GrpcFuture@62417a16[status=PENDING, info=[GrpcFuture{clientCall={delegate={delegate=ClientCallImpl{method=MethodDescriptor{fullMethodName=Request/request, type=UNARY, idempotent=false, safe=false, sampledToLocalTracing=true, requestMarshaller=com.alibaba.nacos.shaded.io.grpc.protobuf.lite.ProtoLiteUtils$MessageMarshaller@2053d869, responseMarshaller=com.alibaba.nacos.shaded.io.grpc.protobuf.lite.ProtoLiteUtils$MessageMarshaller@7a419da4, schemaDescriptor=com.alibaba.nacos.api.grpc.auto.RequestGrpc$RequestMethodDescriptorSupplier@14555e0a}}}}}]]
[TID: N/A] [xxxx-service] [system] [2024/09/06 17:46:15.593] [INFO ] [main] [LoggerUtils] printIfInfoEnabled:60__||__[a5c24981-4ae2-420b-9dc2-ab989fb4c728] Try to connect to server on start up, server: {serverIp='config-center.com', server main port=443}
[TID: N/A] [xxxx-service] [system] [2024/09/06 17:46:16.556] [ERROR] [com.alibaba.nacos.client.remote.worker] [LoggerUtils] printIfErrorEnabled:99__||__Server check fail, please check server config-center.com ,port 1443 is available , error =java.util.concurrent.TimeoutException: Waited 3000 milliseconds (plus 88277 nanoseconds delay) for com.alibaba.nacos.shaded.io.grpc.stub.ClientCalls$GrpcFuture@16ca9314[status=PENDING, info=[GrpcFuture{clientCall={delegate={delegate=ClientCallImpl{method=MethodDescriptor{fullMethodName=Request/request, type=UNARY, idempotent=false, safe=false, sampledToLocalTracing=true, requestMarshaller=com.alibaba.nacos.shaded.io.grpc.protobuf.lite.ProtoLiteUtils$MessageMarshaller@2053d869, responseMarshaller=com.alibaba.nacos.shaded.io.grpc.protobuf.lite.ProtoLiteUtils$MessageMarshaller@7a419da4, schemaDescriptor=com.alibaba.nacos.api.grpc.auto.RequestGrpc$RequestMethodDescriptorSupplier@14555e0a}}}}}]]
[TID: N/A] [xxxx-service] [system] [2024/09/06 17:46:16.556] [INFO ] [com.alibaba.nacos.client.remote.worker] [LoggerUtils] printIfInfoEnabled:60__||__[d24573f5-2525-4cdc-9285-129ebe83ac52_config-0] fail to connect server,after trying 3 times, last try server is {serverIp='config-center.com', server main port=443},error=unknown
[TID: N/A] [xxxx-service] [system] [2024/09/06 17:46:18.585] [WARN ] [com.alibaba.nacos.client.naming.grpc.redo.0] [RedoScheduledTask] run:46__||__Grpc Connection is disconnect, skip current redo task
[TID: N/A] [xxxx-service] [system] [2024/09/06 17:46:18.596] [ERROR] [main] [LoggerUtils] printIfErrorEnabled:99__||__Server check fail, please check server config-center.com ,port 1443 is available , error =java.util.concurrent.TimeoutException: Waited 3000 milliseconds (plus 79749 nanoseconds delay) for com.alibaba.nacos.shaded.io.grpc.stub.ClientCalls$GrpcFuture@53499d85[status=PENDING, info=[GrpcFuture{clientCall={delegate={delegate=ClientCallImpl{method=MethodDescriptor{fullMethodName=Request/request, type=UNARY, idempotent=false, safe=false, sampledToLocalTracing=true, requestMarshaller=com.alibaba.nacos.shaded.io.grpc.protobuf.lite.ProtoLiteUtils$MessageMarshaller@2053d869, responseMarshaller=com.alibaba.nacos.shaded.io.grpc.protobuf.lite.ProtoLiteUtils$MessageMarshaller@7a419da4, schemaDescriptor=com.alibaba.nacos.api.grpc.auto.RequestGrpc$RequestMethodDescriptorSupplier@14555e0a}}}}}]]
[TID: N/A] [xxxx-service] [system] [2024/09/06 17:46:18.597] [INFO ] [com.alibaba.nacos.client.remote.worker] [LoggerUtils] printIfInfoEnabled:60__||__[a5c24981-4ae2-420b-9dc2-ab989fb4c728] try to re connect to a new server ,server is not appointed,will choose a random server.
[TID: N/A] [xxxx-service] [system] [2024/09/06 17:46:18.597] [INFO ] [main] [LoggerUtils] printIfInfoEnabled:60__||__[a5c24981-4ae2-420b-9dc2-ab989fb4c728]Register server push request handler:com.alibaba.nacos.common.remote.client.RpcClient$ConnectResetRequestHandler
[TID: N/A] [xxxx-service] [system] [2024/09/06 17:46:18.597] [INFO ] [main] [LoggerUtils] printIfInfoEnabled:60__||__[a5c24981-4ae2-420b-9dc2-ab989fb4c728]Register server push request handler:com.alibaba.nacos.common.remote.client.RpcClient$4
[TID: N/A] [xxxx-service] [system] [2024/09/06 17:46:06.945] [ERROR] [com.alibaba.nacos.client.remote.worker] [LoggerUtils] printIfErrorEnabled:99__||__Server check fail, please check server config-center.com ,port 1443 is available , error =java.util.concurrent.TimeoutException: Waited 3000 milliseconds (plus 93564 nanoseconds delay) for com.alibaba.nacos.shaded.io.grpc.stub.ClientCalls$GrpcFuture@7dda830b[status=PENDING, info=[GrpcFuture{clientCall={delegate={delegate=ClientCallImpl{method=MethodDescriptor{fullMethodName=Request/request, type=UNARY, idempotent=false, safe=false, sampledToLocalTracing=true, requestMarshaller=com.alibaba.nacos.shaded.io.grpc.protobuf.lite.ProtoLiteUtils$MessageMarshaller@2053d869, responseMarshaller=com.alibaba.nacos.shaded.io.grpc.protobuf.lite.ProtoLiteUtils$MessageMarshaller@7a419da4, schemaDescriptor=com.alibaba.nacos.api.grpc.auto.RequestGrpc$RequestMethodDescriptorSupplier@14555e0a}}}}}]]
[TID: N/A] [xxxx-service] [system] [2024/09/06 17:46:07.094] [WARN ] [main] [EndpointId] logWarning:155__||__Endpoint ID 'service-registry' contains invalid characters, please migrate to a valid format.
[TID: N/A] [xxxx-service] [system] [2024/09/06 17:46:07.842] [INFO ] [main] [DirectJDKLog] log:173__||__Initializing ProtocolHandler ["http-nio-9527"]
[TID: N/A] [xxxx-service] [system] [2024/09/06 17:46:19.627] [ERROR] [main] [LoggerUtils] printIfErrorEnabled:99__||__Send request fail, request=SubscribeServiceRequest{headers={accessToken=eyJhbGciOiJIUzI1NiJ9.234efdsf3fwfczs.43589fhejer8952wdqw, app=unknown}, requestId='null'}, retryTimes=0,errorMessage=Client not connected,current status:STARTING
[TID: N/A] [xxxx-service] [system] [2024/09/06 17:46:19.728] [ERROR] [main] [LoggerUtils] printIfErrorEnabled:99__||__Send request fail, request=SubscribeServiceRequest{headers={accessToken=eyJhbGciOiJIUzI1NiJ9.234efdsf3fwfczs.43589fhejer8952wdqw, app=unknown}, requestId='null'}, retryTimes=1,errorMessage=Client not connected,current status:STARTING
[TID: N/A] [xxxx-service] [system] [2024/09/06 17:46:19.829] [ERROR] [main] [LoggerUtils] printIfErrorEnabled:99__||__Send request fail, request=SubscribeServiceRequest{headers={accessToken=eyJhbGciOiJIUzI1NiJ9.234efdsf3fwfczs.43589fhejer8952wdqw, app=unknown}, requestId='null'}, retryTimes=2,errorMessage=Client not connected,current status:STARTING
[TID: N/A] [xxxx-service] [system] [2024/09/06 17:46:19.829] [ERROR] [main] [NacosWatch] start:138__||__namingService subscribe failed, properties:NacosDiscoveryProperties{serverAddr='https://config-center.com:443', endpoint='', namespace='xxxx_office', watchDelay=30000, logName='', service='xxxx-service', weight=1.0, clusterName='DEFAULT', group='BDP_GROUP', namingLoadCacheAtStart='false', metadata={management.endpoints.web.base-path=/actuator, preserved.register.source=SPRING_CLOUD}, registerEnabled=true, ip='192.168.23.68', networkInterface='', port=-1, secure=false, accessKey='', secretKey='', heartBeatInterval=null, heartBeatTimeout=null, ipDeleteTimeout=null, failFast=true}
com.alibaba.nacos.api.exception.NacosException: Request nacos server failed:
at com.alibaba.nacos.client.naming.remote.gprc.NamingGrpcClientProxy.requestToServer(NamingGrpcClientProxy.java:279) ~[nacos-client-2.0.3.jar!/:?]
at com.alibaba.nacos.client.naming.remote.gprc.NamingGrpcClientProxy.doSubscribe(NamingGrpcClientProxy.java:227) ~[nacos-client-2.0.3.jar!/:?]
at com.alibaba.nacos.client.naming.remote.gprc.NamingGrpcClientProxy.subscribe(NamingGrpcClientProxy.java:212) ~[nacos-client-2.0.3.jar!/:?]
at com.alibaba.nacos.client.naming.remote.NamingClientProxyDelegate.subscribe(NamingClientProxyDelegate.java:147) ~[nacos-client-2.0.3.jar!/:?]
at com.alibaba.nacos.client.naming.NacosNamingService.subscribe(NacosNamingService.java:393) ~[nacos-client-2.0.3.jar!/:?]
at com.alibaba.cloud.nacos.discovery.NacosWatch.start(NacosWatch.java:134) [spring-cloud-starter-alibaba-nacos-discovery-2.2.7.RELEASE.jar!/:2.2.7.RELEASE]
at org.springframework.context.support.DefaultLifecycleProcessor.doStart(DefaultLifecycleProcessor.java:182) [spring-context-5.2.15.RELEASE.jar!/:5.2.15.RELEASE]
at org.springframework.context.support.DefaultLifecycleProcessor.access$200(DefaultLifecycleProcessor.java:53) [spring-context-5.2.15.RELEASE.jar!/:5.2.15.RELEASE]
at org.springframework.context.support.DefaultLifecycleProcessor$LifecycleGroup.start(DefaultLifecycleProcessor.java:360) [spring-context-5.2.15.RELEASE.jar!/:5.2.15.RELEASE]
at org.springframework.context.support.DefaultLifecycleProcessor.startBeans(DefaultLifecycleProcessor.java:158) [spring-context-5.2.15.RELEASE.jar!/:5.2.15.RELEASE]
at org.springframework.context.support.DefaultLifecycleProcessor.onRefresh(DefaultLifecycleProcessor.java:122) [spring-context-5.2.15.RELEASE.jar!/:5.2.15.RELEASE]
at org.springframework.context.support.AbstractApplicationContext.finishRefresh(AbstractApplicationContext.java:895) [spring-context-5.2.15.RELEASE.jar!/:5.2.15.RELEASE]
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:554) [spring-context-5.2.15.RELEASE.jar!/:5.2.15.RELEASE]
at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:143) [spring-boot-2.3.12.RELEASE.jar!/:2.3.12.RELEASE]
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:755) [spring-boot-2.3.12.RELEASE.jar!/:2.3.12.RELEASE]
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:747) [spring-boot-2.3.12.RELEASE.jar!/:2.3.12.RELEASE]
at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:402) [spring-boot-2.3.12.RELEASE.jar!/:2.3.12.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:312) [spring-boot-2.3.12.RELEASE.jar!/:2.3.12.RELEASE]
at cn.xxxx.service.app.XxxxServiceApplication.main(XxxxServiceApplication.java:56) [classes!/:?]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_412]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_412]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_412]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_412]
at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:49) [xxxx-service-app.jar:?]
at org.springframework.boot.loader.Launcher.launch(Launcher.java:108) [xxxx-service-app.jar:?]
at org.springframework.boot.loader.Launcher.launch(Launcher.java:58) [xxxx-service-app.jar:?]
at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:88) [xxxx-service-app.jar:?]
Caused by: com.alibaba.nacos.api.exception.NacosException: Client not connected,current status:STARTING
at com.alibaba.nacos.common.remote.client.RpcClient.request(RpcClient.java:655) ~[nacos-client-2.0.3.jar!/:?]
at com.alibaba.nacos.common.remote.client.RpcClient.request(RpcClient.java:635) ~[nacos-client-2.0.3.jar!/:?]
at com.alibaba.nacos.client.naming.remote.gprc.NamingGrpcClientProxy.requestToServer(NamingGrpcClientProxy.java:269) ~[nacos-client-2.0.3.jar!/:?]
... 26 more
[TID: N/A] [xxxx-service] [system] [2024/09/06 17:46:19.841] [INFO ] [main] [DirectJDKLog] log:173__||__Starting ProtocolHandler ["http-nio-9527"]
Source analysis: nacos 2.x up to 2.1.1: the port calculation in GrpcSdkClient/GrpcClusterClient#rpcPortOffset is hard-coded (sdkPort = main port + 1000, clusterPort = main port + 1001) and cannot be configured dynamically
Note: the analysis uses nacos 2.0.3 (released 2021-07-28) as the example.
All source code in this section comes from nacos-client:2.0.3 / nacos-common:2.0.3.
RpcClient#currentRpcServer and RpcClient#nextRpcServer both call RpcClient#resolveServerInfo
(com.alibaba.nacos.common.remote.client.RpcClient)
RpcClient#currentRpcServer
protected ServerInfo currentRpcServer() {
    String serverAddress = getServerListFactory().getCurrentServer();
    return resolveServerInfo(serverAddress); // sample serverAddress: "http://nacos.nacos:8848" or "nacos.nacos:8848"
}
RpcClient#nextRpcServer
protected ServerInfo nextRpcServer() {
    String serverAddress = getServerListFactory().genNextServer();
    return resolveServerInfo(serverAddress);
}
How RpcClient#resolveServerInfo determines the main port
RpcClient#resolveServerInfo
private ServerInfo resolveServerInfo(String serverAddress) {
    String property = System.getProperty("nacos.server.port", "8848");
    int serverPort = Integer.parseInt(property); // default 8848, from the nacos.server.port system property
    ServerInfo serverInfo = null;
    if (serverAddress.contains(Constants.HTTP_PREFIX)) { // Constants.HTTP_PREFIX = "http"
        String[] split = serverAddress.split(Constants.COLON); // Constants.COLON = ":"
        String serverIp = split[1].replaceAll("//", ""); // serverIp sample = "nacos.nacos"
        if (split.length > 2 && StringUtils.isNotBlank(split[2])) { // the configured serverAddress carries a port
            serverPort = Integer.parseInt(split[2]); // the port supplied in the configured serverAddress wins
        }
        serverInfo = new ServerInfo(serverIp, serverPort);
    } else {
        String[] split = serverAddress.split(Constants.COLON);
        String serverIp = split[0];
        if (split.length > 1 && StringUtils.isNotBlank(split[1])) {
            serverPort = Integer.parseInt(split[1]);
        }
        serverInfo = new ServerInfo(serverIp, serverPort);
    }
    return serverInfo;
}
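For reference, a few hypothetical server addresses and the ServerInfo this logic would derive from them (the gRPC port is then the resolved main port plus rpcPortOffset):

// Illustrative resolutions, following the logic above (hypothetical addresses):
//   "http://nacos.nacos:8848"   -> serverIp = "nacos.nacos",   serverPort = 8848
//   "nacos.nacos:8848"          -> serverIp = "nacos.nacos",   serverPort = 8848
//   "https://config-center:443" -> serverIp = "config-center", serverPort = 443   (SDK gRPC port becomes 443 + 1000 = 1443)
//   "https://config-center"     -> serverIp = "config-center", serverPort = 8848  (default, so the SDK gRPC port becomes 9848)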
GrpcClient#connectToServer: when connecting to the Nacos server over gRPC, the SdkGrpcPort / ClusterGrpcPort comes from the rpcPortOffset method of the subclasses GrpcSdkClient / GrpcClusterClient, whose port calculation is fixed and cannot be configured dynamically
com.alibaba.nacos.common.remote.client.grpc.GrpcClient#connectToServer
GrpcClient is a direct subclass of RpcClient.
It implements core methods such as RpcClient#connectToServer(ServerInfo), #serverCheck(ip, port, requestBlockingStub) and #sendResponse(...).
public Connection connectToServer(ServerInfo serverInfo) {
    try {
        if (grpcExecutor == null) {
            int threadNumber = ThreadUtils.getSuitableThreadCount(8);
            grpcExecutor = new ThreadPoolExecutor(threadNumber, threadNumber, 10L, TimeUnit.SECONDS,
                    new LinkedBlockingQueue<>(10000),
                    new ThreadFactoryBuilder().setDaemon(true).setNameFormat("nacos-grpc-client-executor-%d")
                            .build());
            grpcExecutor.allowCoreThreadTimeOut(true);
        }
        int port = serverInfo.getServerPort() + rpcPortOffset(); // key line
        // implementation in subclass GrpcSdkClient — fixed port calculation: public int rpcPortOffset() { return 1000; }
        // implementation in subclass GrpcClusterClient — fixed port calculation: public int rpcPortOffset() { return 1001; }
        RequestGrpc.RequestFutureStub newChannelStubTemp = createNewChannelStub(serverInfo.getServerIp(), port); // open a new channel
        if (newChannelStubTemp != null) {
            Response response = serverCheck(serverInfo.getServerIp(), port, newChannelStubTemp); // check that communication with the server is healthy
            if (response == null || !(response instanceof ServerCheckResponse)) {
                shuntDownChannel((ManagedChannel) newChannelStubTemp.getChannel()); // shut down the probe channel
                return null;
            }
            BiRequestStreamGrpc.BiRequestStreamStub biRequestStreamStub = BiRequestStreamGrpc
                    .newStub(newChannelStubTemp.getChannel());
            GrpcConnection grpcConn = new GrpcConnection(serverInfo, grpcExecutor);
            grpcConn.setConnectionId(((ServerCheckResponse) response).getConnectionId());
            // create stream request and bind connection event to this connection.
            StreamObserver<Payload> payloadStreamObserver = bindRequestStream(biRequestStreamStub, grpcConn);
            // stream observer to send response to server
            grpcConn.setPayloadStreamObserver(payloadStreamObserver);
            grpcConn.setGrpcFutureServiceStub(newChannelStubTemp);
            grpcConn.setChannel((ManagedChannel) newChannelStubTemp.getChannel());
            // send a setup request.
            ConnectionSetupRequest conSetupRequest = new ConnectionSetupRequest();
            conSetupRequest.setClientVersion(VersionUtils.getFullClientVersion());
            conSetupRequest.setLabels(super.getLabels());
            conSetupRequest.setAbilities(super.clientAbilities);
            conSetupRequest.setTenant(super.getTenant());
            grpcConn.sendRequest(conSetupRequest);
            // wait to register connection setup
            Thread.sleep(100L);
            return grpcConn;
        }
        return null;
    } catch (Exception e) {
        LOGGER.error("[{}]Fail to connect to server!,error={}", GrpcClient.this.getName(), e);
    }
    return null;
}
Source analysis: nacos 2.1.2 and later: the gRPC port offset can be configured via a JVM parameter
Note: the change was committed on 2022-08-23, so the minimum version carrying it is nacos 2.1.2.
Key commit: b36e6a50ab1c6b19e2b219cd01cb7850d1c31580
GrpcSdkClient#rpcPortOffset :
common/src/main/java/com/alibaba/nacos/common/remote/client/grpc/GrpcSdkClient#rpcPortOffset
public class GrpcSdkClient extends GrpcClient {

    @Override
    public int rpcPortOffset() {
        return Integer.parseInt(System.getProperty(GrpcConstants.NACOS_SERVER_GRPC_PORT_OFFSET_KEY, // "nacos.server.grpc.port.offset"
                String.valueOf(Constants.SDK_GRPC_PORT_DEFAULT_OFFSET))); // 1000
    }

    // ...
}
The SdkGrpcPort can therefore be adjusted with the JVM parameter -Dnacos.server.grpc.port.offset=xxx.
GrpcClusterClient#rpcPortOffset
public class GrpcClusterClient extends GrpcClient {

    @Override
    public int rpcPortOffset() {
        return Integer.parseInt(System.getProperty(GrpcConstants.NACOS_SERVER_GRPC_PORT_OFFSET_KEY, // "nacos.server.grpc.port.offset"
                String.valueOf(Constants.CLUSTER_GRPC_PORT_DEFAULT_OFFSET))); // 1001
    }

    // ...
}
The ClusterGrpcPort is likewise driven by the JVM parameter -Dnacos.server.grpc.port.offset=xxx.
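Assuming a nacos-client at 2.1.2 or later, the offset can also be pinned programmatically before any Nacos client is created; a minimal sketch, where the server address, namespace and the offset value 9405 are illustrative assumptions (chosen so that a proxied main port of 443 still reaches a real SDK gRPC port of 9848, since 443 + 9405 = 9848):

import java.util.Properties;

import com.alibaba.nacos.api.NacosFactory;
import com.alibaba.nacos.api.PropertyKeyConst;
import com.alibaba.nacos.api.exception.NacosException;
import com.alibaba.nacos.api.naming.NamingService;

public class GrpcPortOffsetExample {

    public static void main(String[] args) throws NacosException {
        // Equivalent to launching the JVM with -Dnacos.server.grpc.port.offset=9405.
        // Must be set before the first Nacos client is created, so that
        // GrpcSdkClient#rpcPortOffset reads it instead of the default 1000.
        System.setProperty("nacos.server.grpc.port.offset", "9405");

        Properties properties = new Properties();
        properties.put(PropertyKeyConst.SERVER_ADDR, "https://config-center.example.com:443"); // hypothetical address
        properties.put(PropertyKeyConst.NAMESPACE, "dev");                                     // hypothetical namespace

        NamingService namingService = NacosFactory.createNamingService(properties);
        System.out.println("naming service status: " + namingService.getServerStatus());
    }
}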
common/src/main/java/com/alibaba/nacos/common/remote/client/grpc/GrpcConstants
public class GrpcConstants {
    public static final String NACOS_SERVER_GRPC_PORT_OFFSET_KEY = "nacos.server.grpc.port.offset";
    public static final String NACOS_CLIENT_GRPC = "nacos.remote.client.grpc";
    // ...
}
com.alibaba.nacos.api.common.Constants
public class Constants {
    // ...
    public static final String DATA_ID = "dataId";
    public static final String TENANT = "tenant";
    public static final String GROUP = "group";
    public static final String NAMESPACE_ID = "namespaceId";
    public static final String ACCESS_TOKEN = "accessToken";
    public static final String USERNAME = "username";
    public static final Integer SDK_GRPC_PORT_DEFAULT_OFFSET = 1000;     // key line
    public static final Integer CLUSTER_GRPC_PORT_DEFAULT_OFFSET = 1001; // key line
    // ...
}
If the Nacos server is 2.x but nacos-client is older than 2.1.2:
Method 1: upgrade nacos-client to 2.1.2 or later, so the SdkGrpcPort can be adjusted via the JVM parameter nacos.server.grpc.port.offset.
Method 2: keep the old nacos-client (e.g. 2.0.3) and have the network policy (firewalls etc.) open ports following the rule SdkGrpcPort = MainPort + 1000.
Method 3: keep the old nacos-client (e.g. 2.0.3). If the exposed main port is actually 443 while the SDK gRPC port actually exposed is 9848, and the network ports are hard to change, the following configuration trick works (verified):
configure serverAddr as https://config-center instead of https://config-center:443
i.e. do not declare the main port in serverAddr. Following the logic of RpcClient#resolveServerInfo, the main port then defaults to 8848, so the SdkGrpcPort becomes 9848, while communication that really goes over port 443 is unaffected.
Method 4: disable the Nacos gRPC port, or switch the Nacos communication protocol to the nacos/http protocol (this method has not been verified here).
By default Nacos uses gRPC for data communication. If you need to disable gRPC, it is said this can be done by changing the Nacos configuration; note again that this is unverified and the property names below may not exist in every version.
Steps:
Step 1: locate the Nacos server configuration file application.properties or bootstrap.properties and add or change:
# disable gRPC
nacos.grpc.enabled=false
Step 2: restart the Nacos server.
Change the client configuration:
Option 1: if your application uses Nacos as a config center or for service discovery, there is usually a Nacos client configuration file (e.g. application.properties or bootstrap.properties); add or change:
nacos.client.protocol=nacos
Setting nacos.client.protocol to nacos is meant to make the client prefer the native Nacos protocol over gRPC.
Option 2: for Spring Cloud Alibaba projects, add to the configuration file:
# the value might be nacos, or possibly http (unverified)
spring.cloud.nacos.config.protocol=nacos
spring.cloud.nacos.discovery.protocol=nacos
These two lines are intended to make Nacos Config and Nacos Discovery both use the native Nacos protocol.
Note that disabling gRPC may affect Nacos features such as service registration/discovery and configuration management. In production it is usually recommended to keep gRPC enabled; disable it with care and make sure you understand the consequences.
Method 4 is taken from: nacos 2.x 以后如何关闭 grpc 通信? - 阿里云 / Nacos2.1 如何禁用GRPC? - 阿里云
Strongly recommended:
ISSUES#9013 | [Enhancement] Enhance usage of GrpcClient. - Github/Nacos
Key requests/issues:
ISSUES#9017 | enhance grpc client
ISSUES#8250 | after upgrading the Nacos server to 2.0.3, some clients cannot connect to the gRPC port
Resolution: many users filed issues saying some GrpcClient settings were too hard to use, so the GrpcClient property (nacos.server.grpc.port.offset) was made configurable at client startup.
Key source code: GrpcClusterClient#rpcPortOffset, GrpcSdkClient#rpcPortOffset, GrpcConstants#NACOS_SERVER_GRPC_PORT_OFFSET_KEY
Key commit: https://github.com/alibaba/nacos/commit/b36e6a50ab1c6b19e2b219cd01cb7850d1c31580
Fixed in: Release 2.1.2 - Github/Nacos
Scenario: find every Nacos configuration file whose content contains a given keyword
select
id, data_id , group_id , content , md5
-- , gmt_create , gmt_modified , src_user , src_ip , app_name , tenant_id , c_desc , c_use , effect , type, c_schema , encrypted_data_key
from config_info
where 1=1
and lower(content) like '%xxxxxxx%'
content : the configuration content of a configuration file in the Nacos config center
U Key source code analysis
com.alibaba.nacos.client.naming.remote.gprc.NamingGrpcClientProxy
  holds a field of type com.alibaba.nacos.common.remote.client.RpcClient
--> com.alibaba.nacos.common.remote.client.RpcClient
  holds a field of type com.alibaba.nacos.common.remote.client.Connection
--> com.alibaba.nacos.common.remote.client.Connection
  holds a field of type com.alibaba.nacos.common.remote.client.RpcClient.ServerInfo
--> com.alibaba.nacos.common.remote.client.RpcClient.ServerInfo
    protected String serverIp;
    protected int serverPort;
com.alibaba.nacos.client.naming.core.ServerListManager
com.alibaba.nacos.common.remote.client.RpcClient
com.alibaba.nacos.common.remote.client.RpcClient#resolveServerInfo
com.alibaba.nacos.common.remote.client.RpcClient#request(com.alibaba.nacos.api.remote.request.Request, long)
com.alibaba.cloud.nacos.client.NacosPropertySourceBuilder : loads the remote configuration data
Key code that decides whether a Spring Cloud application has actually obtained the configuration data from the Nacos config center:
com.alibaba.cloud.nacos.client.NacosPropertySourceBuilder#loadNacosData
com.alibaba.nacos.api.config.ConfigService#getConfig
Key configuration file when Spring Cloud integrates the registry and the config center: bootstrap.yml
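To check outside of Spring whether a given data id is actually readable from the config center, the raw ConfigService API can be called directly; a minimal sketch, with the server address, namespace and data id being illustrative assumptions:

import java.util.Properties;

import com.alibaba.nacos.api.NacosFactory;
import com.alibaba.nacos.api.PropertyKeyConst;
import com.alibaba.nacos.api.config.ConfigService;
import com.alibaba.nacos.api.exception.NacosException;

public class GetConfigExample {

    public static void main(String[] args) throws NacosException {
        Properties properties = new Properties();
        properties.put(PropertyKeyConst.SERVER_ADDR, "http://127.0.0.1:8848"); // hypothetical address
        properties.put(PropertyKeyConst.NAMESPACE, "xxxx_office");             // hypothetical namespace

        ConfigService configService = NacosFactory.createConfigService(properties);
        // getConfig(dataId, group, timeoutMs) returns the raw configuration content, or null if it does not exist.
        String content = configService.getConfig("application-xx-service.yml", "DEFAULT_GROUP", 3000);
        System.out.println(content);
    }
}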
X References
Nacos
https://nacos.io/zh-cn/docs/open-api.html
https://nacos.io/docs/next/manual/admin/system-configurations/
https://github.com/alibaba/spring-cloud-alibaba/wiki/Nacos-config
nacos2.x默认端口为8848、9848、9849,客户端连接时只能配置管理端访问端口8848,我想要配置其他两个端口,该怎么做 - nacos.io 【推荐】
Nacos注册失败:Client not connected,current status:STARTING - CSDN
[Java SE] 彻底搞懂Java程序的三大参数配置途径:系统变量与JVM参数(VM Option)/环境变量/启动程序参数args - 博客园/千千寰宇