21/08/23 10:20:14 INFO NewConsumer: subscribe:bc_test
Exception in thread "main" java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:49)
    at org.springframework.boot.loader.Launcher.launch(Launcher.java:108)
    at org.springframework.boot.loader.Launcher.launch(Launcher.java:58)
    at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:88)
Caused by: org.apache.kafka.common.errors.SaslAuthenticationException: An error: (java.security.PrivilegedActionException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Server not found in Kerberos database (7) - LOOKING_UP_SERVER)]) occurred when evaluating SASL token received from the Kafka Broker. Kafka Client will go to AUTHENTICATION_FAILED state.
Caused by: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Server not found in Kerberos database (7) - LOOKING_UP_SERVER)]
    at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
    at org.apache.kafka.common.security.authenticator.SaslClientAuthenticator$2.run(SaslClientAuthenticator.java:361)
    at org.apache.kafka.common.security.authenticator.SaslClientAuthenticator$2.run(SaslClientAuthenticator.java:359)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.createSaslToken(SaslClientAuthenticator.java:359)
    at org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.sendSaslClientToken(SaslClientAuthenticator.java:269)
    at org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.authenticate(SaslClientAuthenticator.java:206)
    at org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:81)
    at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:486)
    at org.apache.kafka.common.network.Selector.poll(Selector.java:424)
    at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:460)
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:261)
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:233)
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:209)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:219)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:205)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:279)
    at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1149)
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1115)
    at receive.NewConsumer.doWork(NewConsumer.java:102)
    at receive.NewConsumer.main(NewConsumer.java:127)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:49)
    at org.springframework.boot.loader.Launcher.launch(Launcher.java:108)
    at org.springframework.boot.loader.Launcher.launch(Launcher.java:58)
    at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:88)
Caused by: GSSException: No valid credentials provided (Mechanism level: Server not found in Kerberos database (7) - LOOKING_UP_SERVER)
    at sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:770)
    at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:248)
    at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
    at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:192)
    ... 29 more
Caused by: KrbException: Server not found in Kerberos database (7) - LOOKING_UP_SERVER
    at sun.security.krb5.KrbTgsRep.<init>(KrbTgsRep.java:73)
    at sun.security.krb5.KrbTgsReq.getReply(KrbTgsReq.java:251)
    at sun.security.krb5.KrbTgsReq.sendAndGetCreds(KrbTgsReq.java:262)
    at sun.security.krb5.internal.CredentialsUtil.serviceCreds(CredentialsUtil.java:308)
    at sun.security.krb5.internal.CredentialsUtil.acquireServiceCreds(CredentialsUtil.java:126)
    at sun.security.krb5.Credentials.acquireServiceCreds(Credentials.java:458)
    at sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:693)
    ... 32 more
Caused by: KrbException: Identifier doesn't match expected value (906)
    at sun.security.krb5.internal.KDCRep.init(KDCRep.java:140)
    at sun.security.krb5.internal.TGSRep.init(TGSRep.java:65)
    at sun.security.krb5.internal.TGSRep.<init>(TGSRep.java:60)
    at sun.security.krb5.KrbTgsRep.<init>(KrbTgsRep.java:55)
    ... 38 more

This error came up when connecting to the Huawei big data platform. Working by elimination, I found that the Kerberos login itself succeeded — the log shows "Will use keytab" and "Commit Succeeded" — and the exception was only thrown when the consumer was created. I tried every fix suggested online: configuring hosts, checking the generated jaas.conf, diffing the krb5 file, and so on.
What finally worked was replacing the Kafka dependencies with Huawei's three JARs (kafka_2.11-1.1.0.jar, kafka-clients-1.1.0.jar, zookeeper-3.5.1.jar).

Installing a local JAR with Maven

The Maven command for installing a JAR file into the local repository is:

mvn install:install-file
  -Dfile=<path to the JAR>
  -DgroupId=<groupId for the pom>
  -DartifactId=<artifactId for the pom>
  -Dversion=<version for the pom>
  -Dpackaging=jar

mvn install:install-file -Dfile=./kafka_2.11-1.1.0.jar -DgroupId=org.apache.kafka -DartifactId=kafka_2.11 -Dversion=1.1.0-hw -Dpackaging=jar
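The other two Huawei JARs need the same treatment. The coordinates below for kafka-clients and zookeeper are my own choice, not from the original post — any groupId/artifactId works as long as the pom references match; the "-hw" version suffix keeps them from shadowing the open-source artifacts:

```shell
# Assumed coordinates: install the remaining two Huawei JARs locally.
mvn install:install-file -Dfile=./kafka-clients-1.1.0.jar \
    -DgroupId=org.apache.kafka -DartifactId=kafka-clients \
    -Dversion=1.1.0-hw -Dpackaging=jar
mvn install:install-file -Dfile=./zookeeper-3.5.1.jar \
    -DgroupId=org.apache.zookeeper -DartifactId=zookeeper \
    -Dversion=3.5.1-hw -Dpackaging=jar
```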

Referencing it in the pom

        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka_2.11</artifactId>
            <version>1.1.0-hw</version>
        </dependency>
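If the other two JARs were installed as sketched above, the pom references them the same way. These coordinates mirror my assumed install commands, not anything stated in the original post:

```xml
        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka-clients</artifactId>
            <version>1.1.0-hw</version>
        </dependency>
        <dependency>
            <groupId>org.apache.zookeeper</groupId>
            <artifactId>zookeeper</artifactId>
            <version>3.5.1-hw</version>
        </dependency>
```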
The raw log just before the exception confirms that the Kerberos login itself succeeded:

principal is kafka_test@HADOOP.COM
Will use keytab
Commit Succeeded
21/08/23 10:20:14 INFO NewConsumer: subscribe:bc_test

Things to check:
2. Is the configuration file the latest one from the cluster client?
3. Is the ZooKeeper dependency Huawei's build, not the open-source one?
4. Is the zookeeper.server.principal parameter set to zookeeper/hadoop.hadoop.com?
Checking in order, items 1 and 2 were fine.
Applying the change from item 4 in code without also fixing item 3 left the error unchanged.
Finally I copied the three JARs out of the /opt/client/Kafka/kafka/libs/ directory on the Huawei client (I don't know which of them actually contains the modification).
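For item 4, the principal can be overridden in code before any consumer is constructed. A minimal sketch — the class name is mine, and Huawei's sample code may wrap this in a helper, so treat the exact shape as an assumption:

```java
// Sketch of checklist item 4: point the client at the FusionInsight
// ZooKeeper service principal before the KafkaConsumer is created.
public class ZkPrincipalFix {
    public static void main(String[] args) {
        // Open-source clients resolve a different default principal;
        // the Huawei cluster expects zookeeper/hadoop.hadoop.com.
        System.setProperty("zookeeper.server.principal",
                "zookeeper/hadoop.hadoop.com");
        System.out.println(System.getProperty("zookeeper.server.principal"));
    }
}
```

On its own this was not enough in my case (see item 3): with the open-source ZooKeeper JAR still on the classpath, the error persisted.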
				