Single-Node Kafka Deployment with Kerberos Enabled

Install Kerberos
Install the Kerberos packages on the server node:
yum install -y krb5-server krb5-workstation krb5-libs
Edit the configuration file krb5.conf
(The following must be configured on both the server and the clients. You can configure it on the server first and then copy it to the clients with scp; see the example after the file.)
vim /etc/krb5.conf
# Configuration snippets may be placed in this directory as well
includedir /etc/krb5.conf.d/
[logging]
default = FILE:/var/log/krb5libs.log
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmind.log
[libdefaults]
dns_lookup_realm = false
ticket_lifetime = 24h
renew_lifetime = 7d
forwardable = true
rdns = false
pkinit_anchors = FILE:/etc/pki/tls/certs/ca-bundle.crt
default_realm = HADOOP.COM
[realms]
 HADOOP.COM = {
  kdc = hadoop01
  admin_server = hadoop01
 }
[domain_realm]
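To copy the finished file to a client node, a minimal example (assuming a hypothetical client host named hadoop02):
scp /etc/krb5.conf root@hadoop02:/etc/krb5.conf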
Edit the KDC server configuration file kdc.conf
vim /var/kerberos/krb5kdc/kdc.conf
[kdcdefaults]
kdc_ports = 88
kdc_tcp_ports = 88
[realms]
 HADOOP.COM = {
  #master_key_type = aes256-cts
  acl_file = /var/kerberos/krb5kdc/kadm5.acl
  dict_file = /usr/share/dict/words
  admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
  supported_enctypes = aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal camellia256-cts:normal camellia128-cts:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
 }
Configuration stage
Create the Kerberos database
[root@hdp01 ~]# kdb5_util create -s -r HADOOP.COM
Loading random data
Initializing database '/var/kerberos/krb5kdc/principal' for realm 'HADOOP.COM',
master key name 'K/M@HADOOP.COM'
You will be prompted for the database Master Password.
It is important that you NOT FORGET this password.
Enter KDC database master key:
Re-enter KDC database master key to verify:
(password used here: 123456)
[root@hdp01 ~]#
Create the admin principal
[root@hdp01 ~]# kadmin.local -q "addprinc admin/admin"
Authenticating as principal root/admin@HADOOP.COM with password.
WARNING: no policy specified for admin/admin@HADOOP.COM; defaulting to no policy
Enter password for principal "admin/admin@HADOOP.COM":
Re-enter password for principal "admin/admin@HADOOP.COM":
Principal "admin/admin@HADOOP.COM" created.
(password used here: 123456)
[root@hdp01 ~]#
Grant ACL permissions to the admin account
[root@hdp01 ~]# cat /var/kerberos/krb5kdc/kadm5.acl
*/admin@HADOOP.COM      *
Start the services and enable them at boot
systemctl start krb5kdc
systemctl start kadmin
systemctl enable krb5kdc
systemctl enable kadmin
Test the connection from a client
[root@hdp01 ~]# kadmin -p admin/admin
Authenticating as principal admin/admin with password.
Password for admin/admin@HADOOP.COM:
kadmin:  listprincs
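You can also confirm that ticket acquisition works with kinit and klist, for example:
kinit admin/admin
klist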
First, download Kafka:
wget https://mirrors.tuna.tsinghua.edu.cn/apache/kafka/3.4.1/kafka_2.12-3.4.1.tgz --no-check-certificate
Extract and rename:
tar -xvf kafka_2.12-3.4.1.tgz
mv kafka_2.12-3.4.1 kafka
Edit the Kafka configuration
vim kafka/config/server.properties
broker.id=0
listeners=PLAINTEXT://192.168.117.135:9092
advertised.listeners=PLAINTEXT://192.168.117.135:9092
log.dirs=/opt/kafka/kafka-logs
zookeeper.connect=192.168.117.135:2181
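Since zookeeper.connect points at port 2181, a ZooKeeper instance must already be running; if you use the one bundled with Kafka, one way to start it is:
/opt/kafka/bin/zookeeper-server-start.sh -daemon /opt/kafka/config/zookeeper.properties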

Start Kafka
./kafka-server-start.sh ../config/server.properties &
Commands:
List all topics on the current broker:
/opt/kafka/bin/kafka-topics.sh --bootstrap-server 192.168.117.135:9092 --list
Create the test topic:
/opt/kafka/bin/kafka-topics.sh --bootstrap-server 192.168.117.135:9092 --create --partitions 1 --replication-factor 1 --topic test
Option notes:
--topic: the topic name
--replication-factor: the number of replicas
--partitions: the number of partitions
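To inspect the topic just created, you can describe it, for example:
/opt/kafka/bin/kafka-topics.sh --bootstrap-server 192.168.117.135:9092 --describe --topic test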
Produce messages from the command line:
/opt/kafka/bin/kafka-console-producer.sh --bootstrap-server 192.168.117.135:9092 --topic test
Consume the data in the test topic:
/opt/kafka/bin/kafka-console-consumer.sh --bootstrap-server 192.168.117.135:9092 --from-beginning --topic test
Enable Kerberos for Kafka
First, create the principals and generate their keytabs
kadmin:  addprinc -randkey kafka/hadoop01@HADOOP.COM
kadmin:  addprinc -randkey client/hadoop01@HADOOP.COM
kadmin.local -q "xst -k /tmp/kafka.keytab -norandkey kafka/hadoop01@HADOOP.COM"
kadmin.local -q "xst -k /tmp/client_kafka.keytab -norandkey client/hadoop01@HADOOP.COM"
Note:
When running addprinc, if neither -randkey nor -nokey is specified, you will be prompted to set a password.
When running the xst export, omitting -norandkey causes the principal's password to be randomly reset.
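The JAAS files below reference the keytabs under /opt/kafka/config/, so copy (and, for the client, rename) the exported files there and verify them; a minimal sketch, assuming that directory layout:
cp /tmp/kafka.keytab /opt/kafka/config/kafka.keytab
cp /tmp/client_kafka.keytab /opt/kafka/config/client.keytab
klist -kt /opt/kafka/config/kafka.keytab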
Configuration
Edit kafka/config/server.properties
# Add the following
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=GSSAPI
sasl.enabled.mechanisms=GSSAPI
sasl.kerberos.service.name=kafka
super.users=User:kafka
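Note that the listeners configured earlier use PLAINTEXT; for the broker to accept Kerberos (SASL) connections, the listener protocol generally needs to be SASL_PLAINTEXT as well, roughly as follows (assuming hadoop01 resolves to 192.168.117.135):
listeners=SASL_PLAINTEXT://192.168.117.135:9092
advertised.listeners=SASL_PLAINTEXT://192.168.117.135:9092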
Create a new file kafka/config/kafka_server_jaas.conf
KafkaServer {
   com.sun.security.auth.module.Krb5LoginModule required
   useKeyTab=true
   keyTab="/opt/kafka/config/kafka.keytab"
   storeKey=true
   useTicketCache=false
   principal="kafka/hadoop01@HADOOP.COM";
};
Edit kafka/bin/kafka-server-start.sh and add:
export KAFKA_OPTS="-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/opt/kafka/config/kafka_server_jaas.conf"
Kafka client configuration
Create a new file kafka/config/client.properties
security.protocol=SASL_PLAINTEXT
sasl.mechanism=GSSAPI
sasl.kerberos.service.name=kafka
Create a new file kafka/config/kafka_client_jaas.conf
KafkaClient {
   com.sun.security.auth.module.Krb5LoginModule required
   useKeyTab=true
   keyTab="/opt/kafka/config/client.keytab"
   storeKey=true
   useTicketCache=false
   principal="client/hadoop01@HADOOP.COM";
};
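Before wiring this into the client scripts, you can check that the client principal and its keytab work, for example:
kinit -kt /opt/kafka/config/client.keytab client/hadoop01@HADOOP.COM
klist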
Edit the kafka-topics.sh, kafka-console-consumer.sh, and kafka-console-producer.sh scripts.
Add the following on the second-to-last line (note that it points java.security.krb5.conf at /opt/kafka/config/krb5.conf, so either copy /etc/krb5.conf there or change the path to /etc/krb5.conf); a placement sketch follows the export line.
export KAFKA_OPTS="-Djava.security.krb5.conf=/opt/kafka/config/krb5.conf -Djava.security.auth.login.config=/opt/kafka/config/kafka_client_jaas.conf"
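For reference, the tail of kafka-console-producer.sh would then look roughly like this (a sketch; the last line may differ slightly between Kafka versions):
export KAFKA_OPTS="-Djava.security.krb5.conf=/opt/kafka/config/krb5.conf -Djava.security.auth.login.config=/opt/kafka/config/kafka_client_jaas.conf"
exec $(dirname $0)/kafka-run-class.sh kafka.tools.ConsoleProducer "$@"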
Start and test
/opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties
List all topics
/opt/kafka/bin/kafka-topics.sh --list --bootstrap-server 192.168.117.135:9092 --command-config /opt/kafka/config/client.properties
Create a test topic
/opt/kafka/bin/kafka-topics.sh --create --topic topicsasl --bootstrap-server 192.168.117.135:9092 --partitions 1 --replication-factor 1 --command-config /opt/kafka/config/client.properties
Produce messages
/opt/kafka/bin/kafka-console-producer.sh --topic topicsasl --bootstrap-server 192.168.117.135:9092  --producer.config /opt/kafka/config/client.properties
Consume messages
/opt/kafka/bin/kafka-console-consumer.sh --topic topicsasl --from-beginning --bootstrap-server 192.168.117.135:9092 --consumer.config /opt/kafka/config/client.properties 

