

Kafka SASL/SCRAM + ACL: Dynamic User Creation and Permission Control

Published: 2023/12/14

Table of Contents

  • SASL/SCRAM + ACL: Dynamic User Creation and Permission Control
    • Authentication with SASL/SCRAM
      • 1. Creating SCRAM Credentials
        • Creating the inter-broker (super) user
        • Creating the client user fanboshi
        • Viewing SCRAM credentials
        • Deleting SCRAM credentials
      • 2. Configuring the Kafka Brokers
      • Client configuration
        • kafka-console-producer
        • kafka-console-consumer
    • ACL configuration
      • Granting fanboshi Write on the test topic, restricted to the 192.168.2.* subnet
      • Granting fanboshi Read on the test topic, restricted to the 192.168.2.* subnet
      • Granting fanboshi (fanboshi-group consumer group) Read on the test topic, restricted to the 192.168.2.* subnet
      • Viewing the ACLs
      • Deleting an ACL
      • Testing again
    • How to see which "users" we have created
    • References

SASL/SCRAM + ACL: Dynamic User Creation and Permission Control

I spent a while studying Kafka's access control. I had been focused on SASL_PLAINTEXT, only to discover that it cannot create users dynamically. Unwilling to give up, I kept searching and found someone asking exactly this question:
https://stackoverflow.com/questions/54147460/kafka-adding-sasl-users-dynamically-without-cluster-restart
That led me to SCRAM, and I wrote up this complete walkthrough.

This document uses a self-managed ZooKeeper; ZooKeeper itself needs no special configuration.

Authentication with SASL/SCRAM

First start Kafka without any authentication configured.

1. Creating SCRAM Credentials

Kafka's SCRAM implementation uses ZooKeeper as the credential store. Credentials are created in ZooKeeper with kafka-configs.sh. For every enabled SCRAM mechanism, a credential must be created by adding a config entry named after that mechanism. The credential for inter-broker communication must be created before the Kafka brokers start; client credentials can be created and updated dynamically, and new connections authenticate against the updated credentials.
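What actually lands in ZooKeeper is not the password but a salted, iterated hash pair, as defined by RFC 5802 (SCRAM). As a rough illustration (this is not Kafka's own code), the stored_key and server_key fields you will see later in the --describe output can be derived like this in Python:

```python
import base64
import hashlib
import hmac

def scram_credentials(password: str, salt: bytes, iterations: int, algo: str = "sha256"):
    """Derive SCRAM StoredKey/ServerKey per RFC 5802 (illustrative sketch)."""
    # SaltedPassword := Hi(password, salt, iterations)  -- PBKDF2 with HMAC
    salted = hashlib.pbkdf2_hmac(algo, password.encode(), salt, iterations)
    # ClientKey := HMAC(SaltedPassword, "Client Key"); StoredKey := H(ClientKey)
    client_key = hmac.new(salted, b"Client Key", algo).digest()
    stored_key = hashlib.new(algo, client_key).digest()
    # ServerKey := HMAC(SaltedPassword, "Server Key")
    server_key = hmac.new(salted, b"Server Key", algo).digest()
    return base64.b64encode(stored_key).decode(), base64.b64encode(server_key).decode()

# The salt here is made up for the example; Kafka generates a random one.
stored, server = scram_credentials("fanboshi", b"somesalt", 8192)
```

Because only these derived values are stored, deleting and re-adding a credential is cheap, which is what makes the dynamic user management below possible.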

Creating the inter-broker (super) user

bin/kafka-configs.sh --zookeeper 192.168.2.229:2182 --alter --add-config 'SCRAM-SHA-256=[password=admin-secret],SCRAM-SHA-512=[password=admin-secret]' --entity-type users --entity-name admin

Creating the client user fanboshi

bin/kafka-configs.sh --zookeeper 192.168.2.229:2182 --alter --add-config 'SCRAM-SHA-256=[iterations=8192,password=fanboshi],SCRAM-SHA-512=[password=fanboshi]' --entity-type users --entity-name fanboshi

Viewing SCRAM credentials

[root@node002229 kafka]# bin/kafka-configs.sh --zookeeper localhost:2182 --describe --entity-type users --entity-name fanboshi
Configs for user-principal 'fanboshi' are SCRAM-SHA-512=salt=MWwwdWJqcjBncmUwdzY1Mzdoa2NwNXppd3A=,stored_key=mGCJy5k3LrE2gs6Dp4ALRhgy37l1WYPUIdoOncCF+B3Ti3wL2sQNmzg8oEz3tUs9DFsclFCygjbysb0S0BU9bA==,server_key=iTyX0U0Jt02dkddUm6QrVwNf3lJk72dBNs9EDHTqe8kLlNGIp9ypzRkcgkc+WVMd1bkAF3cg8vk9Q1LrJ/2i/A==,iterations=4096,SCRAM-SHA-256=salt=ZDg5MHVlYW40dW9jbXJ6MndvZDVlazd3ag==,stored_key=cgX1ldpXnDL1+TlLHJ3IHn7tAQS/7pQ7BVZUtECpQ3A=,server_key=i7Mcnb5sPUqfIFs6qKWWHZ2ortoKiRc7oabHOV5dawI=,iterations=8192

Deleting SCRAM credentials

Shown here for reference only; we will not run it now.

[root@node002229 kafka]# bin/kafka-configs.sh --zookeeper localhost:2182 --alter --delete-config 'SCRAM-SHA-512' --entity-type users --entity-name fanboshi

2. Configuring the Kafka Brokers

  • Add a JAAS file like the following to each Kafka broker's config directory; we will call it kafka_server_jaas.conf:
  • [root@node002229 config]# cat kafka_server_jaas.conf
    KafkaServer {
        org.apache.kafka.common.security.scram.ScramLoginModule required
        username="admin"
        password="admin-secret";
    };

    Be careful not to omit the trailing semicolon.

  • Pass the JAAS config file location to each Kafka broker as a JVM parameter:
    Edit /usr/local/kafka/bin/kafka-server-start.sh, comment out the line exec $base_dir/kafka-run-class.sh $EXTRA_ARGS kafka.Kafka "$@", and add the line below:
  • #exec $base_dir/kafka-run-class.sh $EXTRA_ARGS kafka.Kafka "$@"
    exec $base_dir/kafka-run-class.sh $EXTRA_ARGS -Djava.security.auth.login.config=$base_dir/../config/kafka_server_jaas.conf kafka.Kafka "$@"

    Alternatively, leave kafka-server-start.sh untouched and add the following to ~/.bashrc:

    export KAFKA_PLAIN_PARAMS="-Djava.security.auth.login.config=/usr/local/kafka/config/kafka_server_jaas.conf"
    export KAFKA_OPTS="$KAFKA_PLAIN_PARAMS $KAFKA_OPTS"
  • Configure the SASL port and SASL mechanisms in server.properties as described there. For example:
  • # Authentication
    listeners=SASL_PLAINTEXT://node002229:9092
    security.inter.broker.protocol=SASL_PLAINTEXT
    sasl.mechanism.inter.broker.protocol=SCRAM-SHA-256
    sasl.enabled.mechanisms=SCRAM-SHA-256
    # ACLs
    allow.everyone.if.no.acl.found=false
    super.users=User:admin
    authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer

    The official documentation shows

    listeners=SASL_SSL://host.name:port
    security.inter.broker.protocol=SASL_SSL

    SASL_SSL is not mandatory here; choose SSL or PLAINTEXT according to your needs. I chose PLAINTEXT (unencrypted transport) because it is simpler and performs somewhat better.

  • Restart ZooKeeper and Kafka
    Restart the ZooKeeper and Kafka services; every broker references kafka_server_jaas.conf when it connects.
    On all ZooKeeper nodes:
  • [root@node002229 zookeeper]# zkServer.sh stop /usr/local/zookeeper/bin/../conf/zoo.cfg
    ZooKeeper JMX enabled by default
    Using config: /usr/local/zookeeper/bin/../conf/zoo1.cfg
    Stopping zookeeper ... STOPPED
    [root@node002229 zookeeper]# zkServer.sh start /usr/local/zookeeper/bin/../conf/zoo.cfg
    ZooKeeper JMX enabled by default
    Using config: /usr/local/zookeeper/bin/../conf/zoo1.cfg

    On all Kafka brokers:

    cd /usr/local/kafka/; bin/kafka-server-stop.sh
    cd /usr/local/kafka/; bin/kafka-server-start.sh -daemon config/server.properties
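The JAAS stanzas above are easy to get wrong by hand (hence the warning about the trailing semicolon). Purely as an illustration, a small hypothetical Python helper (not part of Kafka) that renders a ScramLoginModule stanza so the quoting and closing `};` are always present:

```python
# Hypothetical helper: render a JAAS stanza for the ScramLoginModule.
# The template escapes literal braces with {{ }} so str.format works.
JAAS_TEMPLATE = """{section} {{
    org.apache.kafka.common.security.scram.ScramLoginModule required
    username="{username}"
    password="{password}";
}};
"""

def render_jaas(section: str, username: str, password: str) -> str:
    # Kafka looks up the "KafkaServer" section on brokers and
    # "KafkaClient" in client tools.
    if section not in ("KafkaServer", "KafkaClient"):
        raise ValueError("section must be KafkaServer or KafkaClient")
    return JAAS_TEMPLATE.format(section=section, username=username, password=password)

print(render_jaas("KafkaServer", "admin", "admin-secret"))
```

Writing the file from a template like this avoids the silent failure mode where a missing semicolon makes the broker reject the whole login context.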

    Client configuration

    First, test with kafka-console-producer and kafka-console-consumer.

    kafka-console-producer

  • Create the config/client-sasl.properties file:
  • [root@node002229 kafka]# vim config/client-sasl.properties
    security.protocol=SASL_PLAINTEXT
    sasl.mechanism=SCRAM-SHA-256
  • Create the config/kafka_client_jaas_admin.conf file:
  • [root@node002229 kafka]# vim config/kafka_client_jaas_admin.conf
    KafkaClient {
        org.apache.kafka.common.security.scram.ScramLoginModule required
        username="admin"
        password="admin-secret";
    };
  • Modify the kafka-console-producer.sh script. Here I copy it first and edit the copy:
  • cp bin/kafka-console-producer.sh bin/kafka-console-producer-admin.sh
    vim bin/kafka-console-producer-admin.sh
    #exec $(dirname $0)/kafka-run-class.sh kafka.tools.ConsoleProducer "$@"
    exec $(dirname $0)/kafka-run-class.sh -Djava.security.auth.login.config=$(dirname $0)/../config/kafka_client_jaas_admin.conf kafka.tools.ConsoleProducer "$@"
  • Create a test topic:
  • bin/kafka-topics.sh --create --zookeeper localhost:2182 --partitions 1 --replication-factor 1 --topic test
  • Test producing messages:
  • bin/kafka-console-producer-admin.sh --broker-list 192.168.2.229:9092 --topic test --producer.config config/client-sasl.properties
    >1
    >

    As you can see, the admin user can produce messages without any ACL configuration.
  • Test the fanboshi user
    In the same way, create bin/kafka-console-producer-fanboshi.sh, changing only kafka_client_jaas_admin.conf to kafka_client_jaas_fanboshi.conf:

    vim config/kafka_client_jaas_fanboshi.conf
    KafkaClient {
        org.apache.kafka.common.security.scram.ScramLoginModule required
        username="fanboshi"
        password="fanboshi";
    };
    cp bin/kafka-console-producer-admin.sh bin/kafka-console-producer-fanboshi.sh
    vi bin/kafka-console-producer-fanboshi.sh
    exec $(dirname $0)/kafka-run-class.sh -Djava.security.auth.login.config=$(dirname $0)/../config/kafka_client_jaas_fanboshi.conf kafka.tools.ConsoleProducer "$@"

    Produce a message:

    [root@node002229 kafka]# bin/kafka-console-producer-fanboshi.sh --broker-list 192.168.2.229:9092 --topic test --producer.config config/client-sasl.properties
    >1
    [2019-01-26 18:07:50,099] WARN [Producer clientId=console-producer] Error while fetching metadata with correlation id 1 : {test=TOPIC_AUTHORIZATION_FAILED} (org.apache.kafka.clients.NetworkClient)
    [2019-01-26 18:07:50,100] ERROR Error when sending message to topic test with key: null, value: 1 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
    org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to access topics: [test]

    It fails because the fanboshi user has no permissions yet.
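The console tools bundle the JAAS file and the properties file; a programmatic client takes the same settings directly. As a sketch, here is the equivalent configuration for the third-party kafka-python library (an assumption on my part; any client with SASL/SCRAM support exposes equivalent options):

```python
# The same settings as client-sasl.properties plus the JAAS credentials,
# expressed as kafka-python KafkaProducer keyword arguments.
producer_config = {
    "bootstrap_servers": "192.168.2.229:9092",
    "security_protocol": "SASL_PLAINTEXT",
    "sasl_mechanism": "SCRAM-SHA-256",
    "sasl_plain_username": "fanboshi",
    "sasl_plain_password": "fanboshi",
}

# Actually connecting requires a reachable broker, so this part is guarded:
if False:  # flip to True when a broker is available
    from kafka import KafkaProducer
    producer = KafkaProducer(**producer_config)
    producer.send("test", b"1")  # raises TopicAuthorizationException until the ACL below is granted
    producer.flush()
```

Until the ACLs in the next section are added, this client fails with the same TopicAuthorizationException as the console producer.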

    kafka-console-consumer

  • Create the config/consumer-fanboshi.properties file:
  • [root@node002229 kafka]# vim config/consumer-fanboshi.properties
    security.protocol=SASL_PLAINTEXT
    sasl.mechanism=SCRAM-SHA-256
    group.id=fanboshi-group
  • Create the bin/kafka-console-consumer-fanboshi.sh file:
  • cp bin/kafka-console-consumer.sh bin/kafka-console-consumer-fanboshi.sh
    vim bin/kafka-console-consumer-fanboshi.sh
    #exec $(dirname $0)/kafka-run-class.sh kafka.tools.ConsoleConsumer "$@"
    exec $(dirname $0)/kafka-run-class.sh -Djava.security.auth.login.config=$(dirname $0)/../config/kafka_client_jaas_fanboshi.conf kafka.tools.ConsoleConsumer "$@"
  • Test the consumer:
  • bin/kafka-console-consumer-fanboshi.sh --bootstrap-server 192.168.2.229:9092 --topic test --consumer.config config/consumer-fanboshi.properties --from-beginning

    This also fails with an authorization error; the output is omitted here.

    ACL configuration

    Grant fanboshi Write on the test topic, restricted to the 192.168.2.* subnet:

    bin/kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=localhost:2182 --add --allow-principal User:fanboshi --operation Write --topic test --allow-host 192.168.2.*

    Grant fanboshi Read on the test topic, restricted to the 192.168.2.* subnet:

    bin/kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=localhost:2182 --add --allow-principal User:fanboshi --operation Read --topic test --allow-host 192.168.2.*

    Grant fanboshi Read on the fanboshi-group consumer group, restricted to the 192.168.2.* subnet:

    bin/kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=localhost:2182 --add --allow-principal User:fanboshi --operation Read --group fanboshi-group --allow-host 192.168.2.*

    Viewing the ACLs

    [root@node002229 kafka]# bin/kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=localhost:2182 --list
    Current ACLs for resource `Group:LITERAL:fanboshi-group`:
        User:fanboshi has Allow permission for operations: Read from hosts: *
    Current ACLs for resource `Topic:LITERAL:test`:
        User:fanboshi has Allow permission for operations: Write from hosts: *
        User:fanboshi has Allow permission for operations: Read from hosts: *
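Note that a consumer needs two separate grants: Read on the topic and Read on its consumer group. A toy model of the allow-rules above (illustrative only, not Kafka's SimpleAclAuthorizer) makes the logic explicit:

```python
# Toy ACL model: each rule is (principal, operation, resource). With
# allow.everyone.if.no.acl.found=false, an action is permitted only if a
# matching allow rule exists. Host restrictions are omitted for brevity.
acls = {
    ("User:fanboshi", "Write", "Topic:test"),
    ("User:fanboshi", "Read", "Topic:test"),
    ("User:fanboshi", "Read", "Group:fanboshi-group"),
}

def allowed(principal: str, operation: str, resource: str) -> bool:
    return (principal, operation, resource) in acls

def can_consume(principal: str, topic: str, group: str) -> bool:
    # Consuming requires Read on the topic AND Read on the consumer group.
    return allowed(principal, "Read", f"Topic:{topic}") and \
           allowed(principal, "Read", f"Group:{group}")
```

So a consumer in a different group, or a principal with only the topic grant, is still rejected even though the topic itself is readable.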

    Deleting an ACL

    bin/kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=localhost:2182 --remove --allow-principal User:bob --operation Read --topic example10 --allow-host 192.168.1.158

    Testing again

    Producer:

    [root@node002229 kafka]# bin/kafka-console-producer-fanboshi.sh --broker-list 192.168.2.229:9092 --topic test --producer.config config/client-sasl.properties
    >1
    [2019-01-26 18:07:50,099] WARN [Producer clientId=console-producer] Error while fetching metadata with correlation id 1 : {test=TOPIC_AUTHORIZATION_FAILED} (org.apache.kafka.clients.NetworkClient)
    [2019-01-26 18:07:50,100] ERROR Error when sending message to topic test with key: null, value: 1 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
    org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to access topics: [test]
    >1

    Consumer:

    [root@node002229 kafka]# bin/kafka-console-consumer-fanboshi.sh --bootstrap-server 192.168.2.229:9092 --topic test --consumer.config config/consumer-fanboshi.properties --from-beginning
    1
    1

    Everything works now.

    How to see which "users" we have created

    It seems the only way is to look in ZooKeeper:

    zkCli.sh -server node002229:2182
    ls /config/users
    [admin, alice, fanboshi]

    Try deleting alice:

    [root@node002229 kafka]# bin/kafka-configs.sh --zookeeper localhost:2182 --describe --entity-type users --entity-name alice
    Configs for user-principal 'alice' are SCRAM-SHA-512=salt=MWt1OHRhZnd3cWZvZ2I4bXcwdTM0czIyaTQ=,stored_key=JYeud1Cx5Z2+FaJgJsZGbMcIi63B9XtA9Wyc+KEm2gXK8+2IxxAVvi1CfSjlkqeupfeIMFJ7/EUkOw+zqvYz6w==,server_key=O4NIgjleroia7puK01/ZZoagFeoxh+zHzckGXXooBsWTdx/7Shb0pMHniMu4IY2jb5orWB2t9K8MZkxCliJDsg==,iterations=4096,SCRAM-SHA-256=salt=MTJ3bXRod3EyN3FtZWdsNHk0NXoyeWdlNjE=,stored_key=chQX35reoBYtfg/U5HBtkzvBAk+gSCgskNzUiScOrUE=,server_key=rRTbUzAehwVMUDTMuoOMumGEuvc7wDecKcqK6yYlbWY=,iterations=8192
    [root@node002229 kafka]# bin/kafka-configs.sh --zookeeper localhost:2182 --alter --delete-config 'SCRAM-SHA-512' --entity-type users --entity-name alice
    Completed Updating config for entity: user-principal 'alice'.
    [root@node002229 kafka]# bin/kafka-configs.sh --zookeeper localhost:2182 --describe --entity-type users --entity-name alice
    Configs for user-principal 'alice' are SCRAM-SHA-256=salt=MTJ3bXRod3EyN3FtZWdsNHk0NXoyeWdlNjE=,stored_key=chQX35reoBYtfg/U5HBtkzvBAk+gSCgskNzUiScOrUE=,server_key=rRTbUzAehwVMUDTMuoOMumGEuvc7wDecKcqK6yYlbWY=,iterations=8192
    [root@node002229 kafka]# bin/kafka-configs.sh --zookeeper localhost:2182 --alter --delete-config 'SCRAM-SHA-256' --entity-type users --entity-name alice
    Completed Updating config for entity: user-principal 'alice'.
    [root@node002229 kafka]# bin/kafka-configs.sh --zookeeper localhost:2182 --describe --entity-type users --entity-name alice
    Configs for user-principal 'alice' are

    Check ZooKeeper again:

    [zk: node002229:2182(CONNECTED) 0] ls /config/users
    [admin, alice, fanboshi]

    The entry is still there. Does it have to be deleted from ZooKeeper by hand?
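If manual cleanup is acceptable, the leftover znode can be removed programmatically. A sketch using the third-party kazoo library (an assumption; zkCli.sh `delete /config/users/alice` would do the same interactively):

```python
# Sketch: list the SCRAM user znodes and delete the leftover one.
# Deleting the znode removes the "user" entry that
# kafka-configs.sh --delete-config leaves behind.
USERS_PATH = "/config/users"

def user_znode(username: str) -> str:
    """Build the znode path for a SCRAM user entry."""
    return f"{USERS_PATH}/{username}"

if False:  # flip to True when a ZooKeeper ensemble is reachable
    from kazoo.client import KazooClient
    zk = KazooClient(hosts="node002229:2182")
    zk.start()
    print(zk.get_children(USERS_PATH))   # e.g. ['admin', 'alice', 'fanboshi']
    zk.delete(user_znode("alice"))       # remove the leftover znode
    zk.stop()
```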

    References

    http://kafka.apache.org/documentation/#security_sasl_scram
    https://sharebigdata.wordpress.com/category/kafka/kafka-sasl-scram-with-w-o-ssl/
    https://developer.ibm.com/opentech/2017/05/31/kafka-acls-in-practice/
    https://developer.ibm.com/tutorials/kafka-authn-authz/
    Kafka并不難學! 入門、進階、商業實戰
