πŸ§ͺ Kafka Consumer Performance Test β€” High Throughput Configuration

Configure Auth - kafka_jaas.conf

KafkaClient {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="$ConnectionString"
  password="Endpoint=sb://demo-kafka.servicebus.windows.net/;SharedAccessKeyName=admin;SharedAccessKey=test=;EntityPath=adttohis";
};

Export Credentials and Run the Test

export KAFKA_OPTS="-Djava.security.auth.login.config=/Users/deepaknishad/jmeter-kafka/kafka_jaas.conf"
/opt/homebrew/Cellar/kafka/4.0.0/libexec/bin/kafka-consumer-perf-test.sh \
  --bootstrap-server demo-kafka.servicebus.windows.net:9093 \
  --topic adttohis \
  --messages 10000 \
  --group testcg \
  --from-latest \
  --timeout 130000 \
  --print-metrics \
  --show-detailed-stats \
  --consumer.config consumer.properties
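
When the run completes, the tool reports throughput columns such as data.consumed.in.MB, MB.sec, data.consumed.in.nMsg, and nMsg.sec. As a rough sketch of what those figures mean, here is the arithmetic behind them (the message size and elapsed time below are illustrative assumptions, not output from this test):

```python
# Hypothetical numbers for illustration only.
messages = 10_000      # matches --messages above
avg_msg_bytes = 1_024  # assumed average message size
elapsed_s = 4.2        # assumed wall-clock duration of the run

# nMsg.sec: messages consumed per second
msg_per_sec = messages / elapsed_s
# MB.sec: megabytes consumed per second
mb_per_sec = messages * avg_msg_bytes / (1024 * 1024) / elapsed_s

print(f"{msg_per_sec:.1f} msg/sec, {mb_per_sec:.2f} MB/sec")
```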

🧡 Breakdown of Each Option

| Option | Description | Why It's Important for High Throughput |
|---|---|---|
| `--bootstrap-server demo-kafka.servicebus.windows.net:9093` | Kafka broker (Azure Event Hubs endpoint) | Required to connect to Kafka |
| `--topic adttohis` | The topic from which messages are consumed | Determines the source of the data load |
| `--messages 10000` | Number of messages to consume | A larger count gives a better read on sustained throughput |
| `--group testcg` | Consumer group ID | Isolates offsets for this test; avoids offset conflicts |
| `--from-latest` | Start from the latest offset | Measures only new messages, not backlog |
| `--timeout 130000` | Max time (ms) to wait between fetched batches | Prevents an indefinite wait when throughput is low |
| `--print-metrics` | Print consumer metrics after the test | Shows throughput, records/sec, fetch latency |
| `--show-detailed-stats` | Report statistics for each reporting interval during the run | Helps understand fetch/request patterns over time |
| `--consumer.config consumer.properties` | External config file for advanced tuning | Enables the high-performance tuning below |

βš™οΈ consumer.properties Explained (High Throughput Focus)

# Connection
bootstrap.servers=demo-kafka.servicebus.windows.net:9093
security.protocol=SASL_SSL
sasl.mechanism=PLAIN

πŸ” Security & Connection to Azure Event Hubs


πŸš€ Performance Optimization

#fetch.min.bytes=65536
fetch.max.wait.ms=550
#max.partition.fetch.bytes=65536
max.poll.records=3000
session.timeout.ms=10000
heartbeat.interval.ms=3000

| Property | What It Does | High Throughput Role |
|---|---|---|
| `fetch.min.bytes=65536` (commented) | Broker waits for at least 64 KB of data before responding | Improves batching efficiency |
| `fetch.max.wait.ms=550` | Max time the broker waits before sending a fetch response | Balances latency vs. batch size |
| `max.partition.fetch.bytes=65536` (commented) | Max data per partition per fetch | Should be increased for high-volume partitions |
| `max.poll.records=3000` | Max records returned per poll | Larger polls mean fewer requests, better throughput |
| `session.timeout.ms=10000` | Time before the broker marks the consumer as dead | Avoids unnecessary rebalances |
| `heartbeat.interval.ms=3000` | Interval between heartbeats | Should be at most 1/3 of `session.timeout.ms` |
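
The heartbeat/session relationship is easy to get wrong when tuning these values independently. A minimal sanity check over the settings above (the property values are copied from this file; the check itself is a sketch, not part of any Kafka tool):

```python
# Values taken from the Performance Optimization block above.
props = {
    "session.timeout.ms": 10000,
    "heartbeat.interval.ms": 3000,
    "max.poll.records": 3000,
}

# Kafka's recommendation: heartbeats at most 1/3 of the session timeout,
# so several heartbeats can be missed before a rebalance is triggered.
assert props["heartbeat.interval.ms"] * 3 <= props["session.timeout.ms"], \
    "heartbeat.interval.ms should be at most 1/3 of session.timeout.ms"

print("heartbeat/session ratio:", props["heartbeat.interval.ms"] / props["session.timeout.ms"])
```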

⏱️ Timeouts

request.timeout.ms=30000
metadata.max.age.ms=30000

| Property | Purpose | Why It Matters |
|---|---|---|
| `request.timeout.ms` | Max time to wait for broker responses | Prevents long stalls |
| `metadata.max.age.ms` | How often metadata is refreshed | Ensures quick adaptation to cluster changes |

πŸ” Retry Behavior

reconnect.backoff.ms=100
reconnect.backoff.max.ms=1000
retry.backoff.ms=100

| Property | Purpose | Tuning Insight |
|---|---|---|
| `reconnect.backoff.ms` | Time before a reconnect attempt | Keeps retry loops under control |
| `reconnect.backoff.max.ms` | Max wait between reconnects | Prevents overloading the broker |
| `retry.backoff.ms` | Time before retrying failed requests | Useful in cloud environments where transient errors are common |
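
Kafka clients grow the reconnect delay exponentially from `reconnect.backoff.ms` for each consecutive failure, capped at `reconnect.backoff.max.ms` (the real client also adds random jitter, which is omitted here). A sketch of that growth with the values above:

```python
# Backoff values from the Retry block above; jitter omitted for clarity.
base_ms = 100   # reconnect.backoff.ms
max_ms = 1000   # reconnect.backoff.max.ms

def backoff(attempt: int) -> int:
    # Doubles on each consecutive connection failure, capped at the maximum.
    return min(base_ms * 2 ** attempt, max_ms)

print([backoff(n) for n in range(6)])  # delays for attempts 0..5, in ms
```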

Final consumer.properties

# Connection
bootstrap.servers=demo-kafka.servicebus.windows.net:9093
security.protocol=SASL_SSL
sasl.mechanism=PLAIN

# Performance Optimization
#fetch.min.bytes=65536
fetch.max.wait.ms=550
#max.partition.fetch.bytes=65536
max.poll.records=3000
session.timeout.ms=10000
heartbeat.interval.ms=3000

# Timeouts
request.timeout.ms=30000
metadata.max.age.ms=30000

# Retry
reconnect.backoff.ms=100
reconnect.backoff.max.ms=1000
retry.backoff.ms=100
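
When wiring this file into other tooling, it helps to confirm which knobs are actually active versus commented out. A minimal, hypothetical `.properties` parser (PROPS is an inline excerpt of the file above, not a file read from disk):

```python
# Excerpt of the final consumer.properties above.
PROPS = """\
bootstrap.servers=demo-kafka.servicebus.windows.net:9093
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
#fetch.min.bytes=65536
fetch.max.wait.ms=550
max.poll.records=3000
"""

def parse_properties(text: str) -> dict:
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and commented-out tuning knobs
        key, _, value = line.partition("=")
        props[key.strip()] = value.strip()
    return props

cfg = parse_properties(PROPS)
print(cfg["max.poll.records"])  # prints 3000
```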