Kafka Java Examples

While experimenting with Kafka, I had the cluster set up and could send and receive messages with the console producer and consumer, but I could not get a Java client working no matter what I tried, so I searched the web for examples to test.

The examples in the official Kafka docs (e.g. http://kafka.apache.org/documentation.html#highlevelconsumerapi) are not complete; the code below is my filled-in version, which compiles and runs. Note that this first pair of examples targets the older 0.7-era API (ProducerData, zk.connect); sections III-V below use the 0.8 API.

Producer Code

import java.util.Properties;

import kafka.javaapi.producer.Producer;
import kafka.javaapi.producer.ProducerData;
import kafka.producer.ProducerConfig;

public class ProducerSample {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zk.connect", "127.0.0.1:2181");
        props.put("serializer.class", "kafka.serializer.StringEncoder");

        ProducerConfig config = new ProducerConfig(props);
        Producer<String, String> producer = new Producer<String, String>(config);
        ProducerData<String, String> data =
                new ProducerData<String, String>("test-topic", "test-message2");
        producer.send(data);
        producer.close();
    }
}

Consumer Code

import java.nio.ByteBuffer;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;
import kafka.message.Message;
import kafka.message.MessageAndMetadata;

public class ConsumerSample {

    public static void main(String[] args) {
        // specify some consumer properties
        Properties props = new Properties();
        props.put("zk.connect", "localhost:2181");
        props.put("zk.connectiontimeout.ms", "1000000");
        props.put("groupid", "test_group");

        // Create the connection to the cluster
        ConsumerConfig consumerConfig = new ConsumerConfig(props);
        ConsumerConnector consumerConnector =
                Consumer.createJavaConsumerConnector(consumerConfig);

        // create 4 streams for topic "test-topic", to allow 4 threads to consume
        HashMap<String, Integer> map = new HashMap<String, Integer>();
        map.put("test-topic", 4);
        Map<String, List<KafkaStream<Message>>> topicMessageStreams =
                consumerConnector.createMessageStreams(map);
        List<KafkaStream<Message>> streams = topicMessageStreams.get("test-topic");

        // create a pool of 4 threads, one per stream
        ExecutorService executor = Executors.newFixedThreadPool(4);

        // consume the messages in the threads
        for (final KafkaStream<Message> stream : streams) {
            executor.submit(new Runnable() {
                public void run() {
                    for (MessageAndMetadata msgAndMetadata : stream) {
                        // process message (msgAndMetadata.message())
                        System.out.println("topic: " + msgAndMetadata.topic());
                        Message message = (Message) msgAndMetadata.message();
                        ByteBuffer buffer = message.payload();
                        byte[] bytes = new byte[message.payloadSize()];
                        buffer.get(bytes);
                        String tmp = new String(bytes);
                        System.out.println("message content: " + tmp);
                    }
                }
            });
        }
    }
}

Start ZooKeeper and the Kafka server, then run the Producer and Consumer code in turn.

Running ProducerSample sends a single "test-message2" to test-topic.

Running ConsumerSample prints each received message, i.e. lines of the form:

topic: test-topic
message content: test-message2

Since I am not very familiar with Java multithreading, I made a small change to the official Consumer code, shown below:

import java.nio.ByteBuffer;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;
import kafka.message.Message;
import kafka.message.MessageAndMetadata;

public class ConsumerSample2 {

    public static void main(String[] args) {
        // specify some consumer properties
        Properties props = new Properties();
        props.put("zk.connect", "localhost:2181");
        props.put("zk.connectiontimeout.ms", "1000000");
        props.put("groupid", "test_group");

        // Create the connection to the cluster
        ConsumerConfig consumerConfig = new ConsumerConfig(props);
        ConsumerConnector consumerConnector =
                Consumer.createJavaConsumerConnector(consumerConfig);

        // a single stream, consumed on the main thread
        HashMap<String, Integer> map = new HashMap<String, Integer>();
        map.put("test-topic", 1);
        Map<String, List<KafkaStream<Message>>> topicMessageStreams =
                consumerConnector.createMessageStreams(map);
        List<KafkaStream<Message>> streams = topicMessageStreams.get("test-topic");

        for (KafkaStream<Message> stream : streams) {
            for (MessageAndMetadata msgAndMetadata : stream) {
                // process message (msgAndMetadata.message())
                System.out.println("topic: " + msgAndMetadata.topic());
                Message message = (Message) msgAndMetadata.message();
                ByteBuffer buffer = message.payload();
                byte[] bytes = new byte[message.payloadSize()];
                buffer.get(bytes);
                String tmp = new String(bytes);
                System.out.println("message content: " + tmp);
            }
        }
    }
}

I sent one more "test-message2" from the Producer side, and the Consumer received two messages.

Kafka is worth using in the right scenarios, for example as a distributed log-collection or system-monitoring service. Deploying it involves a ZooKeeper environment and a Kafka environment, plus some configuration work. The rest of this post walks through that setup.

We build the ZooKeeper ensemble from 3 instances and the Kafka cluster from 2 brokers.

Kafka is version 0.8; ZooKeeper is version 3.4.5.
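
For reference, a minimal sketch of what one instance's zoo.cfg might look like. The dataDir paths and the 2888/3888-style quorum ports here are illustrative assumptions; the client ports 2181-2183 match the zookeeper.connect strings used in the broker config below:

# zoo.cfg for instance 1 (hypothetical paths; the other two instances
# differ only in dataDir, clientPort, and the contents of the myid file)
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/tmp/zookeeper-1
clientPort=2181
server.1=127.0.0.1:2888:3888
server.2=127.0.0.1:2889:3889
server.3=127.0.0.1:2890:3890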

II. Building the Kafka Cluster

Because the broker configuration refers to ZooKeeper settings, we show the broker configuration first. We use 2 Kafka brokers, kafka-0 and kafka-1, to build the cluster.

1) kafka-0

Edit the configuration file in the config directory:

broker.id=0
port=9092
num.network.threads=2
num.io.threads=2
socket.send.buffer.bytes=1048576
socket.receive.buffer.bytes=1048576
socket.request.max.bytes=104857600
log.dir=./logs
num.partitions=2
log.flush.interval.messages=10000
log.flush.interval.ms=1000
log.retention.hours=168
#log.retention.bytes=1073741824
log.segment.bytes=536870912
## Replication: number of copies of each topic's partitions kept across
## the cluster, to improve fault tolerance (1 = no extra replica; set to 2
## to keep two copies)
default.replication.factor=1
log.cleanup.interval.mins=10
zookeeper.connect=127.0.0.1:2181,127.0.0.1:2182,127.0.0.1:2183
zookeeper.connection.timeout.ms=1000000

Because Kafka is written in Scala, running it first requires preparing the Scala build environment:

> cd kafka-0
> ./sbt update
> ./sbt package
> ./sbt assembly-package-dependency

The last command may fail with an exception; that can be ignored for now. Start the Kafka broker:

> JMX_PORT=9997 bin/kafka-server-start.sh config/server.properties &

Since the ZooKeeper ensemble is already running, there is no need to have Kafka launch ZooKeeper itself. If you deploy several Kafka brokers on one machine, give each one its own JMX_PORT.

2) kafka-1

broker.id=1
port=9093
## all other settings identical to kafka-0

Then run the same packaging commands as for kafka-0 and start this broker:

> JMX_PORT=9998 bin/kafka-server-start.sh config/server.properties &

You can check the "partition"/"replicas" distribution and liveness of each topic with:

> bin/kafka-list-topic.sh --zookeeper localhost:2181
topic: my-replicated-topic  partition: 0    leader: 2   replicas: 1,2,0 isr: 2
topic: test partition: 0    leader: 0   replicas: 0 isr: 0

With that, the environment is ready; on to the programming examples.

III. Project Setup

The project is built with Maven. The Kafka Java client is, frankly, painful to set up; expect plenty of dependency trouble. The pom.xml below is a recommended starting point; the versions of the individual dependencies must be mutually consistent. If the Kafka client version does not match the Kafka server version, you will see many errors such as "broker id not exists", because the client/server wire protocol changed between 0.7 and 0.8. (The 2.8.2 in the artifact name kafka_2.8.2 is the Scala version, not the Kafka version, which is 0.8.0.)

<dependencies>
    <dependency>
        <groupId>log4j</groupId>
        <artifactId>log4j</artifactId>
        <version>1.2.14</version>
    </dependency>
    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka_2.8.2</artifactId>
        <version>0.8.0</version>
        <exclusions>
            <exclusion>
                <groupId>log4j</groupId>
                <artifactId>log4j</artifactId>
            </exclusion>
        </exclusions>
    </dependency>
    <dependency>
        <groupId>org.scala-lang</groupId>
        <artifactId>scala-library</artifactId>
        <version>2.8.2</version>
    </dependency>
    <dependency>
        <groupId>com.yammer.metrics</groupId>
        <artifactId>metrics-core</artifactId>
        <version>2.2.0</version>
    </dependency>
    <dependency>
        <groupId>com.101tec</groupId>
        <artifactId>zkclient</artifactId>
        <version>0.3</version>
    </dependency>
</dependencies>

IV. Producer Code

1) producer.properties: place this file under the /resources directory

#partitioner.class=
## The broker list may be a subset of the cluster, since the producer only
## uses it to bootstrap metadata; any broker can serve metadata, but it is
## still recommended to list all brokers here.
metadata.broker.list=127.0.0.1:9092,127.0.0.1:9093
## synchronous send; async is recommended for throughput
producer.type=sync
compression.codec=0
serializer.class=kafka.serializer.StringEncoder
## only effective when producer.type=async
#batch.num.messages=100
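
If you switch to producer.type=async, the batching knobs become relevant. A hedged sketch of plausible async settings follows; the values are illustrative, not tuned for any particular workload:

producer.type=async
## flush a batch once it reaches this many messages...
batch.num.messages=100
## ...or once this much time has passed, whichever comes first
queue.buffering.max.ms=5000
## how many messages may queue up in the producer before send() blocks
queue.buffering.max.messages=10000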

2) LogProducer.java sample code

package com.test.kafka;

import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import java.util.Properties;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class LogProducer {

    private Producer<String, String> inner;

    public LogProducer() throws Exception {
        Properties properties = new Properties();
        properties.load(ClassLoader.getSystemResourceAsStream("producer.properties"));
        ProducerConfig config = new ProducerConfig(properties);
        inner = new Producer<String, String>(config);
    }

    public void send(String topicName, String message) {
        if (topicName == null || message == null) {
            return;
        }
        // For a topic with multiple partitions, use
        // new KeyedMessage<K, V>(topicName, key, value) so the partitioner
        // can route by key -- see the sketch after this class.
        KeyedMessage<String, String> km =
                new KeyedMessage<String, String>(topicName, message);
        inner.send(km);
    }

    public void send(String topicName, Collection<String> messages) {
        if (topicName == null || messages == null) {
            return;
        }
        if (messages.isEmpty()) {
            return;
        }
        List<KeyedMessage<String, String>> kms =
                new ArrayList<KeyedMessage<String, String>>();
        for (String entry : messages) {
            kms.add(new KeyedMessage<String, String>(topicName, entry));
        }
        inner.send(kms);
    }

    public void close() {
        inner.close();
    }

    public static void main(String[] args) {
        LogProducer producer = null;
        try {
            producer = new LogProducer();
            int i = 0;
            while (true) {
                producer.send("test-topic", "this is a sample" + i);
                i++;
                Thread.sleep(2000);
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            if (producer != null) {
                producer.close();
            }
        }
    }
}
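
As the comment in send() notes, a keyed send lets the partitioner route messages. A minimal sketch, assuming a topic with several partitions; the key "user-42" is made up, and the default partitioner hashes the key modulo the partition count so that one key always lands on the same partition:

import java.util.Properties;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

// Hedged sketch of a keyed send with the 0.8 javaapi producer; reuses the
// same producer.properties as LogProducer above.
public class KeyedProducerSample {

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.load(ClassLoader.getSystemResourceAsStream("producer.properties"));
        Producer<String, String> producer =
                new Producer<String, String>(new ProducerConfig(props));
        // topic, key, value -- the key is serialized by the same
        // StringEncoder unless key.serializer.class says otherwise
        producer.send(new KeyedMessage<String, String>(
                "test-topic", "user-42", "keyed sample"));
        producer.close();
    }
}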

V. Consumer Code

1) consumer.properties: this file lives under the /resources directory

zookeeper.connect=127.0.0.1:2181,127.0.0.1:2182,127.0.0.1:2183
# timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=1000000
# consumer group id
group.id=test-group
# consumer timeout (see the note below)
#consumer.timeout.ms=5000
auto.commit.enable=true
auto.commit.interval.ms=60000
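
A note on the commented-out consumer.timeout.ms: when it is set, the stream iterator no longer blocks forever; hasNext() throws kafka.consumer.ConsumerTimeoutException if no message arrives within the timeout. A minimal fragment sketching how to handle that, where stream is a KafkaStream<byte[], byte[]> as in the code below:

// Hedged sketch: consumer.timeout.ms turns the blocking iterator into one
// that throws ConsumerTimeoutException when no message arrives in time.
ConsumerIterator<byte[], byte[]> it = stream.iterator();
try {
    while (it.hasNext()) {
        System.out.println(new String(it.next().message()));
    }
} catch (ConsumerTimeoutException e) {
    // no message within consumer.timeout.ms; return or retry as appropriate
}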

2) LogConsumer.java sample code

package com.test.kafka;

import java.nio.charset.Charset;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;
import kafka.message.MessageAndMetadata;

public class LogConsumer {

    private ConsumerConfig config;
    private String topic;
    private int partitionsNum;
    private MessageExecutor executor;
    private ConsumerConnector connector;
    private ExecutorService threadPool;

    public LogConsumer(String topic, int partitionsNum, MessageExecutor executor) throws Exception {
        Properties properties = new Properties();
        properties.load(ClassLoader.getSystemResourceAsStream("consumer.properties"));
        config = new ConsumerConfig(properties);
        this.topic = topic;
        this.partitionsNum = partitionsNum;
        this.executor = executor;
    }

    public void start() throws Exception {
        connector = Consumer.createJavaConsumerConnector(config);
        Map<String, Integer> topics = new HashMap<String, Integer>();
        topics.put(topic, partitionsNum);
        Map<String, List<KafkaStream<byte[], byte[]>>> streams =
                connector.createMessageStreams(topics);
        List<KafkaStream<byte[], byte[]>> partitions = streams.get(topic);
        threadPool = Executors.newFixedThreadPool(partitionsNum);
        for (KafkaStream<byte[], byte[]> partition : partitions) {
            threadPool.execute(new MessageRunner(partition));
        }
    }

    public void close() {
        try {
            threadPool.shutdownNow();
        } catch (Exception e) {
            //
        } finally {
            connector.shutdown();
        }
    }

    class MessageRunner implements Runnable {
        private KafkaStream<byte[], byte[]> partition;

        MessageRunner(KafkaStream<byte[], byte[]> partition) {
            this.partition = partition;
        }

        public void run() {
            ConsumerIterator<byte[], byte[]> it = partition.iterator();
            while (it.hasNext()) {
                // connector.commitOffsets(); -- manual offset commit, for use
                // when auto.commit.enable=false (see the sketch after this class)
                MessageAndMetadata<byte[], byte[]> item = it.next();
                System.out.println("partition:" + item.partition());
                System.out.println("offset:" + item.offset());
                // decode explicitly as UTF-8; note that execute() may throw
                executor.execute(new String(item.message(), Charset.forName("UTF-8")));
            }
        }
    }

    interface MessageExecutor {
        public void execute(String message);
    }

    public static void main(String[] args) {
        LogConsumer consumer = null;
        try {
            MessageExecutor executor = new MessageExecutor() {
                public void execute(String message) {
                    System.out.println(message);
                }
            };
            consumer = new LogConsumer("test-topic", 2, executor);
            consumer.start();
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            // if (consumer != null) {
            //     consumer.close();
            // }
        }
    }
}
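
The commented-out connector.commitOffsets() call in MessageRunner.run() is the hook for manual offset management. A minimal sketch of that variant, assuming auto.commit.enable=false in consumer.properties; note that commitOffsets() commits the offsets of all partitions owned by this connector, not just the current stream:

// Variant of MessageRunner.run() with manual offset commits
// (assumes auto.commit.enable=false in consumer.properties).
public void run() {
    ConsumerIterator<byte[], byte[]> it = partition.iterator();
    while (it.hasNext()) {
        MessageAndMetadata<byte[], byte[]> item = it.next();
        executor.execute(new String(item.message(), Charset.forName("UTF-8")));
        // commit only after the message has been fully processed;
        // on a crash before this line, the message is redelivered
        connector.commitOffsets();
    }
}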

A reminder: the LogConsumer class above pays little attention to failure handling; in particular, you must decide what should happen when MessageExecutor.execute() throws, because an uncaught exception kills that consuming thread. One way to harden it is sketched below.
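
A hedged sketch of the loop body in MessageRunner.run(), catching per-message failures so one bad message does not kill the thread:

// Loop body of MessageRunner.run(), hardened against a failing execute().
while (it.hasNext()) {
    MessageAndMetadata<byte[], byte[]> item = it.next();
    try {
        executor.execute(new String(item.message(), Charset.forName("UTF-8")));
    } catch (RuntimeException e) {
        // log and skip the bad message instead of letting the thread die
        e.printStackTrace();
    }
}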

When testing, it is best to start the consumer first and then the producer, so you can watch the newest messages arrive in real time.

Source: http://blog.csdn.net/hxpjava1/article/details/19160665
