Raise the maximum number of open files to at least 65536.
Temporary (current session only):
ulimit -n 65536
Persistent (takes effect for new login sessions, including after a reboot):
vi /etc/security/limits.conf
@search soft nofile 65536
@search hard nofile 65536
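Here @search applies the limit to the group named search; use * instead to cover all users. The new limit applies to fresh login sessions; verify with:
ulimit -n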
Kafka Streams uses RocksDB by default to store state.
State can be configured to be kept in memory or persisted to disk; RocksDB supports both, and you can switch between the two through the Stores factory API. Once a StateStoreSupplier has been created, it can be used with the Kafka Streams DSL API (high level) as well as with the Processor API (low level).
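A minimal sketch of switching between the two, assuming the 0.10.x-era Stores factory API (the store names and serde choices here are illustrative):

```java
import org.apache.kafka.streams.processor.StateStoreSupplier;
import org.apache.kafka.streams.state.Stores;

public class StateStoreExample {
    public static void main(String[] args) {
        // Persistent store: backed by RocksDB on local disk.
        StateStoreSupplier persistent = Stores.create("word-counts")
                .withStringKeys()
                .withLongValues()
                .persistent()
                .build();

        // In-memory store: switching is a single builder call.
        StateStoreSupplier inMemory = Stores.create("word-counts-mem")
                .withStringKeys()
                .withLongValues()
                .inMemory()
                .build();

        // Either supplier can be handed to DSL operators that accept a
        // StateStoreSupplier, or registered for the Processor API via
        // TopologyBuilder#addStateStore(supplier, processorNames...).
    }
}
```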
While running a Kafka Streams program, the following exception was thrown:
```
Exception in thread "StreamThread-1" org.apache.kafka.streams.errors.StreamsException: stream-thread [StreamThread-1] Failed to rebalance
```
With Kafka 0.9.0.1, the following error occurred while sending data:
```
Failed to allocate memory within the configured max blocking time
```
The cause is clear: when the producer-side buffer (buffer.memory, default 32 MB) is full and a record cannot be placed into it within the configured time (max.block.ms, default 60 s), this exception is thrown.
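A minimal sketch of tuning the two settings involved, assuming the new Java producer (the broker address, topic, and values are illustrative):

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ProducerBufferExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        // Total memory for buffering unsent records (default 32 MB).
        props.put("buffer.memory", "67108864"); // 64 MB
        // How long send() may block waiting for buffer space (default 60 s)
        // before throwing the exception above.
        props.put("max.block.ms", "120000");

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        producer.send(new ProducerRecord<>("test-topic", "key", "value"));
        producer.close();
    }
}
```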
The default size ceiling across Kafka's entire message pipeline is 1 MB. In other words, by default the producer can only send messages of at most 1 MB to Kafka, Kafka itself only handles messages of at most 1 MB internally, and the consumer can only consume messages of at most 1 MB.
If you send a 2 MB record (for ease of calculation, 1 M = 1000 K below), the client throws an exception:
```
Exception in thread "main" java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.RecordTooLargeException: The message is 2000037 bytes when serialized which is larger than the maximum request size you have configured with the max.request.size configuration.
```
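Raising the ceiling means changing it at every stage of the pipeline, not just on the producer. A sketch of the relevant settings (all values and the broker address are illustrative):

```java
import java.util.Properties;

public class LargeMessageConfig {
    public static Properties producerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("key.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");
        // Producer side: raise the per-request cap above the payload size.
        props.put("max.request.size", "3000000");
        return props;
    }
    // The broker and consumer must be raised in step, e.g.:
    //   broker (server.properties): message.max.bytes=3000000
    //                               replica.fetch.max.bytes=3000000
    //   new consumer:               max.partition.fetch.bytes=3000000
    // Otherwise the broker rejects the record, or the consumer cannot fetch it.
}
```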
Here are a few recommended Elasticsearch plugins:
sql
GitHub: https://github.com/NLPchina/elasticsearch-sql/blob/master/README.md
Online install: ./bin/plugin -u https://github.com/NLPchina/elasticsearch-sql/releases/download/{version}/elasticsearch-sql-{version}.zip --install sql
Access URL: http://localhost:9200/_plugin/sql/
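Example query through the plugin's REST endpoint (the index name my_index is hypothetical): http://localhost:9200/_sql?sql=SELECT * FROM my_index LIMIT 10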
While running a high-concurrency Kafka test today, a memory leak occurred; the direct cause was that memory was not released after the producer was closed. The root cause:
```
CLUSTER_NODES=localhost
```