I'm using Flume 1.8 on a host with Java 1.8.0_101 to ship logs to Elasticsearch 5.0.0. I downloaded elasticsearch-5.0.0.jar and lucene-core-5.0.0.jar and placed them in Flume's lib directory, but startup still reports an error. Is there something wrong with the dependencies I downloaded?
Here is the error output:
Info: Including Hadoop libraries found via (/usr/local/hadoop/bin/hadoop) for HDFS access
Info: Including Hive libraries found via () for Hive access
+ exec /usr/local/jdk1.8.0_101/bin/java -Xmx20m -Dflume.root.logger=INFO,console -cp '/mnt/flume_Outer_1.8/conf:/mnt/flume_Outer_1.8/lib/*:/usr/local/hadoop-2.6.5/etc/hadoop:/usr/local/hadoop-2.6.5/share/hadoop/common/lib/*:/usr/local/hadoop-2.6.5/share/hadoop/common/*:/usr/local/hadoop-2.6.5/share/hadoop/hdfs:/usr/local/hadoop-2.6.5/share/hadoop/hdfs/lib/*:/usr/local/hadoop-2.6.5/share/hadoop/hdfs/*:/usr/local/hadoop-2.6.5/share/hadoop/yarn/lib/*:/usr/local/hadoop-2.6.5/share/hadoop/yarn/*:/usr/local/hadoop-2.6.5/share/hadoop/mapreduce/lib/*:/usr/local/hadoop-2.6.5/share/hadoop/mapreduce/*:/usr/local/hadoop/contrib/capacity-scheduler/*.jar:/lib/*' -Djava.library.path=:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib org.apache.flume.node.Application -f conf/infotest -n a1
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/mnt/flume_Outer_1.8/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hadoop-2.6.5/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
18/02/08 18:07:49 INFO node.PollingPropertiesFileConfigurationProvider: Configuration provider starting
18/02/08 18:07:49 INFO node.PollingPropertiesFileConfigurationProvider: Reloading configuration file:conf/infotest
18/02/08 18:07:49 INFO conf.FlumeConfiguration: Processing:s1
18/02/08 18:07:49 INFO conf.FlumeConfiguration: Processing:s1
18/02/08 18:07:49 INFO conf.FlumeConfiguration: Processing:s1
18/02/08 18:07:49 INFO conf.FlumeConfiguration: Processing:s1
18/02/08 18:07:49 INFO conf.FlumeConfiguration: Added sinks: s1 Agent: a1
18/02/08 18:07:49 INFO conf.FlumeConfiguration: Processing:s1
18/02/08 18:07:49 INFO conf.FlumeConfiguration: Processing:s1
18/02/08 18:07:49 INFO conf.FlumeConfiguration: Processing:s1
18/02/08 18:07:49 INFO conf.FlumeConfiguration: Post-validation flume configuration contains configuration for agents: [a1]
18/02/08 18:07:49 INFO node.AbstractConfigurationProvider: Creating channels
18/02/08 18:07:49 INFO channel.DefaultChannelFactory: Creating instance of channel ch1 type memory
18/02/08 18:07:49 INFO node.AbstractConfigurationProvider: Created channel ch1
18/02/08 18:07:49 INFO source.DefaultSourceFactory: Creating instance of source r1, type exec
18/02/08 18:07:49 INFO sink.DefaultSinkFactory: Creating instance of sink: s1, type: elasticsearch
18/02/08 18:07:49 INFO node.AbstractConfigurationProvider: Channel ch1 connected to [r1, s1]
18/02/08 18:07:49 INFO node.Application: Starting new configuration:{ sourceRunners:{r1=EventDrivenSourceRunner: { source:org.apache.flume.source.ExecSource{name:r1,state:IDLE} }} sinkRunners:{s1=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@6add9eb counterGroup:{ name:null counters:{} } }} channels:{ch1=org.apache.flume.channel.MemoryChannel{name: ch1}} }
18/02/08 18:07:49 INFO node.Application: Starting Channel ch1
18/02/08 18:07:49 INFO instrumentation.MonitoredCounterGroup: Monitored counter group for type: CHANNEL, name: ch1: Successfully registered new MBean.
18/02/08 18:07:49 INFO instrumentation.MonitoredCounterGroup: Component type: CHANNEL, name: ch1 started
18/02/08 18:07:49 INFO node.Application: Starting Sink s1
18/02/08 18:07:49 INFO node.Application: Starting Source r1
18/02/08 18:07:49 INFO source.ExecSource: Exec source starting with command: tail -n +0 -F /mnt/echat-log/info/echat_old/echat_third/echat.log.2018-02-07
18/02/08 18:07:49 INFO instrumentation.MonitoredCounterGroup: Monitored counter group for type: SOURCE, name: r1: Successfully registered new MBean.
18/02/08 18:07:49 INFO instrumentation.MonitoredCounterGroup: Component type: SOURCE, name: r1 started
18/02/08 18:07:49 INFO elasticsearch.ElasticSearchSink: ElasticSearch sink {} started
18/02/08 18:07:49 INFO instrumentation.MonitoredCounterGroup: Monitored counter group for type: SINK, name: s1: Successfully registered new MBean.
18/02/08 18:07:49 INFO instrumentation.MonitoredCounterGroup: Component type: SINK, name: s1 started
18/02/08 18:07:49 WARN client.ElasticSearchTransportClient: [192.168.1.4:9200]
18/02/08 18:07:49 ERROR lifecycle.LifecycleSupervisor: Unable to start SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@6add9eb counterGroup:{ name:null counters:{} } } - Exception follows.
java.lang.NoSuchMethodError: org.elasticsearch.common.transport.InetSocketTransportAddress.<init>(Ljava/lang/String;I)V
at org.apache.flume.sink.elasticsearch.client.ElasticSearchTransportClient.configureHostnames(ElasticSearchTransportClient.java:141)
at org.apache.flume.sink.elasticsearch.client.ElasticSearchTransportClient.<init>(ElasticSearchTransportClient.java:77)
at org.apache.flume.sink.elasticsearch.client.ElasticSearchClientFactory.getClient(ElasticSearchClientFactory.java:48)
at org.apache.flume.sink.elasticsearch.ElasticSearchSink.start(ElasticSearchSink.java:358)
at org.apache.flume.sink.DefaultSinkProcessor.start(DefaultSinkProcessor.java:45)
at org.apache.flume.SinkRunner.start(SinkRunner.java:79)
at org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:249)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
18/02/08 18:07:49 INFO elasticsearch.ElasticSearchSink: ElasticSearch sink {} stopping
18/02/08 18:07:49 INFO instrumentation.MonitoredCounterGroup: Component type: SINK, name: s1 stopped
18/02/08 18:07:49 INFO instrumentation.MonitoredCounterGroup: Shutdown Metric for type: SINK, name: s1. sink.start.time == 1518084469218
18/02/08 18:07:49 INFO instrumentation.MonitoredCounterGroup: Shutdown Metric for type: SINK, name: s1. sink.stop.time == 1518084469230
18/02/08 18:07:49 INFO instrumentation.MonitoredCounterGroup: Shutdown Metric for type: SINK, name: s1. sink.batch.complete == 0
18/02/08 18:07:49 INFO instrumentation.MonitoredCounterGroup: Shutdown Metric for type: SINK, name: s1. sink.batch.empty == 0
18/02/08 18:07:49 INFO instrumentation.MonitoredCounterGroup: Shutdown Metric for type: SINK, name: s1. sink.batch.underflow == 0
18/02/08 18:07:49 INFO instrumentation.MonitoredCounterGroup: Shutdown Metric for type: SINK, name: s1. sink.connection.closed.count == 1
18/02/08 18:07:49 INFO instrumentation.MonitoredCounterGroup: Shutdown Metric for type: SINK, name: s1. sink.connection.creation.count == 0
18/02/08 18:07:49 INFO instrumentation.MonitoredCounterGroup: Shutdown Metric for type: SINK, name: s1. sink.connection.failed.count == 0
18/02/08 18:07:49 INFO instrumentation.MonitoredCounterGroup: Shutdown Metric for type: SINK, name: s1. sink.event.drain.attempt == 0
18/02/08 18:07:49 INFO instrumentation.MonitoredCounterGroup: Shutdown Metric for type: SINK, name: s1. sink.event.drain.sucess == 0
18/02/08 18:07:49 WARN lifecycle.LifecycleSupervisor: Component SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@6add9eb counterGroup:{ name:null counters:{} } } stopped, since it could not besuccessfully started due to missing dependencies
18/02/08 18:08:19 ERROR source.ExecSource: Failed while running command: tail -n +0 -F /mnt/echat-log/info/echat_old/echat_third/echat.log.2018-02-07
org.apache.flume.ChannelFullException: Space for commit to queue couldn't be acquired. Sinks are likely not keeping up with sources, or the buffer size is too tight
at org.apache.flume.channel.MemoryChannel$MemoryTransaction.doCommit(MemoryChannel.java:128)
at org.apache.flume.channel.BasicTransactionSemantics.commit(BasicTransactionSemantics.java:151)
at org.apache.flume.channel.ChannelProcessor.processEventBatch(ChannelProcessor.java:194)
at org.apache.flume.source.ExecSource$ExecRunnable.flushEventBatch(ExecSource.java:378)
at org.apache.flume.source.ExecSource$ExecRunnable.run(ExecSource.java:338)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
18/02/08 18:08:19 ERROR source.ExecSource: Exception occurred when processing event batch
org.apache.flume.ChannelException: java.lang.InterruptedException
at org.apache.flume.channel.BasicTransactionSemantics.commit(BasicTransactionSemantics.java:154)
at org.apache.flume.channel.ChannelProcessor.processEventBatch(ChannelProcessor.java:194)
at org.apache.flume.source.ExecSource$ExecRunnable.flushEventBatch(ExecSource.java:378)
at org.apache.flume.source.ExecSource$ExecRunnable.access$100(ExecSource.java:251)
at org.apache.flume.source.ExecSource$ExecRunnable$1.run(ExecSource.java:320)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.InterruptedException
at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1039)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
at java.util.concurrent.Semaphore.tryAcquire(Semaphore.java:582)
at org.apache.flume.channel.MemoryChannel$MemoryTransaction.doCommit(MemoryChannel.java:126)
at org.apache.flume.channel.BasicTransactionSemantics.commit(BasicTransactionSemantics.java:151)
... 11 more
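As far as I can tell, the NoSuchMethodError above is a binary-compatibility problem rather than a bad download: Flume 1.8's ElasticSearchSink was compiled against a very old Elasticsearch client (0.90.x), which still had an InetSocketTransportAddress(String, int) constructor. Elasticsearch removed that constructor in 2.0 (it has taken an InetAddress since), so it simply does not exist in elasticsearch-5.0.0.jar, and swapping jars in flume/lib cannot fix it. A minimal standalone sketch (not Flume code; the ES class name in the comment is the one from the stack trace) of how to verify this kind of signature mismatch before starting the agent:

```java
// Sketch: check at runtime whether a class on the classpath exposes a
// constructor with the exact signature a caller was compiled against.
public class CtorCheck {

    /** True iff className declares a public constructor with these parameter types. */
    static boolean hasCtor(String className, Class<?>... paramTypes) {
        try {
            Class.forName(className).getConstructor(paramTypes);
            return true;
        } catch (ClassNotFoundException | NoSuchMethodException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // With the ES jars on the classpath, the check corresponding to the
        // failing call in the stack trace would be:
        //   hasCtor("org.elasticsearch.common.transport.InetSocketTransportAddress",
        //           String.class, int.class)
        // which I would expect to return false against elasticsearch-5.0.0.jar.
        // Demonstrated with JDK classes so the snippet runs standalone:
        System.out.println(hasCtor("java.lang.StringBuilder", String.class));  // true
        System.out.println(hasCtor("java.lang.StringBuilder", Integer.class)); // false
    }
}
```

If the check returns false for the constructor the sink needs, no amount of restarting will help; the practical options are usually a sink built for the ES 5.x API or shipping to Elasticsearch over HTTP instead of the transport client.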
Here is the configuration file:
a1.channels = ch1
a1.sources = r1
a1.sinks = s1
a1.channels.ch1.type = memory
a1.channels.ch1.capacity = 1000
a1.channels.ch1.transactionCapacity = 1000
a1.channels.ch1.keep-alive = 30
a1.sources.r1.type = exec
a1.sources.r1.shell = /bin/bash -c
a1.sources.r1.command = tail -n +0 -F /mnt/echat-log/info/echat_old/echat_third/echat.log.2018-02-07
a1.sources.r1.channels = ch1
a1.sources.r1.threads = 5
a1.sources.r1.restartThrottle = 100000
a1.sources.r1.restart = true
a1.sources.r1.logStdErr = true
a1.sinks.s1.channel = ch1
a1.sinks.s1.type = elasticsearch
a1.sinks.s1.hostNames = 192.168.1.4:9200
a1.sinks.s1.indexName = foo_index
a1.sinks.s1.indexType = bar_type
a1.sinks.s1.batchSize = 500
a1.sinks.s1.serializer = org.apache.flume.sink.elasticsearch.ElasticSearchDynamicSerializer
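One observation (my reading of the log timestamps, not something the log states): the later ChannelFullException is probably only a downstream symptom. With the sink dead, nothing drains the memory channel, so it fills to its 1000-event capacity; the exec source's commit then blocks for keep-alive seconds before failing, which matches the 30-second gap between 18:07:49 and 18:08:19. The knobs involved, annotated:

```properties
# Memory channel fills when no sink drains it
a1.channels.ch1.capacity = 1000             # max events buffered in the channel
a1.channels.ch1.transactionCapacity = 1000  # max events per put/take transaction
a1.channels.ch1.keep-alive = 30             # seconds a put waits for free space before failing
```

Raising capacity or keep-alive would only delay the error here; the sink has to start successfully first.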