Errors Encountered in Hadoop Development

NO.1

Starting Hadoop:
./sbin/start-dfs.sh

The startup output is flooded with messages like:
VM: ssh: Could not resolve hostname VM: Name or service not known
stack: ssh: Could not resolve hostname stack: Name or service not known
warning:: ssh: Could not resolve hostname warning:: Name or service not known
will: ssh: Could not resolve hostname will: Name or service not known
which: ssh: Could not resolve hostname which: Name or service not known
fix: ssh: Could not resolve hostname fix: Name or service not known
disabled: ssh: Could not resolve hostname disabled: Name or service not known
have: ssh: Could not resolve hostname have: Name or service not known
64-Bit: ssh: Could not resolve hostname 64-Bit: Name or service not known
guard: ssh: Could not resolve hostname guard: Name or service not known
HotSpot(TM): ssh: Could not resolve hostname HotSpot(TM): Name or service not known
Java: ssh: Could not resolve hostname Java: Name or service not known
VM: ssh: Could not resolve hostname VM: Name or service not known
stack: ssh: Could not resolve hostname stack: Name or service not known
The: ssh: Could not resolve hostname The: Name or service not known
recommended: ssh: Could not resolve hostname recommended: Name or service not known
have: ssh: Could not resolve hostname have: Name or service not known
guard.: ssh: Could not resolve hostname guard.: Name or service not known
……

The root cause is that the following environment variables are missing:
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"

Just add them to ~/.profile (or ~/.bash_profile); with the file as below, startup works fine.

hadoop@chris-Founder-PC:~$ cat ~/.profile
# ~/.profile: executed by the command interpreter for login shells.
# This file is not read by bash(1), if ~/.bash_profile or ~/.bash_login
# exists.
# see /usr/share/doc/bash/examples/startup-files for examples.
# the files are located in the bash-doc package.

# the default umask is set in /etc/profile; for setting the umask
# for ssh logins, install and configure the libpam-umask package.
#umask 022

# if running bash
if [ -n "$BASH_VERSION" ]; then
    # include .bashrc if it exists
    if [ -f "$HOME/.bashrc" ]; then
        . "$HOME/.bashrc"
    fi
fi

# set PATH so it includes user's private bin if it exists
if [ -d "$HOME/bin" ] ; then
    PATH="$HOME/bin:$PATH"
fi

# User specific environment and startup programs

PATH=$PATH:$HOME/bin
export HADOOP_HOME=/usr/local/hadoop
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_YARN_HOME=$HADOOP_HOME
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HADOOP_HOME/lib

export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"

export PATH

Reload it with source ~/.profile, and ./sbin/start-dfs.sh then runs without the hostname errors:

hadoop@chris-Founder-PC:~$ /usr/local/hadoop/sbin/start-dfs.sh
16/01/26 21:04:33 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform… using builtin-java classes where applicable
Starting namenodes on [localhost]
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hadoop-namenode-chris-Founder-PC.out
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hadoop-datanode-chris-Founder-PC.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hadoop-secondarynamenode-chris-Founder-PC.out
16/01/26 21:04:55 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform… using builtin-java classes where applicable
hadoop@chris-Founder-PC:~$ jps
27528 NameNode
27652 DataNode
27986 Jps
27848 SecondaryNameNode
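
Independently of the hostname noise, the NativeCodeLoader warning above shows that the native library is still not being loaded. A quick way to check what the platform libraries look like, assuming a recent 2.x release, is Hadoop's checknative command; it lists which native codecs (hadoop, zlib, snappy, lz4, ...) were found:

hadoop checknative -a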

 

===========================================

NO.2

An internal error occurred during Connecting to DFS

Cause: the Hadoop 2.7.1 Eclipse plugin jar is faulty; after importing it, the HDFS directory tree cannot be browsed.
With the hadoop 2.7.1 plugin imported, Eclipse shows nothing under DFS Locations.

After switching to the 2.6 Eclipse plugin, the HDFS file tree is visible again.
Remember to restart Eclipse with eclipse.exe -clean so the plugin change takes effect.

 

=============================================

NO.3

Exception message: /bin/bash: line 0: fg: no job control
This happens when a job is submitted from Eclipse on Windows to a Linux cluster; add the following to the job code:
conf.set("mapreduce.app-submission.cross-platform", "true");
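
If you would rather not hard-code it, the same switch can be set once in the client-side mapred-site.xml instead; a minimal sketch (the property is defined in mapred-default.xml with a default of false):

<property>
<name>mapreduce.app-submission.cross-platform</name>
<value>true</value>
</property>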

 

==============================================

NO.4

Exception in thread "main" java.lang.IllegalArgumentException: Wrong FS: hdfs://locahost:9000/user/input/20120923.txt, expected: hdfs://localhost:9000
at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:646)
This error appears when creating a new file on HDFS.
The cause is in the following code:
// Load the Hadoop filesystem configuration
Configuration conf = new Configuration();

// URI of the filesystem to access
URI uri = new URI("hdfs://localhost:9000/");
// Create the FileSystem object
FileSystem fs = FileSystem.get(uri, conf);
// The path should be given relative to the filesystem URI; a fully qualified path
// whose authority does not match (note the "locahost" typo below) triggers the error
//Path block = new Path("hdfs://locahost:9000/user/input/"+ fileName + ".txt");
Path block = new Path("/user/input/"+ fileName + ".txt");
// Open an output stream
FSDataOutputStream out = fs.create(block);

If hdfs-site.xml, core-site.xml and friends are not placed in the Eclipse source (classpath) directory, the error is similar but not identical:
java.lang.IllegalArgumentException: Wrong FS: hdfs:/localhost:9000, expected file:///
file:// means the client is still using the local filesystem as its default, i.e. HDFS has not been configured on the client side.
Fix 1:
Configuration conf = new Configuration();
conf.addResource(new Path("/usr/local/hadoop/etc/hadoop/core-site.xml"));
conf.addResource(new Path("/usr/local/hadoop/etc/hadoop/hdfs-site.xml"));

Fix 2:

Put core-site.xml and hdfs-site.xml on the project's classpath (the development source directory), for example as sketched below.
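
For a typical Maven project layout this just means copying the two files into the resources directory; a sketch, assuming the Hadoop install path used elsewhere in this post:

cp /usr/local/hadoop/etc/hadoop/core-site.xml /usr/local/hadoop/etc/hadoop/hdfs-site.xml src/main/resources/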

 

=============================================

NO.5

While setting up a 7-node HA cluster, formatting the namenode failed:

16/06/10 15:49:53 ERROR namenode.NameNode: Failed to start namenode.
java.lang.IllegalArgumentException: URI has an authority component
at java.io.File.<init>(File.java:423)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.getStorageDirectory(NNStorage.java:329)
at org.apache.hadoop.hdfs.server.namenode.FSEditLog.initJournals(FSEditLog.java:276)
at org.apache.hadoop.hdfs.server.namenode.FSEditLog.initJournalsForWrite(FSEditLog.java:247)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:985)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1429)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1554)
16/06/10 15:49:53 INFO util.ExitUtil: Exiting with status 1
16/06/10 15:49:53 INFO namenode.NameNode: SHUTDOWN_MSG:
The trace suggests a file permission problem, perhaps directories not owned by the hadoop user, but checking showed that was not it.
After going through hdfs-site.xml and core-site.xml, the fix was to drop the file: prefix from the paths below and sync the configuration to every node; formatting then succeeded. The original entries were:
<property>
<name>hadoop.tmp.dir</name>
<value>file:/usr/local/hadoop/tmp</value>
<description>Abase for other temporary directories.</description>
</property>

<property>
<name>dfs.journalnode.edits.dir</name>
<value>file:/usr/local/data/journaldata/jn</value>
</property>
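
With the file: prefix removed (the change that made formatting succeed), the same entries read:

<property>
<name>hadoop.tmp.dir</name>
<value>/usr/local/hadoop/tmp</value>
<description>A base for other temporary directories.</description>
</property>

<property>
<name>dfs.journalnode.edits.dir</name>
<value>/usr/local/data/journaldata/jn</value>
</property>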

======================================================

NO.6

Passwordless SSH login was not configured for the standby namenode (key setup is sketched after the log below):

16/06/10 16:56:15 INFO ha.EditLogTailer: Triggering log roll on remote NameNode Master2/192.168.50.59:9000
16/06/10 16:56:16 INFO ipc.Client: Retrying connect to server: Master2/192.168.50.59:9000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
16/06/10 16:56:17 INFO ipc.Client: Retrying connect to server: Master2/192.168.50.59:9000. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
16/06/10 16:56:18 INFO ipc.Client: Retrying connect to server: Master2/192.168.50.59:9000. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
16/06/10 16:56:19 INFO ipc.Client: Retrying connect to server: Master2/192.168.50.59:9000. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
16/06/10 16:56:20 INFO ipc.Client: Retrying connect to server: Master2/192.168.50.59:9000. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
16/06/10 16:56:21 INFO ipc.Client: Retrying connect to server: Master2/192.168.50.59:9000. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
16/06/10 16:56:22 INFO ipc.Client: Retrying connect to server: Master2/192.168.50.59:9000. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
16/06/10 16:56:23 INFO ipc.Client: Retrying connect to server: Master2/192.168.50.59:9000. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
16/06/10 16:56:24 INFO ipc.Client: Retrying connect to server: Master2/192.168.50.59:9000. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
16/06/10 16:56:25 INFO ipc.Client: Retrying connect to server: Master2/192.168.50.59:9000. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
16/06/10 16:56:25 WARN ha.EditLogTailer: Unable to trigger a roll of the active NN
java.net.ConnectException: Call From Master/192.168.50.60 to Master2:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
at sun.reflect.GeneratedConstructorAccessor16.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:732)
at org.apache.hadoop.ipc.Client.call(Client.java:1480)
at org.apache.hadoop.ipc.Client.call(Client.java:1407)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
at com.sun.proxy.$Proxy15.rollEditLog(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolTranslatorPB.rollEditLog(NamenodeProtocolTranslatorPB.java:148)
at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.triggerActiveLogRoll(EditLogTailer.java:273)
at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.access$600(EditLogTailer.java:61)
at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:315)
at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$200(EditLogTailer.java:284)
at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:301)
at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:415)
at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:297)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:609)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:707)
at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:370)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1529)
at org.apache.hadoop.ipc.Client.call(Client.java:1446)
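
Setting up the missing passwordless login is straightforward; a sketch, assuming the hadoop user and the hostnames from the log above (run on Master, then repeat in the other direction from Master2 if needed):

ssh-keygen -t rsa               # accept the defaults, empty passphrase
ssh-copy-id hadoop@Master2      # install the public key on the standby namenode
ssh hadoop@Master2 hostname     # should now log in without prompting for a password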

======================================================

NO.7

mapreduce.Job: Running job hangs and never progresses. The YARN UI at http://master:8088 shows the job stuck in the ACCEPTED state, waiting for resources:
waiting for AM container to be allocated

The cause is YARN's default memory configuration. My master nodes have 1 GB and 2 GB of RAM respectively, while the default yarn.nodemanager.resource.memory-mb is 8192 (8 GB), so resources cannot be allocated and the job stays queued (the defaults come from yarn-default.xml).
Adding the following to yarn-site.xml lets the job run:
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>900</value>
<description>Amount of physical memory, in MB, that can be allocated for containers.</description>
</property>

<property>
<name>yarn.scheduler.minimum-allocation-mb</name>
<value>400</value>
</property>

<property>
<name>yarn.scheduler.maximum-allocation-mb</name>
<value>512</value>
</property>
<property>
<name>yarn.app.mapreduce.am.resource.mb</name>
<value>600</value>
</property>
For YARN memory configuration, see:

http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.0.9.1/bk_installing_manually_book/content/rpm-chap1-11.html

But some settings still depend on the defaults shipped with your Hadoop version (yarn-default.xml, mapred-default.xml, and so on).
For example, following that article I first added the configuration below:
mapred-site.xml:
<property>
<name>mapreduce.map.memory.mb</name>
<value>256</value>
</property>
<property>
<name>mapreduce.reduce.memory.mb</name>
<value>512</value>
</property>
<property>
<name>mapreduce.map.java.opts</name>
<value>204</value>
</property>
<property>
<name>mapreduce.reduce.java.opts</name>
<value>408</value>
</property>
<property>
<name>yarn.app.mapreduce.am.resource.mb</name>
<value>512</value>
</property>
<property>
<name>yarn.app.mapreduce.am.command-opts</name>
<value>409</value>
</property>

yarn-site.xml:

<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>1024</value>
<description>Amount of physical memory, in MB, that can be allocated for containers.</description>
</property>
<property>
<name>yarn.scheduler.minimum-allocation-mb</name>
<value>256</value>
</property>
<property>
<name>yarn.scheduler.maximum-allocation-mb</name>
<value>1024</value>
</property>

With that configuration in place, running the wordcount example failed:
Application application_1470572917650_0007 failed 2 times due to AM Container for appattempt_1470572917650_0007_000002 exited with exitCode: 1
For more detailed output, check application tracking page:http://Master:8088/cluster/app/application_1470572917650_0007Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_e01_1470572917650_0007_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:545)
at org.apache.hadoop.util.Shell.run(Shell.java:456)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:722)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 1
Failing this attempt. Failing the application.
Opening http://Master:8088/cluster/app/application_1470572917650_0007 shows the logs, which complain that a class cannot be found.
Presumably some properties were not being recognized, for example
yarn.app.mapreduce.am.command-opts
belongs in mapred-site.xml rather than yarn-site.xml; check the xxx-default.xml files under Configuration on the official site to see where each property is defined.

https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/YarnCommands.html#jar

There is also the format of the values, e.g.
yarn.app.mapreduce.am.command-opts -Xmx1024m
The value must be written as -Xmx1024m, i.e. a maximum JVM heap size of 1024 MB, allocated on demand;
-Xms128m would set the minimum (initial) JVM heap size to 128 MB.
The configuration was then corrected to:
mapred-site.xml:
<property>
<name>mapreduce.map.memory.mb</name>
<value>256</value>
</property>
<property>
<name>mapreduce.reduce.memory.mb</name>
<value>512</value>
</property>
<property>
<name>mapreduce.map.java.opts</name>
<value>-Xmx204m</value>
</property>
<property>
<name>mapreduce.reduce.java.opts</name>
<value>-Xmx408m</value>
</property>
<property>
<name>yarn.app.mapreduce.am.resource.mb</name>
<value>512</value>
</property>
<property>
<name>yarn.app.mapreduce.am.command-opts</name>
<value>-Xmx409m</value>
</property>

yarn-site.xml:
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>1024</value>
<description>Amount of physical memory, in MB, that can be allocated for containers.</description>
</property>
<property>
<name>yarn.scheduler.minimum-allocation-mb</name>
<value>256</value>
</property>
<property>
<name>yarn.scheduler.maximum-allocation-mb</name>
<value>1024</value>
</property>
Push the updated configuration to every node, and the job then runs. (I keep forgetting to copy the configuration to the other nodes, which makes troubleshooting far slower than it should be.)

 

======================================================

NO.8

INFO mapreduce.Job: Task Id : attempt_1469896784221_0013_m_000013_0, Status : FAILED
AttemptID:attempt_1469896784221_0013_m_000013_0 Timed out after 600 secs
The default timeout is 600 seconds:
mapreduce.task.timeout 600000
Change it in mapred-site.xml to something longer, e.g. thirty minutes:
<property>
<name>mapreduce.task.timeout</name>
<value>1800000</value> <!-- 30 minutes -->
</property>

======================================================

NO.9

A job fails at runtime with: failed on connection exception: java.net.ConnectException: Connection refused
It turned out both resourcemanagers were in standby state, so master:8088 could not be reached.
Disabling HA and forcing a start also failed; in the end the namenode had to be reformatted, after which everything started, but the data on the datanodes was gone. I probably wiped the datanode directories along with the namenode when I formatted.

Having both masters stuck in standby is maddening; the cluster simply will not come up. Without HA a plain restart is usually enough, and with HA reformatting and restarting works, but if a lot of data is already stored and both nodes go standby, I still do not know how to recover without losing it. My guess is that one of the masters simply does not have enough capacity to come up. If anyone knows how to track this down, please let me know. (A few HA state-inspection commands are sketched below.)
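
For anyone hitting the same double-standby situation, the HA admin tools at least show what the cluster thinks its state is; a sketch, assuming the service IDs nn1/nn2 and rm1/rm2 that dfs.ha.namenodes.* and yarn.resourcemanager.ha.rm-ids define in your configuration:

hdfs haadmin -getServiceState nn1    # active or standby, per namenode
hdfs haadmin -getServiceState nn2
yarn rmadmin -getServiceState rm1    # same for each resourcemanager
yarn rmadmin -getServiceState rm2
# last resort, bypasses the failover controller, use with care:
hdfs haadmin -transitionToActive --forcemanual nn1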

======================================================

NO.10

Errors installing snappy
With Flume configured to use gzip, it reports:
Caused by: java.lang.IllegalArgumentException: SequenceFile doesn't work with GzipCodec without native-hadoop code!
Configured to use snappy, it likewise reports:
Caused by: java.lang.RuntimeException: native snappy library not available: this version of libhadoop was built without snappy support.

In other words, GzipCodec cannot find the native library, and the libhadoop currently deployed was built without snappy support.
Since the problem is the native library, the Hadoop source has to be recompiled and the native libraries it produces used to replace the ones on the running cluster.
Many of the articles floating around online never state which versions they target, and a lot of them simply do not work.
When building from source, always read BUILDING.txt:
it spells out exactly which software has to be installed.

Also, build on a reasonably recent, stable Linux release, and pay attention to whether you are building on a 32-bit or 64-bit system. My first attempt on an old Linux version was a waste of time: the build failed and all sorts of problems cropped up along the way. A build takes roughly 15 minutes; a wall of SUCCESS lines means it worked. (A typical build command is sketched below.)
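
For reference, a typical native build of 2.7.1, going by the options documented in BUILDING.txt (which assumes protobuf 2.5.0, cmake, zlib and the snappy development packages are already installed), looks roughly like this; it is a sketch, not the exact command used here:

cd hadoop-2.7.1-src
mvn package -Pdist,native -DskipTests -Dtar -Drequire.snappy
# replace the deployed native libraries with the freshly built ones
cp hadoop-dist/target/hadoop-2.7.1/lib/native/* $HADOOP_HOME/lib/native/
# then verify that snappy is picked up
hadoop checknative -a

The tail end of the successful build looked like this: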

[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-dist ---

[INFO] Building jar: /home/hadoop/hadoop-2.7.1-src/hadoop-dist/target/hadoop-dist-2.7.1-javadoc.jar

[INFO] ————————————————————————

[INFO] Reactor Summary:

[INFO]

[INFO] Apache Hadoop Main …………………………… SUCCESS [ 4.404 s]

[INFO] Apache Hadoop Project POM …………………….. SUCCESS [ 3.943 s]

[INFO] Apache Hadoop Annotations …………………….. SUCCESS [ 5.518 s]

[INFO] Apache Hadoop Assemblies ……………………… SUCCESS [ 0.182 s]

[INFO] Apache Hadoop Project Dist POM ………………… SUCCESS [ 6.141 s]

[INFO] Apache Hadoop Maven Plugins …………………… SUCCESS [ 2.640 s]

[INFO] Apache Hadoop MiniKDC ………………………… SUCCESS [ 2.311 s]

[INFO] Apache Hadoop Auth …………………………… SUCCESS [ 2.277 s]

[INFO] Apache Hadoop Auth Examples …………………… SUCCESS [ 2.704 s]

[INFO] Apache Hadoop Common …………………………. SUCCESS [02:11 min]

[INFO] Apache Hadoop NFS ……………………………. SUCCESS [ 6.156 s]

[INFO] Apache Hadoop KMS ……………………………. SUCCESS [ 14.329 s]

[INFO] Apache Hadoop Common Project ………………….. SUCCESS [ 0.452 s]

[INFO] Apache Hadoop HDFS …………………………… SUCCESS [04:37 min]

[INFO] Apache Hadoop HttpFS …………………………. SUCCESS [ 23.211 s]

[INFO] Apache Hadoop HDFS BookKeeper Journal ………….. SUCCESS [ 5.884 s]

[INFO] Apache Hadoop HDFS-NFS ……………………….. SUCCESS [ 8.351 s]

[INFO] Apache Hadoop HDFS Project ……………………. SUCCESS [ 0.113 s]

[INFO] hadoop-yarn …………………………………. SUCCESS [ 0.074 s]

[INFO] hadoop-yarn-api ……………………………… SUCCESS [02:31 min]

[INFO] hadoop-yarn-common …………………………… SUCCESS [ 29.533 s]

[INFO] hadoop-yarn-server …………………………… SUCCESS [ 0.216 s]

[INFO] hadoop-yarn-server-common …………………….. SUCCESS [ 9.086 s]

[INFO] hadoop-yarn-server-nodemanager ………………… SUCCESS [ 12.703 s]

[INFO] hadoop-yarn-server-web-proxy ………………….. SUCCESS [ 1.688 s]

[INFO] hadoop-yarn-server-applicationhistoryservice ……. SUCCESS [ 4.188 s]

[INFO] hadoop-yarn-server-resourcemanager …………….. SUCCESS [ 12.948 s]

[INFO] hadoop-yarn-server-tests ……………………… SUCCESS [ 3.110 s]

[INFO] hadoop-yarn-client …………………………… SUCCESS [ 3.472 s]

[INFO] hadoop-yarn-server-sharedcachemanager ………….. SUCCESS [ 1.811 s]

[INFO] hadoop-yarn-applications ……………………… SUCCESS [ 0.053 s]

[INFO] hadoop-yarn-applications-distributedshell ………. SUCCESS [ 1.309 s]

[INFO] hadoop-yarn-applications-unmanaged-am-launcher ….. SUCCESS [ 0.987 s]

[INFO] hadoop-yarn-site …………………………….. SUCCESS [ 0.060 s]

[INFO] hadoop-yarn-registry …………………………. SUCCESS [ 3.070 s]

[INFO] hadoop-yarn-project ………………………….. SUCCESS [ 5.895 s]

[INFO] hadoop-mapreduce-client ………………………. SUCCESS [ 0.062 s]

[INFO] hadoop-mapreduce-client-core ………………….. SUCCESS [ 16.904 s]

[INFO] hadoop-mapreduce-client-common ………………… SUCCESS [ 15.219 s]

[INFO] hadoop-mapreduce-client-shuffle ……………….. SUCCESS [ 2.322 s]

[INFO] hadoop-mapreduce-client-app …………………… SUCCESS [ 4.418 s]

[INFO] hadoop-mapreduce-client-hs ……………………. SUCCESS [ 3.028 s]

[INFO] hadoop-mapreduce-client-jobclient ……………… SUCCESS [ 2.791 s]

[INFO] hadoop-mapreduce-client-hs-plugins …………….. SUCCESS [ 1.104 s]

[INFO] Apache Hadoop MapReduce Examples ………………. SUCCESS [ 2.405 s]

[INFO] hadoop-mapreduce …………………………….. SUCCESS [ 3.137 s]

[INFO] Apache Hadoop MapReduce Streaming ……………… SUCCESS [ 2.338 s]

[INFO] Apache Hadoop Distributed Copy ………………… SUCCESS [ 8.093 s]

[INFO] Apache Hadoop Archives ……………………….. SUCCESS [ 1.290 s]

[INFO] Apache Hadoop Rumen ………………………….. SUCCESS [ 3.027 s]

[INFO] Apache Hadoop Gridmix ………………………… SUCCESS [ 2.096 s]

[INFO] Apache Hadoop Data Join ………………………. SUCCESS [ 1.231 s]

[INFO] Apache Hadoop Ant Tasks ………………………. SUCCESS [ 1.268 s]

[INFO] Apache Hadoop Extras …………………………. SUCCESS [ 1.506 s]

[INFO] Apache Hadoop Pipes ………………………….. SUCCESS [ 4.323 s]

[INFO] Apache Hadoop OpenStack support ……………….. SUCCESS [ 2.154 s]

[INFO] Apache Hadoop Amazon Web Services support ………. SUCCESS [ 2.260 s]

[INFO] Apache Hadoop Azure support …………………… SUCCESS [ 2.000 s]

[INFO] Apache Hadoop Client …………………………. SUCCESS [ 9.934 s]

[INFO] Apache Hadoop Mini-Cluster ……………………. SUCCESS [ 0.250 s]

[INFO] Apache Hadoop Scheduler Load Simulator …………. SUCCESS [ 2.677 s]

[INFO] Apache Hadoop Tools Dist ……………………… SUCCESS [ 8.720 s]

[INFO] Apache Hadoop Tools ………………………….. SUCCESS [ 0.084 s]

[INFO] Apache Hadoop Distribution ……………………. SUCCESS [01:22 min]

[INFO] ————————————————————————

[INFO] BUILD SUCCESS

[INFO] ————————————————————————

[INFO] Total time: 15:36 min

[INFO] Finished at: 2016-06-13T18:51:37+08:00

[INFO] Final Memory: 104M/352M

 

 
