org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 6 actions: DoNotRetryIOException
Today I did two things: first, compiled and packaged hadoop-eclipse-plugin-1.0.2.jar; second, used MapReduce to write data into HBase (both done from Eclipse).
First, the versions: Hadoop 1.0.2, HBase 0.94.0, running on Ubuntu 11.10.
The build itself went well enough. Hadoop 1.0.2 does not ship a ready-made Eclipse plugin, so you have to compile and package it yourself; I followed this article: http://www.cnblogs.com/siwei1988/archive/2012/08/03/2621589.html. If you need the prebuilt jar and would rather skip the build, you can download it from http://download.csdn.net/detail/fansy1990/4534905, or send me a private message or leave your email address and I will send it to you.
Because I wanted to drive HBase from MapReduce, I imported every .jar under the HBase directory into my Eclipse MapReduce project. When I then ran the job against HBase, I hit a problem and could not find the cause for quite a while.
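As an aside, for the HBase client to reach the cluster from inside Eclipse at all, it has to be able to locate ZooKeeper. A minimal sketch of the client setup follows; my assumption here is a local pseudo-distributed install with ZooKeeper on localhost:2181, which matches the connectString in the log below. Putting the cluster's hbase-site.xml on the project classpath achieves the same thing.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class HBaseClientConf {
    public static Configuration create() {
        // Picks up hbase-default.xml and, if present, hbase-site.xml from the classpath.
        Configuration conf = HBaseConfiguration.create();
        // Explicit fallback for an IDE run (assumption: local pseudo-distributed setup).
        conf.set("hbase.zookeeper.quorum", "localhost");
        conf.set("hbase.zookeeper.property.clientPort", "2181");
        return conf;
    }
}

Here is the log of the failing run: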
12/08/29 18:56:26 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
12/08/29 18:56:26 WARN mapred.JobClient: No job jar file set. User classes may not be found. See JobConf(Class) or JobConf#setJar(String).
12/08/29 18:56:26 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.3-1240972, built on 02/06/2012 10:48 GMT
12/08/29 18:56:26 INFO zookeeper.ZooKeeper: Client environment:host.name=localhost.localdomain
12/08/29 18:56:26 INFO zookeeper.ZooKeeper: Client environment:java.version=1.6.0_34
12/08/29 18:56:26 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Sun Microsystems Inc.
12/08/29 18:56:26 INFO zookeeper.ZooKeeper: Client environment:java.home=/usr/jdk/jdk1.6.0_34/jre
12/08/29 18:56:26 INFO zookeeper.ZooKeeper: Client environment:java.class.path=/home/fansy/workspace/MRHbaseDemo02/bin:/home/fansy/hadoop-1.0.2/lib/xmlenc-0.52.jar:/home/fansy/hadoop-1.0.2/lib/commons-configuration-1.6.jar:/home/fansy/hadoop-1.0.2/lib/asm-3.2.jar:/home/fansy/hadoop-1.0.2/lib/mockito-all-1.8.5.jar:/home/fansy/hadoop-1.0.2/lib/commons-httpclient-3.0.1.jar:/home/fansy/hadoop-1.0.2/lib/hadoop-fairscheduler-1.0.2.jar:/home/fansy/hadoop-1.0.2/lib/jersey-json-1.8.jar:/home/fansy/hadoop-1.0.2/lib/commons-codec-1.4.jar:/home/fansy/hadoop-1.0.2/lib/jasper-compiler-5.5.12.jar:/home/fansy/hadoop-1.0.2/lib/commons-collections-3.2.1.jar:/home/fansy/hadoop-1.0.2/lib/jackson-core-asl-1.8.8.jar:/home/fansy/hadoop-1.0.2/lib/slf4j-api-1.4.3.jar:/home/fansy/hadoop-1.0.2/lib/kfs-0.2.2.jar:/home/fansy/hadoop-1.0.2/lib/oro-2.0.8.jar:/home/fansy/hadoop-1.0.2/lib/hadoop-thriftfs-1.0.2.jar:/home/fansy/hadoop-1.0.2/lib/log4j-1.2.15.jar:/home/fansy/hadoop-1.0.2/lib/junit-4.5.jar:/home/fansy/hadoop-1.0.2/lib/aspectjrt-1.6.5.jar:/home/fansy/hadoop-1.0.2/lib/core-3.1.1.jar:/home/fansy/hadoop-1.0.2/lib/jsch-0.1.42.jar:/home/fansy/hadoop-1.0.2/lib/commons-logging-1.1.1.jar:/home/fansy/hadoop-1.0.2/lib/aspectjtools-1.6.5.jar:/home/fansy/hadoop-1.0.2/lib/Htable.jar:/home/fansy/hadoop-1.0.2/lib/commons-el-1.0.jar:/home/fansy/hadoop-1.0.2/lib/commons-net-1.4.1.jar:/home/fansy/hadoop-1.0.2/lib/commons-daemon-1.0.1.jar:/home/fansy/hadoop-1.0.2/lib/jasper-runtime-5.5.12.jar:/home/fansy/hadoop-1.0.2/lib/jdeb-0.8.jar:/home/fansy/hadoop-1.0.2/lib/jets3t-0.6.1.jar:/home/fansy/hadoop-1.0.2/lib/commons-beanutils-1.7.0.jar:/home/fansy/hadoop-1.0.2/lib/jersey-core-1.8.jar:/home/fansy/hadoop-1.0.2/lib/hadoop-capacity-scheduler-1.0.2.jar:/home/fansy/hadoop-1.0.2/lib/commons-logging-api-1.0.4.jar:/home/fansy/hadoop-1.0.2/lib/commons-digester-1.8.jar:/home/fansy/hadoop-1.0.2/lib/hsqldb-1.8.0.10.jar:/home/fansy/hadoop-1.0.2/lib/jackson-mapper-asl-1.8.8.jar:/home/fansy/hadoop-1.0.2/lib/commons-math-2.1.jar:/home/fansy/hadoop-1.0.2/lib/commons-lang-2.4.jar:/home/fansy/hadoop-1.0.2/lib/commons-beanutils-core-1.8.0.jar:/home/fansy/hadoop-1.0.2/lib/jersey-server-1.8.jar:/home/fansy/hadoop-1.0.2/lib/jetty-util-6.1.26.jar:/home/fansy/hadoop-1.0.2/lib/commons-cli-1.2.jar:/home/fansy/hadoop-1.0.2/lib/jetty-6.1.26.jar:/home/fansy/hadoop-1.0.2/lib/servlet-api-2.5-20081211.jar:/home/fansy/hadoop-1.0.2/lib/slf4j-log4j12-1.4.3.jar:/home/fansy/hadoop-1.0.2/hadoop-client-1.0.2.jar:/home/fansy/hadoop-1.0.2/hadoop-tools-1.0.2.jar:/home/fansy/hadoop-1.0.2/hadoop-core-1.0.2.jar:/home/fansy/hadoop-1.0.2/hadoop-ant-1.0.2.jar:/home/fansy/hadoop-1.0.2/hadoop-minicluster-1.0.2.jar:/home/fansy/hbase-0.94.0/lib/activation-1.1.jar:/home/fansy/hbase-0.94.0/lib/asm-3.1.jar:/home/fansy/hbase-0.94.0/lib/avro-1.5.3.jar:/home/fansy/hbase-0.94.0/lib/avro-ipc-1.5.3.jar:/home/fansy/hbase-0.94.0/lib/commons-beanutils-1.7.0.jar:/home/fansy/hbase-0.94.0/lib/commons-beanutils-core-1.8.0.jar:/home/fansy/hbase-0.94.0/lib/commons-cli-1.2.jar:/home/fansy/hbase-0.94.0/lib/commons-codec-1.4.jar:/home/fansy/hbase-0.94.0/lib/commons-collections-3.2.1.jar:/home/fansy/hbase-0.94.0/lib/commons-configuration-1.6.jar:/home/fansy/hbase-0.94.0/lib/commons-digester-1.8.jar:/home/fansy/hbase-0.94.0/lib/commons-el-1.0.jar:/home/fansy/hbase-0.94.0/lib/commons-httpclient-3.1.jar:/home/fansy/hbase-0.94.0/lib/commons-io-2.1.jar:/home/fansy/hbase-0.94.0/lib/commons-lang-2.5.jar:/home/fansy/hbase-0.94.0/lib/commons-logging-1.1.1.jar:/home/fansy/hbase-0.94.0/lib/commons-math-2.1.jar:/home/fansy/hbase-0.94.0/lib/commons-net-1.4.1.jar:/home/fansy/hbase-0.94.0/lib/core-3.1.1.jar:/home/fansy/hbase-0.94.0/lib/guava-r09.jar:/home/fansy/hbase-0.94.0/lib/hadoop-core-1.0.2.jar:/home/fansy/hbase-0.94.0/lib/high-scale-lib-1.1.1.jar:/home/fansy/hbase-0.94.0/lib/httpclient-4.1.2.jar:/home/fansy/hbase-0.94.0/lib/httpcore-4.1.3.jar:/home/fansy/hbase-0.94.0/lib/jackson-core-asl-1.5.5.jar:/home/fansy/hbase-0.94.0/lib/jackson-jaxrs-1.5.5.jar:/home/fansy/hbase-0.94.0/lib/jackson-mapper-asl-1.5.5.jar:/home/fansy/hbase-0.94.0/lib/jackson-xc-1.5.5.jar:/home/fansy/hbase-0.94.0/lib/jamon-runtime-2.3.1.jar:/home/fansy/hbase-0.94.0/lib/jasper-compiler-5.5.23.jar:/home/fansy/hbase-0.94.0/lib/jasper-runtime-5.5.23.jar:/home/fansy/hbase-0.94.0/lib/jaxb-api-2.1.jar:/home/fansy/hbase-0.94.0/lib/jaxb-impl-2.1.12.jar:/home/fansy/hbase-0.94.0/lib/jersey-core-1.4.jar:/home/fansy/hbase-0.94.0/lib/jersey-json-1.4.jar:/home/fansy/hbase-0.94.0/lib/jersey-server-1.4.jar:/home/fansy/hbase-0.94.0/lib/jettison-1.1.jar:/home/fansy/hbase-0.94.0/lib/jetty-6.1.26.jar:/home/fansy/hbase-0.94.0/lib/jetty-util-6.1.26.jar:/home/fansy/hbase-0.94.0/lib/jruby-complete-1.6.5.jar:/home/fansy/hbase-0.94.0/lib/jsp-2.1-6.1.14.jar:/home/fansy/hbase-0.94.0/lib/jsp-api-2.1-6.1.14.jar:/home/fansy/hbase-0.94.0/lib/libthrift-0.8.0.jar:/home/fansy/hbase-0.94.0/lib/log4j-1.2.16.jar:/home/fansy/hbase-0.94.0/lib/netty-3.2.4.Final.jar:/home/fansy/hbase-0.94.0/lib/protobuf-java-2.4.0a.jar:/home/fansy/hbase-0.94.0/lib/servlet-api-2.5-6.1.14.jar:/home/fansy/hbase-0.94.0/lib/slf4j-api-1.5.8.jar:/home/fansy/hbase-0.94.0/lib/slf4j-log4j12-1.5.8.jar:/home/fansy/hbase-0.94.0/lib/snappy-java-1.0.3.2.jar:/home/fansy/hbase-0.94.0/lib/stax-api-1.0.1.jar:/home/fansy/hbase-0.94.0/lib/velocity-1.7.jar:/home/fansy/hbase-0.94.0/lib/xmlenc-0.52.jar:/home/fansy/hbase-0.94.0/lib/zookeeper-3.4.3.jar:/home/fansy/hbase-0.94.0/hbase-0.94.0.jar:/home/fansy/hbase-0.94.0/hbase-0.94.0-tests.jar
12/08/29 18:56:26 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/usr/jdk/jdk1.6.0_34/jre/lib/i386/server:/usr/jdk/jdk1.6.0_34/jre/lib/i386:/usr/jdk/jdk1.6.0_34/jre/../lib/i386:/usr/jdk/jdk1.6.0_34/jre/lib/i386/client:/usr/jdk/jdk1.6.0_34/jre/lib/i386::/usr/java/packages/lib/i386:/lib:/usr/lib
12/08/29 18:56:26 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
12/08/29 18:56:26 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
12/08/29 18:56:26 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
12/08/29 18:56:26 INFO zookeeper.ZooKeeper: Client environment:os.arch=i386
12/08/29 18:56:26 INFO zookeeper.ZooKeeper: Client environment:os.version=2.6.38-14-generic
12/08/29 18:56:26 INFO zookeeper.ZooKeeper: Client environment:user.name=fansy
12/08/29 18:56:26 INFO zookeeper.ZooKeeper: Client environment:user.home=/home/fansy
12/08/29 18:56:26 INFO zookeeper.ZooKeeper: Client environment:user.dir=/home/fansy/workspace/MRHbaseDemo02
12/08/29 18:56:26 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:2181 sessionTimeout=180000 watcher=hconnection
12/08/29 18:56:26 INFO zookeeper.ClientCnxn: Opening socket connection to server /0:0:0:0:0:0:0:1:2181
12/08/29 18:56:26 WARN client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
12/08/29 18:56:26 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 12028@fansy-Lenovo-G450
12/08/29 18:56:26 INFO client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
12/08/29 18:56:26 INFO zookeeper.ClientCnxn: Socket connection established to fansy-Lenovo-G450/0:0:0:0:0:0:0:1:2181, initiating session
12/08/29 18:56:26 INFO zookeeper.ClientCnxn: Session establishment complete on server fansy-Lenovo-G450/0:0:0:0:0:0:0:1:2181, sessionid = 0x13971ca392d0012, negotiated timeout = 40000
12/08/29 18:56:27 INFO mapreduce.TableOutputFormat: Created table instance for mrtable
****hdfs://localhost:9000/user/fansy/input/mrtest.txt
12/08/29 18:56:27 INFO input.FileInputFormat: Total input paths to process : 1
12/08/29 18:56:27 WARN snappy.LoadSnappy: Snappy native library not loaded
12/08/29 18:56:27 INFO filecache.TrackerDistributedCacheManager: Creating hbase-0.94.0.jar in /tmp/hadoop-fansy/mapred/local/archive/4245802504176908348_-2038994071_176065566/file/home/fansy/hbase-0.94.0/hbase-0.94.0.jar-work--2952961500529514442 with rwxr-xr-x
12/08/29 18:56:27 INFO filecache.TrackerDistributedCacheManager: Extracting /tmp/hadoop-fansy/mapred/local/archive/4245802504176908348_-2038994071_176065566/file/home/fansy/hbase-0.94.0/hbase-0.94.0.jar-work--2952961500529514442/hbase-0.94.0.jar to /tmp/hadoop-fansy/mapred/local/archive/4245802504176908348_-2038994071_176065566/file/home/fansy/hbase-0.94.0/hbase-0.94.0.jar-work--2952961500529514442
12/08/29 18:56:27 INFO filecache.TrackerDistributedCacheManager: Cached file:///home/fansy/hbase-0.94.0/hbase-0.94.0.jar as /tmp/hadoop-fansy/mapred/local/archive/4245802504176908348_-2038994071_176065566/file/home/fansy/hbase-0.94.0/hbase-0.94.0.jar
12/08/29 18:56:28 INFO filecache.TrackerDistributedCacheManager: Cached file:///home/fansy/hbase-0.94.0/hbase-0.94.0.jar as /tmp/hadoop-fansy/mapred/local/archive/4245802504176908348_-2038994071_176065566/file/home/fansy/hbase-0.94.0/hbase-0.94.0.jar
12/08/29 18:56:28 INFO filecache.TrackerDistributedCacheManager: Creating zookeeper-3.4.3.jar in /tmp/hadoop-fansy/mapred/local/archive/-2485968383474441328_-358222725_176052566/file/home/fansy/hbase-0.94.0/lib/zookeeper-3.4.3.jar-work--6348729395114744014 with rwxr-xr-x
12/08/29 18:56:28 INFO filecache.TrackerDistributedCacheManager: Extracting /tmp/hadoop-fansy/mapred/local/archive/-2485968383474441328_-358222725_176052566/file/home/fansy/hbase-0.94.0/lib/zookeeper-3.4.3.jar-work--6348729395114744014/zookeeper-3.4.3.jar to /tmp/hadoop-fansy/mapred/local/archive/-2485968383474441328_-358222725_176052566/file/home/fansy/hbase-0.94.0/lib/zookeeper-3.4.3.jar-work--6348729395114744014
12/08/29 18:56:28 INFO filecache.TrackerDistributedCacheManager: Cached file:///home/fansy/hbase-0.94.0/lib/zookeeper-3.4.3.jar as /tmp/hadoop-fansy/mapred/local/archive/-2485968383474441328_-358222725_176052566/file/home/fansy/hbase-0.94.0/lib/zookeeper-3.4.3.jar
12/08/29 18:56:28 INFO filecache.TrackerDistributedCacheManager: Cached file:///home/fansy/hbase-0.94.0/lib/zookeeper-3.4.3.jar as /tmp/hadoop-fansy/mapred/local/archive/-2485968383474441328_-358222725_176052566/file/home/fansy/hbase-0.94.0/lib/zookeeper-3.4.3.jar
12/08/29 18:56:28 INFO filecache.TrackerDistributedCacheManager: Creating hadoop-core-1.0.2.jar in /tmp/hadoop-fansy/mapred/local/archive/3604634766753366292_1190305369_1193831860/file/home/fansy/hadoop-1.0.2/hadoop-core-1.0.2.jar-work--116863529921692065 with rwxr-xr-x
12/08/29 18:56:28 INFO filecache.TrackerDistributedCacheManager: Extracting /tmp/hadoop-fansy/mapred/local/archive/3604634766753366292_1190305369_1193831860/file/home/fansy/hadoop-1.0.2/hadoop-core-1.0.2.jar-work--116863529921692065/hadoop-core-1.0.2.jar to /tmp/hadoop-fansy/mapred/local/archive/3604634766753366292_1190305369_1193831860/file/home/fansy/hadoop-1.0.2/hadoop-core-1.0.2.jar-work--116863529921692065
12/08/29 18:56:28 INFO filecache.TrackerDistributedCacheManager: Cached file:///home/fansy/hadoop-1.0.2/hadoop-core-1.0.2.jar as /tmp/hadoop-fansy/mapred/local/archive/3604634766753366292_1190305369_1193831860/file/home/fansy/hadoop-1.0.2/hadoop-core-1.0.2.jar
12/08/29 18:56:28 INFO filecache.TrackerDistributedCacheManager: Cached file:///home/fansy/hadoop-1.0.2/hadoop-core-1.0.2.jar as /tmp/hadoop-fansy/mapred/local/archive/3604634766753366292_1190305369_1193831860/file/home/fansy/hadoop-1.0.2/hadoop-core-1.0.2.jar
12/08/29 18:56:28 INFO filecache.TrackerDistributedCacheManager: Creating protobuf-java-2.4.0a.jar in /tmp/hadoop-fansy/mapred/local/archive/8764005306386187952_1486071988_176053566/file/home/fansy/hbase-0.94.0/lib/protobuf-java-2.4.0a.jar-work--139328093195795474 with rwxr-xr-x
12/08/29 18:56:28 INFO filecache.TrackerDistributedCacheManager: Extracting /tmp/hadoop-fansy/mapred/local/archive/8764005306386187952_1486071988_176053566/file/home/fansy/hbase-0.94.0/lib/protobuf-java-2.4.0a.jar-work--139328093195795474/protobuf-java-2.4.0a.jar to /tmp/hadoop-fansy/mapred/local/archive/8764005306386187952_1486071988_176053566/file/home/fansy/hbase-0.94.0/lib/protobuf-java-2.4.0a.jar-work--139328093195795474
12/08/29 18:56:28 INFO filecache.TrackerDistributedCacheManager: Cached file:///home/fansy/hbase-0.94.0/lib/protobuf-java-2.4.0a.jar as /tmp/hadoop-fansy/mapred/local/archive/8764005306386187952_1486071988_176053566/file/home/fansy/hbase-0.94.0/lib/protobuf-java-2.4.0a.jar
12/08/29 18:56:28 INFO filecache.TrackerDistributedCacheManager: Cached file:///home/fansy/hbase-0.94.0/lib/protobuf-java-2.4.0a.jar as /tmp/hadoop-fansy/mapred/local/archive/8764005306386187952_1486071988_176053566/file/home/fansy/hbase-0.94.0/lib/protobuf-java-2.4.0a.jar
12/08/29 18:56:28 INFO mapred.JobClient: Running job: job_local_0001
12/08/29 18:56:28 INFO mapreduce.TableOutputFormat: Created table instance for mrtable
12/08/29 18:56:28 INFO util.ProcessTree: setsid exited with exit code 0
12/08/29 18:56:28 INFO mapred.Task: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@578073
12/08/29 18:56:29 WARN mapred.FileOutputCommitter: Output path is null in cleanup
12/08/29 18:56:29 WARN mapred.LocalJobRunner: job_local_0001
org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 6 actions: DoNotRetryIOException: 6 times, servers with issues: localhost.localdomain:34995,
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:1591)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatch(HConnectionManager.java:1367)
    at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:945)
    at org.apache.hadoop.hbase.client.HTable.close(HTable.java:982)
    at org.apache.hadoop.hbase.mapreduce.TableOutputFormat$TableRecordWriter.close(TableOutputFormat.java:109)
    at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.close(MapTask.java:651)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:766)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
    at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:212)
12/08/29 18:56:29 INFO mapred.JobClient: map 0% reduce 0%
12/08/29 18:56:29 INFO mapred.JobClient: Job complete: job_local_0001
12/08/29 18:56:29 INFO mapred.JobClient: Counters: 0
Searching the web for this RetriesExhaustedWithDetailsException turned up no good solution, so I had to work it out myself.
Here is my solution.
The program I used is a slightly adapted version of the uploader from HBase's bundled examples:
package org.fansy.demo02;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class SampleUploaderOne {

    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        if (otherArgs.length != 2) {
            System.err.println("Wrong number of arguments: " + otherArgs.length);
            System.err.println("Usage: <input> <tablename>");
            System.exit(-1);
        }
        Job job = new Job(conf, "Hbaseuploadone");
        job.setJarByClass(SampleUploaderOne.class);
        job.setMapperClass(UploaderMapper.class);
        job.setMapOutputKeyClass(ImmutableBytesWritable.class);
        job.setMapOutputValueClass(Put.class);
        // Wire the job's output to the HBase table; no reducer is needed,
        // the mapper emits Puts directly. (Use otherArgs here, not args, so
        // that generic options stripped by GenericOptionsParser don't shift
        // the positional arguments.)
        TableMapReduceUtil.initTableReducerJob(otherArgs[1], null, job);
        job.setNumReduceTasks(0);
        FileInputFormat.setInputPaths(job, otherArgs[0]);
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }

    public static class UploaderMapper extends Mapper<Object, Text, ImmutableBytesWritable, Put> {
        @Override
        public void map(Object key, Text line, Context context) throws IOException, InterruptedException {
            // Each input line is: row,family,qualifier,value
            String[] values = line.toString().split(",");
            if (values.length != 4) {
                return;
            }
            // Extract each value
            byte[] row = Bytes.toBytes(values[0]);
            byte[] family = Bytes.toBytes(values[1]);
            byte[] qualifier = Bytes.toBytes(values[2]);
            byte[] value = Bytes.toBytes(values[3]);
            Put put = new Put(row);
            put.add(family, qualifier, value);
            context.write(new ImmutableBytesWritable(row), put);
        }
    }
}
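For reference, the program takes two arguments, the input path and the table name, so a command-line run roughly equivalent to the Eclipse run logged above would be (assuming the project has been packaged into a jar):

    hadoop jar <your-jar> org.fansy.demo02.SampleUploaderOne hdfs://localhost:9000/user/fansy/input/mrtest.txt mrtable

Inside Eclipse, the same two values simply go into the run configuration's program arguments.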
My input file is as follows (each line is row,family,qualifier,value):
1,f1,name,fansy
2,f1,name,tom
3,f1,name,jake
4,f1,age,22
5,f1,age,23
6,f1,age,27
As far as I can tell, the problem was with how the table had been created. I had originally created it in the hbase shell with create 'mrtable','t', i.e. with a single column family named t, whereas every line of my input file uses the family f1. After recreating it with create 'mrtable','f1', the job ran without errors. So the table's column family must match the family names used in the data: a Put against a family the table does not have is rejected on the server side with NoSuchColumnFamilyException, a subclass of DoNotRetryIOException, which is exactly the 'Failed 6 actions: DoNotRetryIOException' reported above. The inserted rows can then be checked with scan 'mrtable' in the hbase shell.
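To catch this kind of mismatch before the job is even submitted, a small pre-flight check helps. The following is only a sketch against the HBase 0.94 client API; the class and method names (CheckFamily, checkFamily) are made up for illustration and are not part of the original program:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.util.Bytes;

public class CheckFamily {

    // Fail fast if the table does not contain the expected column family.
    public static void checkFamily(Configuration conf, String tableName, String family)
            throws IOException {
        HBaseAdmin admin = new HBaseAdmin(conf);
        try {
            HTableDescriptor desc = admin.getTableDescriptor(Bytes.toBytes(tableName));
            if (!desc.hasFamily(Bytes.toBytes(family))) {
                throw new IOException("Table '" + tableName + "' has no column family '"
                        + family + "'; Puts against it would fail with DoNotRetryIOException.");
            }
        } finally {
            admin.close();
        }
    }

    public static void main(String[] args) throws IOException {
        // Example: verify that mrtable has the family f1 before running the MR job.
        checkFamily(HBaseConfiguration.create(), "mrtable", "f1");
    }
}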
With the table recreated, part of the Hadoop job output looks like this:
12/08/29 19:21:18 INFO mapred.JobClient: Running job: job_local_0001
12/08/29 19:21:18 INFO mapreduce.TableOutputFormat: Created table instance for mrtable
12/08/29 19:21:18 INFO util.ProcessTree: setsid exited with exit code 0
12/08/29 19:21:18 INFO mapred.Task: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@7e9ce2
12/08/29 19:21:18 INFO mapred.Task: Task:attempt_local_0001_m_000000_0 is done. And is in the process of commiting
12/08/29 19:21:19 INFO mapred.JobClient: map 0% reduce 0%
12/08/29 19:21:21 INFO mapred.LocalJobRunner:
12/08/29 19:21:21 INFO mapred.Task: Task 'attempt_local_0001_m_000000_0' done.
12/08/29 19:21:21 WARN mapred.FileOutputCommitter: Output path is null in cleanup
12/08/29 19:21:22 INFO mapred.JobClient: map 100% reduce 0%
12/08/29 19:21:22 INFO mapred.JobClient: Job complete: job_local_0001
12/08/29 19:21:22 INFO mapred.JobClient: Counters: 13
12/08/29 19:21:22 INFO mapred.JobClient: File Output Format Counters
12/08/29 19:21:22 INFO mapred.JobClient: Bytes Written=0
12/08/29 19:21:22 INFO mapred.JobClient: File Input Format Counters
12/08/29 19:21:22 INFO mapred.JobClient: Bytes Read=82
12/08/29 19:21:22 INFO mapred.JobClient: FileSystemCounters
12/08/29 19:21:22 INFO mapred.JobClient: FILE_BYTES_READ=9683090
12/08/29 19:21:22 INFO mapred.JobClient: HDFS_BYTES_READ=82
12/08/29 19:21:22 INFO mapred.JobClient: FILE_BYTES_WRITTEN=9817966
12/08/29 19:21:22 INFO mapred.JobClient: Map-Reduce Framework
12/08/29 19:21:22 INFO mapred.JobClient: Map input records=7
12/08/29 19:21:22 INFO mapred.JobClient: Physical memory (bytes) snapshot=0
12/08/29 19:21:22 INFO mapred.JobClient: Spilled Records=0
12/08/29 19:21:22 INFO mapred.JobClient: Total committed heap usage (bytes)=76677120
12/08/29 19:21:22 INFO mapred.JobClient: CPU time spent (ms)=0
12/08/29 19:21:22 INFO mapred.JobClient: Virtual memory (bytes) snapshot=0
12/08/29 19:21:22 INFO mapred.JobClient: SPLIT_RAW_BYTES=114
12/08/29 19:21:22 INFO mapred.JobClient: Map output records=6