Step 1: Create a Java project named wujiadong_hbase
![761429-20161220111812057-1447786419.png](https://images2015.cnblogs.com/blog/761429/201612/761429-20161220111812057-1447786419.png)
Step 2: Under the project, create a folder named lib (to hold the dependency jar files)
![761429-20161220111824245-487055093.png](https://images2015.cnblogs.com/blog/761429/201612/761429-20161220111824245-487055093.png)
Step 3: Download a copy of the cluster's HBase installation directory to Windows, then copy all the jar files from HBase's lib directory (I:\data science\hbase\hbase-0.9\lib) into the lib folder created above
After copying, select all the jars under the lib folder, right-click, and choose Build Path → Add to Build Path
Step 4: Create a new Java class named HBaseDeom, and you can start writing the Java code
![761429-20161220111842729-507646442.png](https://images2015.cnblogs.com/blog/761429/201612/761429-20161220111842729-507646442.png)
A code example that creates a table named hbase_test:
```java
package wujiadong_hbase;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class HBaseDeom {

    public static void main(String[] args) throws IOException {
        // Instantiating configuration class
        Configuration con = HBaseConfiguration.create();
        con.set("hbase.rootdir", "hdfs://spark1:9000/hbase");
        con.set("hbase.zookeeper.quorum", "192.168.220.144,192.168.220.145,192.168.220.146");

        // Instantiating HBaseAdmin class
        HBaseAdmin admin = new HBaseAdmin(con);

        // Instantiating table descriptor class
        HTableDescriptor tableDescriptor = new HTableDescriptor(TableName.valueOf("hbase_test"));

        // Adding column families to table descriptor
        tableDescriptor.addFamily(new HColumnDescriptor("personal"));
        tableDescriptor.addFamily(new HColumnDescriptor("professional"));

        // Execute the table creation through admin
        admin.createTable(tableDescriptor);
        System.out.println("Table created");
    }
}
```
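The example above uses the HBaseAdmin(Configuration) constructor, which is deprecated in HBase 1.x and removed in 2.x. If the jars copied into lib are from HBase 1.0 or newer, the same table can be created through the Connection/Admin API instead. This is only a sketch under that assumption; the class name HBaseDemoConnection is made up for illustration.

```java
package wujiadong_hbase;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class HBaseDemoConnection {

    public static void main(String[] args) throws IOException {
        Configuration con = HBaseConfiguration.create();
        con.set("hbase.rootdir", "hdfs://spark1:9000/hbase");
        con.set("hbase.zookeeper.quorum", "192.168.220.144,192.168.220.145,192.168.220.146");

        // try-with-resources closes the connection and admin when done
        try (Connection connection = ConnectionFactory.createConnection(con);
             Admin admin = connection.getAdmin()) {

            TableName name = TableName.valueOf("hbase_test");
            HTableDescriptor tableDescriptor = new HTableDescriptor(name);
            tableDescriptor.addFamily(new HColumnDescriptor("personal"));
            tableDescriptor.addFamily(new HColumnDescriptor("professional"));

            // Only create the table if it does not exist yet
            if (!admin.tableExists(name)) {
                admin.createTable(tableDescriptor);
                System.out.println("Table created");
            } else {
                System.out.println("Table already exists");
            }
        }
    }
}
```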
Step 5: Start the ZooKeeper, Hadoop, and HBase clusters and make sure they are all running normally
To check whether HBase started successfully, enter the hbase shell and run status:

```
hbase(main):013:0> status
2 servers, 0 dead, 13.5000 average load
```

Note: 0 dead means HBase started successfully.
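Before running the table-creation code, it can also help to confirm from the Windows side that the client can actually reach the cluster. A minimal sketch, assuming the same HBase client jars are on the build path (HBaseAdmin.checkHBaseAvailable exists in 0.9x/1.x but was removed in HBase 2.x):

```java
package wujiadong_hbase;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class HBaseAvailabilityCheck {

    public static void main(String[] args) throws Exception {
        Configuration con = HBaseConfiguration.create();
        con.set("hbase.zookeeper.quorum", "192.168.220.144,192.168.220.145,192.168.220.146");

        // Throws MasterNotRunningException / ZooKeeperConnectionException if the cluster is unreachable
        HBaseAdmin.checkHBaseAvailable(con);
        System.out.println("HBase is available");
    }
}
```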
Step 6: Run the Java code
The result is as follows:
![761429-20161220111857323-1280635517.png](https://images2015.cnblogs.com/blog/761429/201612/761429-20161220111857323-1280635517.png)
![761429-20161220111905276-490943646.png](https://images2015.cnblogs.com/blog/761429/201612/761429-20161220111905276-490943646.png)
Runtime error:
org.apache.hadoop.hbase.MasterNotRunningException: Can't get connection to ZooKeeper: KeeperErrorCode = OperationTimeout
Cause of the error
The HBase application is developed on Windows while HBase itself is deployed on Linux; at run/debug time the client cannot resolve the cluster host names.
Solution
Add the following mappings to C:\WINDOWS\system32\drivers\etc\hosts:
```
192.168.220.144 spark1
192.168.220.145 spark2
192.168.220.146 spark3
```
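The client receives the master and region server addresses from ZooKeeper as host names (spark1, spark2, spark3), so Windows must be able to resolve them. A small sketch to verify the mapping took effect before rerunning the HBase code (the class name HostsCheck is just for illustration):

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class HostsCheck {

    public static void main(String[] args) {
        // Host names registered by the cluster; adjust to your own environment
        String[] hosts = {"spark1", "spark2", "spark3"};
        for (String h : hosts) {
            try {
                System.out.println(h + " -> " + InetAddress.getByName(h).getHostAddress());
            } catch (UnknownHostException e) {
                System.out.println(h + " cannot be resolved, check the hosts file");
            }
        }
    }
}
```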