Apache-Hadoop-Developer Free Dumps Study Materials
Question 9: Which project gives you a distributed, scalable data store that allows you random, real-time
read/write access to hundreds of terabytes of data?
A. HBase
B. Hue
C. Pig
D. Hive
E. Oozie
F. Flume
G. Sqoop
Correct Answer: A
Explanation:
Use Apache HBase when you need random, realtime read/write access to your Big Data.
Note: This project's goal is the hosting of very large tables -- billions of rows X millions of columns --
atop clusters of commodity hardware. Apache HBase is an open-source, distributed, versioned,
column-oriented store modeled after Google's Bigtable: A Distributed Storage System for Structured
Data by Chang et al. Just as Bigtable leverages the distributed data storage provided by the Google
File System, Apache HBase provides Bigtable-like capabilities on top of Hadoop and HDFS.
Features:
- Linear and modular scalability.
- Strictly consistent reads and writes.
- Automatic and configurable sharding of tables.
- Automatic failover support between RegionServers.
- Convenient base classes for backing Hadoop MapReduce jobs with Apache HBase tables.
- Easy to use Java API for client access.
- Block cache and Bloom Filters for real-time queries.
- Query predicate push down via server-side Filters.
- Thrift gateway and a REST-ful Web service that supports XML, Protobuf, and binary data encoding options.
- Extensible JRuby-based (JIRB) shell.
- Support for exporting metrics via the Hadoop metrics subsystem to files or Ganglia, or via JMX.
Reference: http://hbase.apache.org/ (when would I use HBase? First sentence)
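To make the "random, real-time read/write access" concrete, below is a minimal sketch using the HBase Java client API to perform a single keyed write followed immediately by a keyed read. The table name "usertable", column family "cf", and column "email" are illustrative only; the sketch assumes an HBase 1.x+ client library and a reachable cluster configured via hbase-site.xml on the classpath.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseRandomAccessExample {
    public static void main(String[] args) throws Exception {
        // Reads hbase-site.xml from the classpath (ZooKeeper quorum, etc.)
        Configuration conf = HBaseConfiguration.create();

        try (Connection connection = ConnectionFactory.createConnection(conf);
             // Assumes the table already exists, e.g. created in the HBase shell:
             //   create 'usertable', 'cf'
             Table table = connection.getTable(TableName.valueOf("usertable"))) {

            // Random write: store a single cell under row key "user-1001"
            Put put = new Put(Bytes.toBytes("user-1001"));
            put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("email"),
                          Bytes.toBytes("user1001@example.com"));
            table.put(put);

            // Random read: fetch the same row back by key, in real time
            Get get = new Get(Bytes.toBytes("user-1001"));
            Result result = table.get(get);
            byte[] value = result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("email"));
            System.out.println("email = " + Bytes.toString(value));
        }
    }
}

This keyed get/put access pattern is exactly what distinguishes HBase from batch-oriented tools such as Pig or Hive in the answer choices above.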