Final Exam


About the exam


Dear Student,

Greetings!
You have completed the "Final Exam" exam.
At this juncture, it is important for you to understand your strengths and focus on them to achieve the best results.
We present here a snapshot of your performance in the "Final Exam" exam in terms of the marks scored by you in each
section, your question-wise response pattern and a difficulty-wise analysis of your performance.

This Report consists of the following sections that can be accessed using the left navigation panel:

Overall Performance: This part of the report shows the summary of marks scored by you across all sections of the
exam and the comparison of your performance across all sections.

Section-wise Performance: You can click on a section name in the left navigation panel to check your performance
in that section. Section-wise performance includes the details of your response at each question level and
difficulty-wise analysis of your performance for that section.

NOTE : For Short Answer, Subjective, Typing and Programming type questions, students will not be able to view the
Bar Chart Report in the Performance Analysis.

Section | Questions Attempted | Correct | Score
Final   | 40/99               | 40      | 40

Marks Obtained Subject Wise

Final

NOTE : Subjects having negative marks are not considered in the pie chart. The pie chart will not be shown if all
subjects contain 0 marks.

Final
The Final section comprises a total of 99 questions with the following difficulty level distribution:

Difficulty Level | No. of questions
MODERATE         | 99

Question wise details


Please click on a question to view detailed analysis

Legend (status icons not reproduced): Not Evaluated | Evaluated | Correct | Incorrect | Not Attempted | Marked for Review
| Answered | Correct Option | Your Option

Question Details

Q1.Key/Value is considered as the hadoop format.

Difficulty Level : Moderate


Response : 1 Status : Correct

Option 1 : True
Option 2 : False

Q2.What kind of servers are used for creating a hadoop cluster?

Difficulty Level : Moderate


Response : 2 Status : Correct

Option 1 : Server grade machines.


Option 2 : Commodity hardware.
Option 3 : Only supercomputers
Option 4 : None of the above.

Q3.Hadoop was developed by:

Difficulty Level : Moderate


Response : 1 Status : Correct

Option 1 : Doug Cutting


Option 2 : Lars George
Option 3 : Tom White
Option 4 : Eric Sammer

Q4.One of the features of hadoop is you can achieve parallelism.

Difficulty Level : Moderate


Response : 2 Status : Correct

Option 1 : False
Option 2 : True

Q5.Hadoop can only work with structured data.

Difficulty Level : Moderate


Response : 1 Status : Correct

Option 1 : False
Option 2 : True

Q6.Hadoop cluster can scale out:

Difficulty Level : Moderate


Response : 4 Status : Correct

Option 1 : By upgrading existing servers


Option 2 : By increasing the area of the cluster.
Option 3 : By downgrading existing servers
Option 4 : By adding more hardware

Q7.Hadoop can solve only use cases involving data from Social media.

Difficulty Level : Moderate


Response : 2 Status : Correct

Option 1 : 1
Option 2 : False

Q8.Hadoop can be utilized for demographic analysis.

Difficulty Level : Moderate


Response : 1 Status : Correct

Option 1 : True
Option 2 : False

Q9.Hadoop is inspired by which file system?

Difficulty Level : Moderate


Response : 2 Status : Correct

Option 1 : AFS
Option 2 : GFS
Option 3 : MPP
Option 4 : None of the above.

Q10.For Apache Hadoop one needs licensing before leveraging it.

Difficulty Level : Moderate


Response : 2 Status : Correct

Option 1 : True
Option 2 : False

Q11.HDFS runs in the same namespace as that of the local filesystem.

Difficulty Level : Moderate


Response : 1 Status : Correct

Option 1 : False
Option 2 : True

Q12.HDFS follows a master-slave architecture.

Difficulty Level : Moderate


Response : 2 Status : Correct

Option 1 : False
Option 2 : True

Q13.Namenode only responds to:

Difficulty Level : Moderate


Response : 3 Status : Correct

Option 1 : FTP calls


Option 2 : SFTP calls.
Option 3 : RPC calls
Option 4 : MPP calls

Q14.Perfect balancing can be achieved in a Hadoop cluster.

Difficulty Level : Moderate


Response : 1 Status : Correct


Option 1 : False
Option 2 : True

Q15.What does the Namenode periodically expect from Datanodes?

Difficulty Level : Moderate


Response : 2 Status : Correct

Option 1 : EditLogs
Option 2 : Block report and Status
Option 3 : FSImages
Option 4 : None of the above

Q16.After a client requests the JobTracker to run an application, whom does the JT contact?

Difficulty Level : Moderate


Response : 3 Status : Correct

Option 1 : DataNodes
Option 2 : Tasktracker
Option 3 : Namenode
Option 4 : None of the above.

Q17.Interaction with HDFS is done through which script?

Difficulty Level : Moderate


Response : 4 Status : Correct

Option 1 : Fsadmin
Option 2 : Hive
Option 3 : Mapreduce
Option 4 : Hadoop

Q18.What is the usage of put command in HDFS?

Difficulty Level : Moderate


Response : 2 Status : Correct

Option 1 : It deletes files from one file system to another.


Option 2 : It copies files from one file system to another
Option 3 : It puts configuration parameters in configuration files
Option 4 : None of the above.
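
Illustration (not part of the original exam): the effect of the HDFS put command can be sketched with the Java FileSystem API, which exposes the same copy-into-HDFS operation. The paths used below are hypothetical examples, not values from this report.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Minimal sketch of what "put" does: copy a file from the local
    // file system into HDFS. Paths are made-up examples.
    public class PutExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();            // loads core-site.xml / hdfs-site.xml
            FileSystem fs = FileSystem.get(conf);                 // handle to the configured file system
            fs.copyFromLocalFile(new Path("/tmp/input.txt"),      // local source (example path)
                                 new Path("/user/data/input.txt")); // HDFS destination (example path)
            fs.close();
        }
    }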

Q19.Each directory or file has three kinds of permissions:

Difficulty Level : Moderate


Response : 1 Status : Correct

Option 1 : read,write,execute
Option 2 : read,write,run
Option 3 : read,write,append
Option 4 : read,write,update

Q20.Mapper output is written to HDFS.

Difficulty Level : Moderate


Response : 1 Status : Correct

Option 1 : False
Option 2 : True

Q21.A Reducer writes its output in what format.

Difficulty Level : Moderate


Response : 1 Status : Correct

Option 1 : Key/Value
Option 2 : Text files
Option 3 : Sequence files
Option 4 : None of the above
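
Illustration (not part of the original exam): a minimal word-count style reducer, showing that whatever a Reducer emits goes out as key/value pairs through context.write(). The class name and key/value types are only an example.

    import java.io.IOException;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Reducer;

    // Word-count style reducer: every record it emits is a key/value pair.
    public class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            context.write(key, new IntWritable(sum)); // output written as a key/value pair
        }
    }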

Q22.Which of the following is a pre-requisite for hadoop cluster installation?

Difficulty Level : Moderate


Response : 3 Status : Correct

Option 1 : Gather Hardware requirement


Option 2 : Gather network requirement
Option 3 : Both
Option 4 : None of the above

Q23.Nagios and Ganglia are tools provided by:

Difficulty Level : Moderate


Response : 1 Status : Correct

Option 1 : Cloudera
Option 2 : Hortonworks
Option 3 : MapR
Option 4 : None of the above

Q24.Which of the following are cloudera management services?

Difficulty Level : Moderate


Response : 3 Status : Correct

Option 1 : Activity Monitor


Option 2 : Host Monitor
Option 3 : Both
Option 4 : None of the above

Q25.Which of the following is used to collect information about activities running in a hadoop cluster?

Difficulty Level : Moderate


Response : 3 Status : Correct

Option 1 : Report Manager


Option 2 : Cloudera Navigator
Option 3 : Activity Monitor
Option 4 : All of the above

Q26.Which of the following aggregates events and makes them available for alerting and
searching?

Difficulty Level : Moderate


Response : 1 Status : Correct

Option 1 : Event Server


Option 2 : Host Monitor
Option 3 : Activity Monitor
Option 4 : None of the above

Q27.Which tab in the cloudera manager is used to add a service?

Difficulty Level : Moderate


Response : 3 Status : Correct


Option 1 : Hosts
Option 2 : Activities
Option 3 : Services
Option 4 : None of the above

Q28.Which of the following provides http access to HDFS?

Difficulty Level : Moderate


Response : 1 Status : Correct

Option 1 : HttpsFS
Option 2 : Name Node
Option 3 : Data Node
Option 4 : All of the above

Q29.Which of the following is used to balance a load in case of addition of a new node
and in case of a failure?

Difficulty Level : Moderate


Response : 2 Status : Correct

Option 1 : Gateway
Option 2 : Balancer
Option 3 : Secondary Name Node
Option 4 : None of the above

Q30.Which of the following is used to designate a host for a particular service?

Difficulty Level : Moderate


Response : 1 Status : Correct

Option 1 : Gateway
Option 2 : Balancer
Option 3 : Secondary Name Node
Option 4 : All of the above

Q31.Which of the following are the configuration files?

Difficulty Level : Moderate


Response : 3 Status : Correct

Option 1 : Core-site.xml
Option 2 : Hdfs-site.xml
Option 3 : Both
Option 4 : None of the above
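
Illustration (not part of the original exam): a small sketch of how core-site.xml and hdfs-site.xml are consumed. Creating a Configuration object loads both files from the classpath; the property values printed depend on the cluster, and the fallback replication value shown is only an assumption.

    import org.apache.hadoop.conf.Configuration;

    // core-site.xml and hdfs-site.xml on the classpath are loaded automatically
    // when a Configuration object is created.
    public class ConfigExample {
        public static void main(String[] args) {
            Configuration conf = new Configuration();
            // fs.defaultFS normally comes from core-site.xml,
            // dfs.replication from hdfs-site.xml.
            System.out.println("fs.defaultFS    = " + conf.get("fs.defaultFS"));
            System.out.println("dfs.replication = " + conf.get("dfs.replication", "3"));
        }
    }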

Q32.Which are the commercial leading Hadoop distributors in the market?

Difficulty Level : Moderate


Response : 4 Status : Correct

Option 1 : Cloudera , Intel, MapR


Option 2 : MapR, Cloudera, Teradata
Option 3 : Hortonworks, IBM, Cloudera
Option 4 : MapR, Hortonworks, Cloudera

Q33.What are the core components enclosed in the Apache Hadoop bundle?

Difficulty Level : Moderate


Response : 1 Status : Correct

Option 1 : HDFS, Map-reduce,YARN,Hadoop Commons


Option 2 : HDFS, NFS, Combiners, Utility Package
Option 3 : HDFS, Map-reduce, Hadoop core
Option 4 : MapR-FS, Map-reduce,YARN,Hadoop Commons

Q34.Apart from its basic components, Apache Hadoop also provides:

Difficulty Level : Moderate


Response : 4 Status : Correct

Option 1 : Apache Hive


Option 2 : Apache Pig
Option 3 : Apache Zookeeper
Option 4 : All the above

Q35.Rolling upgrades are not possible in which of the following?

Difficulty Level : Moderate


Response : 2 Status : Correct

Option 1 : Cloudera
Option 2 : Hortonworks
Option 3 : MapR
Option 4 : Possible in all of the above

Q36.In which of the following is HBase latency low with respect to the others:

Difficulty Level : Moderate


Response : 3 Status : Correct

Option 1 : Cloudera
Option 2 : Hortonworks
Option 3 : MapR
Option 4 : IBM BigInsights

Q37.MetaData Replication is possible in:

Difficulty Level : Moderate


Response : 3 Status : Correct

Option 1 : Cloudera
Option 2 : Hortonworks
Option 3 : MapR
Option 4 : Teradata

Q38.Disaster recovery management is not handled by:

Difficulty Level : Moderate


Response : 1 Status : Correct

Option 1 : Hortonworks
Option 2 : MapR
Option 3 : Cloudera
Option 4 : Amazon Web Services EMR

Q39.Mirroring concept is possible in Cloudera.

Difficulty Level : Moderate


Response : 2 Status : Correct

Option 1 : True
Option 2 : False

Q40.Does MapR support only Streaming Data Ingestion?

Difficulty Level : Moderate


Response : 2 Status : Correct

Option 1 : True
Option 2 : False

Q41.HCatalog is an open source metadata framework developed by:

Difficulty Level : Moderate


Response : Status : Unanswered

Option 1 : Cloudera
Option 2 : MapR
Option 3 : Hortonworks
Option 4 : Amazon EMR

Q42.BDA can be applied to gain knowledge of user behaviour and prevent customer churn in the Media and Telecommunications industry.

Difficulty Level : Moderate


Response : Status : Unanswered

Option 1 : True
Option 2 : False

Q43.What is the correct sequence of Big Data Analytics stages?

Difficulty Level : Moderate


Response : Status : Unanswered

Option 1 : Big Data Production > Big Data Consumption > Big Data Management
Option 2 : Big Data Management > Big Data Production > Big Data Consumption
Option 3 : Big Data Production > Big Data Management > Big Data Consumption
Option 4 : None of these

Q44.Big Data Consumption involves:

Difficulty Level : Moderate


Response : Status : Unanswered

Option 1 : Mining
Option 2 : Analytic
Option 3 : Search and Enrichment
Option 4 : All of the above

Q45.Big Data Integration and Data Mining are the phases of Big Data Management.

Difficulty Level : Moderate


Response : Status : Unanswered

Option 1 : True
Option 2 : False

Q46.RDBMS, Social Media data, Sensor data are the possible input sources to a big data
environment.

Difficulty Level : Moderate


Response : Status : Unanswered

Option 1 : True
Option 2 : False

Q47.Which of the following types of data is it not possible to store in a big data environment and then process/parse?

Difficulty Level : Moderate


Response : Status : Unanswered

Option 1 : XML/JSON type of data


Option 2 : RDBMS
Option 3 : Semi-structured data
Option 4 : None of the above

Q48.A software framework for writing applications that process vast amounts of data in parallel is known as:

Difficulty Level : Moderate


Response : Status : Unanswered

Option 1 : Map-reduce
Option 2 : Hive
Option 3 : Impala
Option 4 : None of the above

Q49.In the proper flow of map-reduce, the reducer will always be executed after the mapper.

Difficulty Level : Moderate


Response : Status : Unanswered

Option 1 : True
Option 2 : False

Q50.Which of the following are the features of Map-reduce?

Difficulty Level : Moderate


Response : Status : Unanswered

Option 1 : Automatic parallelization and distribution


Option 2 : Fault-Tolerance
Option 3 : Platform independent
Option 4 : All of the above

Q51.Where does the intermediate output of the mapper get written?

Difficulty Level : Moderate


Response : Status : Unanswered

Option 1 : Local disk of node where it is executed.


Option 2 : HDFS of node where it is executed.
Option 3 : On a remote server outside the cluster.
Option 4 : Mapper output gets written to the local disk of Name node machine.

Q52.Reducer is required in map-reduce job for:

Difficulty Level : Moderate


Response : Status : Unanswered

Option 1 : It combines all the intermediate data collected from mappers.


Option 2 : It reduces the amount of data by half of what is supplied to it.
Option 3 : Both a and b
Option 4 : None of the above

Q53.Output of every map is passed to which component.

Difficulty Level : Moderate


Response : Status : Unanswered

Option 1 : Partitioner
Option 2 : Combiner
Option 3 : Mapper
Option 4 : None of the above

Q54.Data Locality concept is used for:

Difficulty Level : Moderate


Response : Status : Unanswered

Option 1 : Localizing data


Option 2 : Avoiding network traffic in hadoop system
Option 3 : Both A and B
Option 4 : None of the above

Q55.The number of files in the output of a map-reduce job depends on:

Difficulty Level : Moderate


Response : Status : Unanswered

Option 1 : No of reducer used for the process


Option 2 : Size of the data
Option 3 : Both A and B
Option 4 : None of the above

Q56.Input format of the map-reduce job is specified in which class?

Difficulty Level : Moderate


Response : Status : Unanswered

Option 1 : Combiner class


Option 2 : Reducer class
Option 3 : Mapper class
Option 4 : Any of the above

Q57.The intermediate keys, and their value lists, are passed to the Reducer in sorted key
order.

Difficulty Level : Moderate


Response : Status : Unanswered

Option 1 : True
Option 2 : False

Q58.In which stage of the map-reduce job data is transferred between mapper and
reducer?

Difficulty Level : Moderate


Response : Status : Unanswered

Option 1 : Transfer
Option 2 : Combiner
Option 3 : Distributed Cache
Option 4 : Shuffle and Sort

Q59.A maximum of three reducers can run at any time in a MapReduce job.

Difficulty Level : Moderate


Response : Status : Unanswered

Option 1 : True
Option 2 : False

Q60.Functionality of the Jobtracker is to:

Difficulty Level : Moderate


Response : Status : Unanswered


Option 1 : Coordinate the job run


Option 2 : Sorting the output
Option 3 : Both A and B
Option 4 : None of the above

Q61.The submit() method on Job creates an internal JobSubmitter instance and calls _____
on it.

Difficulty Level : Moderate


Response : Status : Unanswered

Option 1 : jobSubmitInternal()
Option 2 : internalJobSubmit()
Option 3 : submitJobInternal()
Option 4 : None of these

Q62.Which method polls the job's progress and after how many seconds?

Difficulty Level : Moderate


Response : Status : Unanswered

Option 1 : WaitForCompletion() and after each second


Option 2 : WaitForCompletion() after every 15 seconds
Option 3 : Not possible to poll
Option 4 : None of the above
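
Illustration (not part of the original exam): a minimal driver sketch tying Q61 and Q62 together. waitForCompletion(true) submits the job (internally Job.submit() goes through JobSubmitter.submitJobInternal()) and then polls and prints its progress until it finishes. With no mapper or reducer set, Hadoop's identity classes are used, so this is only a pass-through example; the input and output paths come from the command line.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    // Minimal driver: identity mapper/reducer defaults, so records pass straight through.
    public class MinimalDriver {
        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "minimal job");
            job.setJarByClass(MinimalDriver.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            // waitForCompletion(true) submits the job and then reports progress
            // periodically until it completes.
            boolean ok = job.waitForCompletion(true);
            System.exit(ok ? 0 : 1);
        }
    }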

Q63.Job Submitter tells the task tracker that the job is ready for execution.

Difficulty Level : Moderate


Response : Status : Unanswered

Option 1 : True
Option 2 : False

Q64.Hadoop 1.0 runs 3 instances of job tracker for parallel execution on hadoop
cluster.

Difficulty Level : Moderate


Response : Status : Unanswered

Option 1 : True
Option 2 : False

Q65.Map and Reduce tasks are created in job initialization phase.

Difficulty Level : Moderate


Response : Status : Unanswered

Option 1 : True
Option 2 : False

Q66.Heartbeats received after how many seconds help the job tracker decide on the health of a task tracker?

Difficulty Level : Moderate


Response : Status : Unanswered

Option 1 : After every 3 seconds


Option 2 : After every 1 second
Option 3 : After every 60 seconds
Option 4 : None of the above

Q67.The task tracker has a fixed number of slots assigned for map and reduce tasks.

Difficulty Level : Moderate


Response : Status : Unanswered

Option 1 : True
Option 2 : False

Q68.To improve the performance of the map-reduce task, the jar that contains the map-reduce code is pushed to each slave node over HTTP.

Difficulty Level : Moderate


Response : Status : Unanswered

Option 1 : True
Option 2 : False

Q69.Map-reduce can take which type of format as input?

Difficulty Level : Moderate


Response : Status : Unanswered

Option 1 : Text
Option 2 : CSV
Option 3 : Arbitrary
Option 4 : None of these

Q70.Input files can be located in HDFS or the local system for map-reduce.

Difficulty Level : Moderate


Response : Status : Unanswered

Option 1 : True
Option 2 : False

Q71.Is there any default InputFormat for input files in the map-reduce process?

Difficulty Level : Moderate


Response : Status : Unanswered

Option 1 : KeyValueInputFormat
Option 2 : TextInputFormat.
Option 3 : A and B
Option 4 : None of these

Q72.An InputFormat is a class that provides the following functionality:

Difficulty Level : Moderate


Response : Status : Unanswered

Option 1 : Selects the files or other objects that should be used for input
Option 2 : Defines the InputSplits that break a file into tasks
Option 3 : Provides a factory for RecordReader objects that read the file
Option 4 : All of the above
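
Illustration (not part of the original exam): how the input format from Q71/Q72 is chosen in a driver. TextInputFormat is the default; KeyValueTextInputFormat expects tab-separated key/value lines (as in Q77). The input path argument is a placeholder.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.input.KeyValueTextInputFormat;
    import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

    // The InputFormat decides which files are read, how they are split into
    // InputSplits (one map task per split), and which RecordReader parses them.
    public class InputFormatExample {
        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "input format demo");
            FileInputFormat.addInputPath(job, new Path(args[0]));

            job.setInputFormatClass(TextInputFormat.class);           // default: byte offset -> line text
            // job.setInputFormatClass(KeyValueTextInputFormat.class); // tab-separated key/value lines
        }
    }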

Q73.An InputSplit describes a unit of work that comprises a ____ map task in a MapReduce
program.

Difficulty Level : Moderate


Response : Status : Unanswered

Option 1 : One
Option 2 : Two
Option 3 : Three
Option 4 : None of these


Q74.The FileInputFormat and its descendants break a file up into ____MB chunks.

Difficulty Level : Moderate


Response : Status : Unanswered

Option 1 : 128
Option 2 : 64
Option 3 : 32
Option 4 : 256

Q75.What allows several map tasks to operate on a single file in parallel?

Difficulty Level : Moderate


Response : Status : Unanswered

Option 1 : Processing of a file in chunks


Option 2 : Configuration file properties
Option 3 : Both A and B
Option 4 : None of the above

Q76.The Record Reader is invoked ________ on the input until the entire InputSplit has
been consumed.

Difficulty Level : Moderate


Response : Status : Unanswered

Option 1 : Once
Option 2 : Twice
Option 3 : Repeatedly
Option 4 : None of these

Q77.Which of the following is KeyValueTextInputFormat?

Difficulty Level : Moderate


Response : Status : Unanswered

Option 1 : Key is separated from the value by Tab


Option 2 : Data is specified in binary sequence
Option 3 : Both A and B
Option 4 : None of the above

Q78.In the map-reduce programming model, mappers can communicate with each other:

Difficulty Level : Moderate


Response : Status : Unanswered

Option 1 : True
Option 2 : False

Q79.A user can define their own partitioner class.

Difficulty Level : Moderate


Response : Status : Unanswered

Option 1 : True
Option 2 : False
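
Illustration (not part of the original exam): what a user-defined partitioner can look like. The routing rule (by first letter of the key) is made up for the example; it would be registered on the job with job.setPartitionerClass(FirstLetterPartitioner.class).

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Partitioner;

    // Example custom partitioner: routes keys to reducers by their first letter.
    public class FirstLetterPartitioner extends Partitioner<Text, IntWritable> {
        @Override
        public int getPartition(Text key, IntWritable value, int numReduceTasks) {
            if (numReduceTasks == 0) {
                return 0;
            }
            String k = key.toString();
            char first = k.isEmpty() ? 'a' : Character.toLowerCase(k.charAt(0));
            return first % numReduceTasks; // char is non-negative, so this is a valid partition index
        }
    }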

Q80.The OutputFormat class is a factory for RecordWriter objects; these are used to write the individual records to the files as directed by the OutputFormat:

Difficulty Level : Moderate


Response : Status : Unanswered

Option 1 : True
Option 2 : False

Q81.Which of the following are part of the Hadoop ecosystem?

Difficulty Level : Moderate


Response : Status : Unanswered

Option 1 : Talend,MapR,NFS
Option 2 : Mysql,Shell
Option 3 : Pig,Hive,Hbase
Option 4 : None of the above

Q82.The default Metastore location for Hive is:

Difficulty Level : Moderate


Response : Status : Unanswered

Option 1 : Mysql
Option 2 : Derby
Option 3 : PostgreSQL
Option 4 : None of the above

Q83.Extend the following class to write a User Defined Function in Hive.

Difficulty Level : Moderate


Response : Status : Unanswered

Option 1 : HiveMapper
Option 2 : Eval
Option 3 : UDF
Option 4 : None of the above
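
Illustration (not part of the original exam): a classic-style Hive UDF that extends the UDF base class named in Option 3. The lower-casing behaviour and class name are just an example; such a function would typically be packaged in a jar and registered with ADD JAR and CREATE TEMPORARY FUNCTION.

    import org.apache.hadoop.hive.ql.exec.UDF;
    import org.apache.hadoop.io.Text;

    // Classic-style Hive UDF: extend UDF and implement evaluate().
    public final class Lower extends UDF {
        public Text evaluate(final Text s) {
            if (s == null) {
                return null; // propagate SQL NULL
            }
            return new Text(s.toString().toLowerCase());
        }
    }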

Q84.Which component of the hadoop ecosystem supports updates?

Difficulty Level : Moderate


Response : Status : Unanswered

Option 1 : Zookeeper
Option 2 : Hive
Option 3 : Pig
Option 4 : Hbase

Q85.Which hadoop component should be used if a join of dataset is required?

Difficulty Level : Moderate


Response : Status : Unanswered

Option 1 : Hbase
Option 2 : Hive
Option 3 : Zookeeper
Option 4 : None of the above

Q86.Which hadoop component can be used for ETL?

Difficulty Level : Moderate


Response : Status : Unanswered

Option 1 : Pig
Option 2 : Zookeeper
Option 3 : Hbase
Option 4 : None of the above

Q87.Which hadoop component is best suited for pulling data from the web?

Difficulty Level : Moderate


Response : Status : Unanswered

Option 1 : Hive
Option 2 : Zookeeper
Option 3 : Hbase
Option 4 : Flume

Q88.Which hadoop component can be used to transfer data from a relational DB to HDFS?

Difficulty Level : Moderate


Response : Status : Unanswered

Option 1 : Zookeeper
Option 2 : Pig
Option 3 : Sqoop
Option 4 : None of the above

Q89.In an application, more than one hadoop component cannot be used on top of HDFS.

Difficulty Level : Moderate


Response : Status : Unanswered

Option 1 : True
Option 2 : False

Q90.Hbase supports join.

Difficulty Level : Moderate


Response : Status : Unanswered

Option 1 : True
Option 2 : False

Q91.Pig can work only with data present in HDFS.

Difficulty Level : Moderate


Response : Status : Unanswered

Option 1 : True
Option 2 : False

Q92.Which tool out of the following can be used for an OLTP application?

Difficulty Level : Moderate


Response : Status : Unanswered

Option 1 : Pentaho
Option 2 : Hive
Option 3 : Hbase
Option 4 : None of the above

Q93.Which tool is best suited for real time writes?

Difficulty Level : Moderate


Response : Status : Unanswered

Option 1 : Pig
Option 2 : Hive
Option 3 : Hbase
Option 4 : Cassandra

Q94.Which of the following hadoop components is called the ETL of hadoop?



Difficulty Level : Moderate


Response : Status : Unanswered

Option 1 : Pig
Option 2 : Hbase
Option 3 : Talend
Option 4 : None of the above

Q95.Hadoop can completely replace traditional DBs.

Difficulty Level : Moderate


Response : Status : Unanswered

Option 1 : True
Option 2 : False

Q96.Zookeeper can also be used for data transfer.

Difficulty Level : Moderate


Response : Status : Unanswered

Option 1 : False
Option 2 : True

Q97.Map-reduce cannot be tested on data/files present in the local file system.

Difficulty Level : Moderate


Response : Status : Unanswered

Option 1 : True
Option 2 : False

Q98.Hive was developed by:

Difficulty Level : Moderate


Response : Status : Unanswered

Option 1 : Tom White


Option 2 : Cloudera
Option 3 : Doug Cutting
Option 4 : Facebook

Q99.MRv1 programs cannot be run on top of clusters configured for MRv2.

Difficulty Level : Moderate


Response : Status : Unanswered

Option 1 : True
Option 2 : False
