Dear online students, in this video I will guide you through the process of setting up your RHadoop workspace. I am assuming that (i) you already have the latest version of VirtualBox installed and (ii) you have successfully downloaded our mint_hadoop virtual machine, where the RHadoop environment is preconfigured. We will run RHadoop in five steps. First, we search our computer for VirtualBox and run it. In VirtualBox we find our virtual machine, called mint_hadoop, and run it. If you receive any notification about the keyboard or mouse, we suggest that you ignore it. Second, we log in as hduser with the password "hadoop". A welcome screen appears, which we can close.
In the third step we run Hadoop. First, we open a terminal window.
We run Hadoop with two commands: start-dfs.sh starts the Hadoop Distributed File System (HDFS). This establishes one namenode and the related datanodes, in our case only one datanode. Next, we run start-yarn.sh to start the master and node resource managers, which schedule MapReduce jobs. Now Hadoop is running. In the fourth step we run R, which we will use to create and submit MapReduce tasks to Hadoop. We decided to use RStudio, which is a free and open-source integrated development environment for R. We run R through RStudio from the terminal window. Note that if running the script 'rstudio' reports some warnings, they are probably related to missing fonts. We ignore them and just press Enter.
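The start-up sequence above can be sketched as the following terminal session; the jps sanity check is an addition not shown in the video, and the exact list of daemons it prints depends on the Hadoop version installed in the virtual machine:

```shell
# Start the Hadoop Distributed File System (one namenode, one datanode)
start-dfs.sh

# Start YARN: the master and node resource managers that schedule MapReduce jobs
start-yarn.sh

# Optional sanity check: list the running Hadoop Java daemons
# (expect entries such as NameNode, DataNode, ResourceManager, NodeManager)
jps

# Launch RStudio from the same terminal; font-related warnings can be ignored
rstudio
```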
In the last step we set up RStudio for data analysis with RHadoop. We open a new script file and save it to our local folder. At the beginning, we must set the system environment for Hadoop. These lines define the system variables. We copy them into the script file, select them, and execute them by pressing Ctrl+Enter. Finally, we load the basic RHadoop libraries. We establish our connection to the Hadoop Distributed File System by loading the library rhdfs. To perform a statistical analysis in R with Hadoop MapReduce we also need to load the library rmr2, where the scripts for the map and reduce operations are defined. We close this last step by executing hdfs.init().
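A minimal sketch of such a setup script follows. The paths and the streaming-jar version below are assumptions for a typical single-node installation; they must be adjusted to match the actual Hadoop layout inside the mint_hadoop machine:

```r
# Assumed paths -- adjust to the Hadoop installation in the virtual machine;
# the jar version number here is only an example
Sys.setenv(HADOOP_CMD = "/usr/local/hadoop/bin/hadoop")
Sys.setenv(HADOOP_STREAMING =
  "/usr/local/hadoop/share/hadoop/tools/lib/hadoop-streaming-2.7.3.jar")

library(rhdfs)  # connectivity to the Hadoop Distributed File System
library(rmr2)   # map and reduce operations for MapReduce jobs

hdfs.init()     # initialise the connection to HDFS
```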
Now RHadoop is ready and we can start writing scripts for big-data analysis.
When we want to close the RHadoop session we: save all the script files and, if needed, also the workspace; close RStudio by clicking the close button; stop Hadoop by typing stop-yarn.sh and stop-dfs.sh; and close the terminal window and the mint_hadoop virtual machine by clicking the close button.
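The shutdown from the terminal can be sketched as follows; the order mirrors the video, stopping YARN before HDFS:

```shell
# Stop YARN (resource and node managers) first
stop-yarn.sh

# Then stop the Hadoop Distributed File System
stop-dfs.sh
```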
Setting up the RHadoop workspace
© PRACE and University of Ljubljana