I am trying to submit a MapReduce job to an HDInsight cluster. I deliberately did not write a reducer, because I do not want to reduce anything. All I want to do is parse each file name and prepend values derived from it to every line in the file, so that each output line carries all the necessary data.
My code:

```csharp
using Microsoft.Hadoop.MapReduce;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace GetMetaDataFromFileName
{
    class Program
    {
        static void Main(string[] args)
        {
            var hadoop = connectAzure();

            //Temp workaround for environment variables
            Environment.SetEnvironmentVariable("HADOOP_HOME", @"c:\hadoop");
            Environment.SetEnvironmentVariable("Java_HOME", @"c:\hadoop\jvm");

            var result = hadoop.MapReduceJob.ExecuteJob<MetaDataGetterJob>();
        }

        static IHadoop connectAzure()
        {
            //TODO: Update credentials and other information
            return Hadoop.Connect(
                new Uri("https://sampleclustername.azurehdinsight.net//"),
                "admin",
                "Hadoop",
                "password",
                "blobstoragename.blob.core.windows.net", //Storage account where the log files live
                "AccessKeySample",                       //Storage account access key
                "logs",                                  //Container name
                true
            );
        }

        //Hadoop Mapper
        public class MetaDataGetter : MapperBase
        {
            public override void Map(string inputLine, MapperContext context)
            {
                try
                {
                    //Get the metadata from the name of the file
                    string[] _fileMetaData = context.InputFilename.Split('_');

                    string _PublicIP = _fileMetaData[0].Trim();
                    string _PhysicalAdapterMAC = _fileMetaData[1].Trim();
                    string _BootID = _fileMetaData[2].Trim();
                    string _ServerUploadTime = _fileMetaData[3].Trim();
                    string _LogType = _fileMetaData[4].Trim();
                    string _MachineUpTime = _fileMetaData[5].Trim();

                    //Generate the CSV portion
                    string _RowHeader = string.Format("{0},{1},{2},{3},{4},{5},",
                        _PublicIP, _PhysicalAdapterMAC, _BootID,
                        _ServerUploadTime, _LogType, _MachineUpTime);

                    //Prepend _RowHeader to every row in the file
                    context.EmitLine(_RowHeader + inputLine);
                }
                catch (ArgumentException)
                {
                    return;
                }
            }
        }

        //Hadoop Job Definition
        public class MetaDataGetterJob : HadoopJob<MetaDataGetter>
        {
            public override HadoopJobConfiguration Configure(ExecutorContext context)
            {
                //Initiate the job config
                HadoopJobConfiguration config = new HadoopJobConfiguration();
                config.InputPath = "asv://logs@sample.blob.core.windows.net/Input";
                config.OutputFolder = "asv://logs@sample.blob.core.windows.net/Output";
                config.DeleteOutputFolder = true;
                return config;
            }
        }
    }
}
```
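For context, the filename-parsing part of the mapper can be exercised on its own, outside the cluster. Below is a minimal sketch of that logic; the sample file name and the order of its six underscore-separated fields are assumptions based on the `Split('_')` code above, not a confirmed naming convention:

```csharp
using System;

class FileNameParserDemo
{
    // Builds the CSV row prefix from an underscore-delimited file name,
    // mirroring the Split('_') logic used in the mapper.
    static string BuildRowHeader(string fileName)
    {
        string[] parts = fileName.Split('_');
        if (parts.Length < 6)
            throw new ArgumentException("Expected at least 6 underscore-separated fields.");

        // Field order assumed: publicIP, MAC, bootID, serverUploadTime, logType, machineUpTime
        return string.Format("{0},{1},{2},{3},{4},{5},",
            parts[0].Trim(), parts[1].Trim(), parts[2].Trim(),
            parts[3].Trim(), parts[4].Trim(), parts[5].Trim());
    }

    static void Main()
    {
        // Hypothetical file name with the six expected fields
        string name = "10.0.0.1_AA-BB-CC_42_2014-01-01_syslog_3600";
        Console.WriteLine(BuildRowHeader(name) + "some log line");
        // → 10.0.0.1,AA-BB-CC,42,2014-01-01,syslog,3600,some log line
    }
}
```

Testing this piece locally at least rules out the parsing logic as the source of the failure before the job is submitted.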
What usually causes error 500 (Server Error)? Could I be using the wrong credentials? In fact, I do not really understand the difference between the Username and HadoopUser parameters of the Hadoop.Connect method.
Thanks,