<p><span style="font-size: large;"><strong>JOB DESCRIPTION:</strong></span></p>
<p><span><span>• Big Data Admin responsible for the build-out, day-to-day management, and support of Big Data clusters based on Hadoop and related technologies, both on-premises and in the cloud, and accountable for cluster availability.</span></span></p>
<p><span><span><span>• Responsible for implementation and support of the Enterprise Hadoop environment, which involves</span><span class="WbZuDe"> designing, capacity planning, cluster setup, monitoring, structure planning, scaling, and administration of Hadoop components (YARN, MapReduce, HDFS, HBase, ZooKeeper, Storm, Kafka, Spark, Pig, and Hive).</span></span></span></p>
<p><span><span><span class="WbZuDe"><span>• Work with data delivery teams to set up new Hadoop users. This includes creating Linux users, setting up Kerberos principals, and testing HDFS, Hive, Pig, and MapReduce access for the new users.</span></span></span></span></p>
<p><span><span><span class="WbZuDe"><span>• Responsible for implementation and ongoing administration of Hadoop infrastructure.</span></span></span></span></p>
<p><span><span><span class="WbZuDe"><span><span>• Identify hardware and software problems, and storage or related system malfunctions.</span></span></span></span></span></p>
<p><span>• Leverage experience diagnosing network performance issues; support development and production deployments.</span></p>
<p><span>• Set up, configure, and maintain security for Big Data clusters.</span></p>
<p><span><span>• Expand and maintain our Hadoop environments (MapR distribution, HBase, Hive, YARN, ZooKeeper, Oozie, Spyglass, etc.) and Apache stack environments (Java, Spark/Scala, Kafka, Elasticsearch, Drill, Kylin, etc.).</span></span></p>
<p><span><span><span>• Contribute to the evolving architecture of our storage service to meet changing requirements for scaling, reliability, performance, manageability, and price. </span></span></span></p>
<p><span><span><span>• Collaborate with application teams to install operating system and Hadoop updates, patches, and version upgrades when required.</span></span></span></p>
<p><span><span><span>• Work closely with Technology Leadership, Product Managers, and the Reporting Team to understand functional and system requirements.</span></span></span></p>
<p><span><span><span>• Maintain clusters, including the addition and removal of nodes, using tools such as Ganglia, Nagios, and Amazon Web Services.</span></span></span></p>
<p><span><span><span>• Excellent troubleshooting and problem-solving abilities.</span></span></span></p>