Hadoop Systems Administrator

ndoiljobs

Job Location: Oklahoma City, OK, USA

Posted on: 2024-12-10T19:32:24Z

Job Description:

JOB SUMMARY

As a Hadoop Systems Administrator, you will be responsible for the daily operations and maintenance of IT systems centered on Continental's big data platform. You will keep these systems maintained and optimized through daily, weekly, and monthly tuning and periodic maintenance. Day-to-day operations will require interfacing with other IT departments, developers, and project management. You will also manage the testing and implementation of upgrades and changes to supported systems, following change control processes.

DUTIES/RESPONSIBILITIES

  • Day-to-day maintenance activities, including but not limited to log monitoring and cleanup, cluster monitoring, deployments, production support, and performance tuning
  • Install, configure, monitor, tune, and troubleshoot all components of the CDH environments, including but not limited to Cloudera Manager, Cloudera Management Services, HDFS, YARN, ZooKeeper, Hive, Spark, Spark Streaming, Hue, Kudu, Impala, HBase, Key Management Server, Kafka, Flume, Solr, SSL, Sqoop, and Sentry
  • Build, tune, and administer search and indexing frameworks such as Solr and Elasticsearch
  • Build, tune, and administer HBase clusters in multiple environments
  • Design, document, develop, unit test, and deploy scripts for automating cluster patching, application upgrades, and application deployments using UNIX shell and Java in multiple environments
  • Automate daily administrative tasks to guarantee high availability of systems, applications, and data
  • Troubleshoot issues with applications, services, and clusters in multiple environments
  • Implement and administer Active-Active cluster configurations for disaster recovery
  • Manage cluster hardening activities, implementing security controls with Sentry and LDAP integration to enforce data governance for data accessed by data scientists and analysts
  • Performance-tune cluster services in multiple environments
  • Capacity planning for cluster upgrades
  • Set up monitoring and alerting for applications and services on the clusters using tools such as Splunk and Grafana
  • Automate data and log backups and archival (see the sketch after this list)
  • Automate log analysis and set up alerting
  • Build, maintain, and administer clusters to support highly available, low-latency real-time streaming applications
  • Communicate daily with business and technical teams to understand their needs, and build, maintain, and support environments that address those use cases
  • Cluster patching activities
  • Cluster upgrades when required
  • Other duties as assigned
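
The following is a minimal sketch of the kind of log backup and archival automation described above, not Continental's actual tooling; the log root, archive location, and retention window are assumptions to adjust per cluster.

    #!/usr/bin/env bash
    # Hypothetical cleanup job: archive, then prune, aggregated application
    # logs older than a retention window. All paths here are assumptions.
    set -euo pipefail

    LOG_ROOT="/tmp/logs"                # assumed YARN log-aggregation root
    ARCHIVE_ROOT="/archive/yarn-logs"   # assumed HDFS archive location
    RETENTION_DAYS=30
    CUTOFF=$(date -d "-${RETENTION_DAYS} days" +%s)

    # 'hdfs dfs -ls' prints: perms repl owner group size date time path;
    # skip the "Found N items" header, keep date (field 6) and path (field 8).
    hdfs dfs -ls "${LOG_ROOT}" | awk 'NR>1 {print $6, $8}' | \
    while read -r mod_date path; do
        if [ "$(date -d "${mod_date}" +%s)" -lt "${CUTOFF}" ]; then
            echo "Archiving and removing ${path} (last modified ${mod_date})"
            hdfs dfs -cp "${path}" "${ARCHIVE_ROOT}/"   # copy out first
            hdfs dfs -rm -r -skipTrash "${path}"        # then prune
        fi
    done

A job like this would typically run from cron or an Oozie schedule so that retention is enforced without manual intervention.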

SKILLS

  • Excellent knowledge of Linux (Red Hat preferred)
  • Deep understanding of Hadoop and Spark cluster security, network connectivity, and I/O throughput, along with other factors that affect distributed system performance
  • Strong working knowledge of disaster recovery, incident management, and security best practices
  • Working knowledge of containers (e.g., Docker) and major orchestrators (e.g., Mesos, Kubernetes, Docker Datacenter)
  • Working knowledge of automation tools (e.g., Puppet, Chef, Ansible)
  • Working knowledge of software-defined networking
  • Working knowledge of parcel-based upgrades with Hadoop (i.e., Cloudera)
  • Working knowledge of hardening Hadoop with Kerberos, TLS, and HDFS encryption (see the example after this list)
  • Ability to quickly perform critical analysis and use creative approaches for solving complex problems
  • Excellent written and verbal communication skills
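
As a quick illustration of the hardening items above, the spot checks below exercise Kerberos authentication, HDFS transparent encryption, and TLS. The principal, keytab path, hostname, and port are hypothetical placeholders, not values from this posting.

    # 1. Confirm Kerberos authentication works for a service principal
    #    (keytab path and realm are assumptions for this sketch).
    kinit -kt /etc/security/keytabs/hdfs.headless.keytab hdfs@EXAMPLE.COM
    klist    # should show a valid ticket-granting ticket

    # 2. Confirm HDFS encryption zones and their backing KMS keys exist.
    hdfs crypto -listZones
    hadoop key list

    # 3. Confirm the Cloudera Manager console is serving TLS (7183 is the
    #    default TLS port; verify against the actual deployment).
    echo | openssl s_client -connect cm-host.example.com:7183 2>/dev/null \
        | openssl x509 -noout -subject -dates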

QUALIFICATIONS

  • Bachelor's degree from an accredited college/university in Computer Science, Computer Engineering, or a related field, and a minimum of four years of big data experience with multiple programming languages and technologies
  • Understanding of cloud and distributed systems principles, including load balancing, networks, scaling, and in-memory vs. disk storage; and experience with large-scale big data methods such as MapReduce, Hadoop, Spark, Hive, Impala, or Storm
  • Ability to work efficiently in Unix/Linux or .NET environments, with experience using source code management systems such as Git
  • Ability to work with team members and clients to assess needs, provide assistance, and resolve problems, using excellent problem-solving skills, verbal/written communication, and the ability to explain technical concepts to business people
  • Successful completion of a pre-employment background check and drug test