Company (NYSE: Company) is a global communications and IT services company focused on connecting its customers to the power of the digital world. Company offers network and data systems management, big data analytics, managed security services, hosting, cloud, and IT consulting services. The company provides broadband, voice, video, advanced data and managed network services over a robust 265,000-route-mile U.S. fiber network and a 360,000-route-mile international transport network. Visit Company for more information.
Support the Company Cognilytics organization in the development and implementation of support services for multi-tenant big-data-as-a-service products. Responsibilities will include provisioning and support automation development in an agile environment.
Work with product developers to build, test, and automate the provisioning of various configurations of the Hadoop ecosystem.
Cluster management and maintenance using a variety of tools including Cloudera Manager, Nagios, Ganglia and Graphite.
Administer clusters, troubleshoot, isolate problems, and correct issues discovered in them.
Performance tuning of Hadoop clusters and ecosystem components and jobs. This includes the management and review of Hadoop log files.
Hadoop security management and auditing.
Work with end-to-end teams to guarantee high availability of the Hadoop clusters.
Deploy configurations of the Cloudera Distribution of Hadoop from both the command line and Cloudera Manager.
Support the integration of 3rd Party data movement and visualization products into the ecosystem.
Work in a hybrid infrastructure environment.
Manage development priorities, projects, resources, issues and risks effectively.
General operational expertise, including strong troubleshooting skills and an understanding of system capacity, bottlenecks, and the basics of memory, CPU, OS, storage, and networking.
The most essential requirements: the ability to deploy a Hadoop cluster, add and remove nodes, keep track of jobs, monitor critical parts of the cluster, configure NameNode high availability, schedule and configure the cluster, and take backups.
Good knowledge of Linux, as Hadoop runs on Linux.
Firm grasp of UNIX/Linux fundamentals in relation to UNIX scripting and administration. Strong experience with CentOS or RHEL.
Experience administering production Big Data systems based on Hadoop distributions (Apache, Hortonworks, or Cloudera), including related technologies such as HBase, Hive, Impala, etc.
Experience in automation technologies, preferably Ansible.
Expert in Linux shell scripting. Python or Scala scripting experience preferred.
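To give a concrete sense of the NameNode high-availability work mentioned above, a minimal hdfs-site.xml fragment for a quorum-journal-based HA pair might look like the following sketch. The nameservice ID `mycluster` and all hostnames are placeholder assumptions, not values from this posting:

```xml
<!-- Illustrative hdfs-site.xml fragment for NameNode HA.
     Nameservice ID and hostnames below are placeholders. -->
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>namenode1.example.com:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
  <value>namenode2.example.com:8020</value>
</property>
<property>
  <!-- Shared edit log kept on a JournalNode quorum -->
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://jn1.example.com:8485;jn2.example.com:8485;jn3.example.com:8485/mycluster</value>
</property>
<property>
  <!-- How HDFS clients locate the currently active NameNode -->
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
  <name>dfs.ha.automatic-failover.enabled</name>
  <value>true</value>
</property>
```

Once such a pair is deployed, the active/standby state can be checked from the command line with `hdfs haadmin -getServiceState nn1`, and overall cluster health with `hdfs dfsadmin -report`; in a Cloudera environment these settings are typically managed through Cloudera Manager rather than edited by hand.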
Travel: As required to support internal BDaaS development
This job may require successful completion of an online assessment. A brief description of the assessments can be viewed on our website at Company website/
We are committed to providing equal employment opportunities to all persons regardless of race, color, ancestry, citizenship, national origin, religion, veteran status, disability, genetic characteristic or information, age, gender, sexual orientation, gender identity, marital status, family status, pregnancy, or other legally protected status (collectively, “protected statuses”). We do not tolerate unlawful discrimination in any employment decisions, including recruiting, hiring, compensation, promotion, benefits, discipline, termination, job assignments or training.
The above job definition information has been designed to indicate the general nature and level of work performed by employees within this classification. It is not designed to contain or be interpreted as a comprehensive inventory of all duties, responsibilities, and qualifications required of employees assigned to this job. Job duties and responsibilities are subject to change based on changing business needs and conditions.