Hadoop Resume Samples


The Guide To Resume Tailoring

Guide the recruiter to the conclusion that you are the best candidate for the hadoop job. It’s actually very simple. Tailor your resume by picking relevant responsibilities from the examples below and then add your accomplishments. This way, you can position yourself in the best way to get hired.

Craft your perfect resume by picking job responsibilities written by professional recruiters

Pick from the thousands of curated job responsibilities used by the leading companies

Tailor your resume & cover letter with wording that best fits each job you apply for

Resume Builder

Create a Resume in Minutes with Professional Resume Templates

CHOOSE THE BEST TEMPLATE - Choose from 15 Leading Templates. No need to think about design details.
USE PRE-WRITTEN BULLET POINTS - Select from thousands of pre-written bullet points.
SAVE YOUR DOCUMENTS IN PDF FILES - Instantly download in PDF format or share a custom link.

Kayli Pollich
943 Lueilwitz Locks
Phoenix, AZ
Phone: +1 (555) 374 3342
Experience
Hadoop Admin
McDermott-Gerlach
Philadelphia, PA
  • Cloudera Manager configuration for Static and Dynamic Resource Management Allocations for MapReduce and Impala Workloads
  • Responsible for people management, including goal setting and providing performance feedback
  • File system management and cluster monitoring using Cloudera Manager
  • Innovation & continuous improvement - actively look for innovation, continuous improvement, and efficiency in all assigned tasks
  • Based on analysis of the project type, provide input on project methodology to senior stakeholders (project managers, architects, etc.)
  • Consolidate input from developers and collectively provide input on activities/tasks, task-level estimates, schedule, dependencies, risks, etc.
  • Fair amount of domain expertise gained through working on the application or through certification programs (if working in a vertical)
Software Engineer, Hadoop
Runolfsdottir-Witting
Dallas, TX
  • Working knowledge of Unix/Linux
  • Solid computer science fundamentals such as complexity analysis, data structures and software design
  • Conduct performance testing and monitoring of production systems
  • Software development in a collaborative team environment using Scrum Agile methodologies, to build data pipelines using technologies like Spark
  • Development of unit and integration tests and ownership of testing the software you develop
  • You are passionate about shipping quality high-performance code
  • 4+ years of developing data pipelines
Hadoop Solutions Architect
Cummings, Feeney and Cummerata
Chicago, IL (present)
  • Provide technical direction in a team that designs and develops path-breaking, large-scale cluster data processing systems
  • Interact with domain experts, data architects, solutions architects and analytics developers to define data models for streaming input and delivering analytics output
  • Design strategies and detailed approaches to integrate Hadoop with existing applications, including but not limited to Oracle, DB2 and Enterprise Data Warehouse and Data Mart applications
  • Helping Northern Trust's internal partners develop strategies that maximize the value of their data
  • Help establish thought leadership in the big data space by contributing internal papers, technical commentary to the user community
  • Researches and maintains knowledge in emerging technologies and possible application to the business
  • Design high-throughput and streaming data pipelines using the Experian Lambda Architecture; internal bulk sources include server logs, file uploads, database logs, relational sources and CDC; streaming sources include web sockets, HTTP client callbacks, and Kafka events
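
The bullet above calls for streaming ingestion of sources such as Kafka events into a Lambda-style pipeline. Below is a minimal speed-layer consumer sketch in Python, assuming the third-party kafka-python package; the topic name, broker address, and consumer group are placeholders, not details from the posting.

```python
# Minimal speed-layer consumer sketch (hypothetical topic/broker/group names).
# Assumes the third-party kafka-python package: pip install kafka-python
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "clickstream-events",              # placeholder topic
    bootstrap_servers="broker:9092",   # placeholder broker address
    group_id="lambda-speed-layer",     # placeholder consumer group
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    # A real speed layer would update serving views here; the sketch just
    # prints each event's coordinates and payload.
    print(message.topic, message.partition, message.offset, message.value)
```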
Education
Bachelor’s Degree in Computer Science
Hofstra University
Skills
  • Should be proficient in Business Knowledge, Software Engineering Leadership, Technical Solution Design and Change Management
  • Able to communicate effectively across varied areas including technology, business units and senior leadership
  • Should have experience in Functional Architecture Design and Architecture Knowledge, Negotiating, Financial Analysis and Continuous (Service) Improvement
  • Ability to debug applications
  • Certification: technical certification (e.g. Cisco, VMware); or achieved professional certification TOGAF 9 Level 1; or has / is seeking IAF Level 1
  • Should have progressing knowledge in Business Analysis
  • Ability to work on multiple projects concurrently
  • Proven experience introducing new technologies and ideas into development/support practices; experience with Java, Python, or other scripting languages; RESTful web services; data integration technologies (e.g. DataStage); and Git or another comparable code management tool
  • Support DDSW on Caterpillar's new Hadoop platform, including identifying, communicating and resolving data integration and consumption issues; manage ongoing data warehouse performance and service availability; provide Hadoop technical support to an existing 24x5 support team providing global support (located in Peoria and India); assist the support team with resolving issues in a data ingestion solution developed on the Hadoop platform; drive Root Cause Analysis and Problem Management processes; work with other application teams and with infrastructure teams to investigate issues and root causes. Experience: proven experience in designing and implementing robust, highly available IT solutions; strong troubleshooting and analytic skills; experience with the incident management and problem management process (ITSM) discipline
  • Excellent analytical and problem-solving skills

15 Hadoop resume templates

1

Corporate Technology Hadoop Data Architect Lead Resume Examples & Samples

  • Design and development of data models for a new HDFS Master Data Reservoir and one or more relational or object Current Data environments
  • Design of optimum storage allocation for the data stores in the architecture
  • Development of data frameworks for code implementation and testing across the program
  • Knowledge and experience with RDF and other Semantic technologies
  • Participation in code reviews to assure that developed and tested code conforms with the design and architecture principles
  • QA and testing of modules/applications/interfaces
  • End-to-end project experience through to completion, supervising turnover to Operations staff
  • Preparation of documentation of data architecture, designs and implemented code
2

Hadoop Cluster Engineer Resume Examples & Samples

  • Deliver core hardware, software and data infrastructure components in support of business requirements
  • Lead platform engineering efforts building and delivering platforms to support the use of
  • Hadoop components - HDFS, HBase, Hive, Sqoop, Flume, MapReduce, Python, Pig, HiveQL
  • Traditional ETL tools
  • Partner closely within a team and across IT organization
  • Grow and develop skills of other team members
  • Ensure quality of the technology solution delivered, e.g. stability, scalability, availability, performance, cost, security
  • Provide thought leadership and dependable execution in building and evolving big data platforms
3

DSI Hadoop Technology Lead Resume Examples & Samples

  • Excellent technical & hands-on knowledge of Hadoop, with very clear engineering concepts including –
  • Networks
  • Operating System
  • Distributed Computing
  • Capable of designing strategic solutions
  • Adhering to standards and patterns for implementation of reusable solutions and creation of assets for the platform
  • Good understanding of business and operational processes
4

Rainstor & Hadoop Infrastructure Analyst Resume Examples & Samples

  • Performance of BAU change ticket deployments for services across the global estate
  • Owning, scheduling and implementing maintenance releases to the global estate
  • Identifying incident and problem trends, proposing plans to address and reduce
  • Performing delivery tasks as a member of Service Improvement Unit initiatives
  • Performing delivery tasks for internal projects as directed by L3 DBA technical leads
5

Hadoop Expert Resume Examples & Samples

  • Any of the other technologies in the Hadoop ecosystem (PIG, Hive, HBase, ...)
  • Other modern data technologies (Cassandra, Storm, Kafka, ...)
  • Python, *NIX
6

Hadoop Admin Resume Examples & Samples

  • We build and deploy strategic infrastructure to the business
  • We are transforming our infrastructure, reducing complexity and optimising costs to drive global, sustainable and scalable solutions
  • We are focused on continued 'Go-To' Customer Availability, driving excellence in service resilience, change management and pro-active problem management
  • Solid knowledge of Hadoop administration on distributed platforms
  • Ownership and leadership of technical resolution for high severity production incidents (Sev 3 high, Sev 2 and Sev 1)
  • Adherence to Barclays standards for change control, database security and database patterns
  • Lead technical team in disaster recovery and business continuity exercises
  • Define operational procedures and documentation
  • Provide on-call rotation support where required
  • Lead and define database delivery requirements for Service Improvement Unit initiatives
  • Design and lead internal projects and deliveries aligned with the Database roadmap
  • Communications – required to participate in upgrade planning and major incident bridge calls, requiring communication to senior managers and at times directors
  • Presentation skills – as part of solving problems will be required to present technical solutions to senior technicians in own and partner teams
  • Analysis – will be required to perform detailed analysis of incidents to both identify and solve underlying trends
  • Innovation – will be expected to propose innovative ways of improving the service, for example to be more cost-efficient
  • Proven experience as lead technician supporting major or critical applications
  • Experience in leading and mentoring diagnosis and resolution of technical issues
  • Expert in performance tuning
  • Leads technical incident reviews
  • Resolves incidents and problems and fluently applies escalation and notification procedures for major incidents
  • Experienced with owning change planning, communication and approvals for BAU change and production maintenance
  • Collaborates and communicates well with partner teams – application run and build, service management, incident management, server and storage
  • Expert knowledge of Hadoop and Rainstor mechanisms including: backup, HA DR, monitoring, auditing
  • Knowledge of the ITIL framework, vocabulary and best practices
  • Expert in working with Vendor technical support staff, including vendor development teams
  • Expertise in defining technical plans and procedures
  • Demonstrable experience of database performance tuning and capacity management on critical customer-facing systems and high-volume payments services
  • Experience of working in a highly regulated environment and the associated processes and tools
  • Have a strong working knowledge of Barclays infrastructure components and the processes required to maintain the infrastructure
  • Have a good understanding of the business areas and application / service components used and an appreciation of applications from both a technical and business perspective
  • Expertise in documentation tools and writing / maintaining technical plans and procedures
7

DSI Hadoop Technology Lead-VP Resume Examples & Samples

  • Functional Programming
  • Knowledge of regulatory, legal, group Risk focussed implementations will be preferred
  • Represent Big Data technologies and capabilities in required forums to drive an integrated solution
  • Capable of Problem / issue resolution, capable of thinking out of the box
  • Participate in Design Reviews & Daily Project Scrums
  • Ensure reusability via component catalogues and conducting design reviews
  • Define innovative solutions, constantly exploring & challenging product capabilities, liaise with product experts from within the organisation or with vendor partners
  • Participate in the E2E lifecycle of a product delivery and guide the teams as required
  • Should be able to work in an agile environment with evolving demands and increasing expectations
  • Capable of contributing to Platform strategy and vision working with the Portfolio Design Lead and Architects
  • Expert problem solving skills
  • Ability to handle stressful situations with perseverance and professionalism
  • Delivers engaging, informative, well-organized presentations
  • Resolves and/or escalates issues in a timely fashion to line manager/ADM (for project delivery)
  • Understands how to communicate difficult/sensitive information tactfully
8

DSI Hadoop Senior Administrator Resume Examples & Samples

  • General Linux system administration: configuration, installs, automation, and monitoring
  • Automate operations, installation and monitoring of the Cloudera framework, specifically HDFS, MapReduce, YARN and HBase (a minimal monitoring sketch follows this list)
  • Automate the setup of Linux Clusters
  • Be able to take the initiative to automate cluster admin processes
  • Work with the team in providing hardware architectural guidance, planning and estimating cluster capacity, and creating roadmaps for Hadoop cluster deployment
  • Continuous evaluation of Hadoop infrastructure requirements and design/deploy solutions (high availability, big data clusters, etc)
  • Understand the end to end architecture and process flow. Understand Business requirements and involvement in design discussions
  • Have an excellent understanding of how the Hadoop components work and be able to quickly troubleshoot issues
  • Be able to benchmark and fine-tune a Hadoop cluster
  • Have a good knowledge of computer networks and security
  • Result oriented approach with ability to solve issues
  • Exposure to automation scripts, performance improvement & fine-tuning services / flows
  • Troubleshoot and debug Hadoop ecosystem issues
  • Address recovery of node failures and troubleshooting common Hadoop cluster issues
  • Documentation and communication of all production scenarios, issues and resolutions
  • Good understanding of Hadoop core concepts and technologies. Solid foundation in Linux Administration
  • Linux shell scripting experience
  • Some experience with Python or Perl
  • Troubleshooting experience with the Hadoop ecosystem
  • Hadoop, Hive, Pig, HBase, Spark
  • Be able to secure a Hadoop cluster, integrating with Kerberos, LDAP and ACLs
  • Passion for solving hard problems and exploring new technologies
  • Excellent communication and technical documentation skills
  • Experience in Unix scripts and databases
  • Very good communication and organizational skills. The candidate should be able to articulate his/her thoughts well, both in writing and verbally, show examples of making the right decisions/initiatives, and be able to work independently
  • Very good analytical skills. The candidate should be able to demonstrate the ability to perform in-depth analysis of issues and proactively identify problems, make suggestions, provide insight, analyse the situation and come up with the right recommendations
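
Several of the bullets above ask for automated monitoring of cluster health. A minimal sketch that polls the NameNode's JMX-over-HTTP endpoint, assuming the requests package, a placeholder hostname (9870 is the default HTTP port on Hadoop 3.x, 50070 on 2.x), and metric names as exposed by recent releases (verify against your version):

```python
# Minimal HDFS health-check sketch against the NameNode /jmx endpoint.
# Hostname, port, and alert thresholds are placeholders.
import requests

NAMENODE_JMX = "http://namenode.example.com:9870/jmx"

def fsnamesystem_state():
    resp = requests.get(
        NAMENODE_JMX,
        params={"qry": "Hadoop:service=NameNode,name=FSNamesystemState"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["beans"][0]

if __name__ == "__main__":
    state = fsnamesystem_state()
    used_pct = 100.0 * state["CapacityUsed"] / state["CapacityTotal"]
    print(f"HDFS capacity used: {used_pct:.1f}%")
    print(f"Live DataNodes: {state['NumLiveDataNodes']}, dead: {state['NumDeadDataNodes']}")
    if state["NumDeadDataNodes"] > 0 or used_pct > 85:
        print("ALERT: cluster needs attention")
```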
9

Hadoop Infrastructure Engineer Resume Examples & Samples

  • Automate operation, installation and monitoring of Hadoop ecosystem components in our open source infrastructure stack; specifically: HBase, HDFS, Map/Reduce, Yarn, Oozie, Pig, Hive, Tez, Kafka, Storm
  • Evaluate Hadoop infrastructure requirements and design/deploy solutions (high availability, big data clusters, elastic load tolerance, etc.)
  • Do performance analysis and capacity planning for our clusters
  • Troubleshoot and debug Hadoop ecosystem run-time issues
  • Provide developer and operations documentation
  • Recommend OS and hardware level optimizations
  • Proven experience building and scaling out Hadoop based or Unix hosted database infrastructure for an enterprise
  • 2+ years experience of Hadoop administration or a strong and diverse background of distributed cluster management experience
  • 2+ years of Chef, Puppet, Ansible, etc. system configuration experience (or incredible quality shell scripting for systems management -- error handling, idempotency, configuration management; see the idempotency sketch after this list)
  • 2+ years of DevOps or System Administration experience (be conversant in Unix networking and C system calls)
  • Having used continuous integration (Jenkins or Buildbot) for at least one released software project is a plus
  • Open source experience is a plus (a well curated blog, upstream accepted contribution or community presence)
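
The configuration-management bullet above highlights idempotency. A minimal illustration in plain Python of what an idempotent step means in practice - running it twice leaves the system in the same state as running it once; the file path and line are illustrative only:

```python
# Idempotent configuration step: append a line to a file only if it is missing.
import os

def ensure_line(path: str, line: str) -> bool:
    """Return True if the file was changed, False if already compliant."""
    existing = []
    if os.path.exists(path):
        with open(path, "r", encoding="utf-8") as handle:
            existing = [l.rstrip("\n") for l in handle]
    if line in existing:
        return False  # already converged; do nothing
    with open(path, "a", encoding="utf-8") as handle:
        handle.write(line + "\n")
    return True

if __name__ == "__main__":
    changed = ensure_line("/tmp/example-limits.conf", "hdfs - nofile 65536")
    print("changed" if changed else "no change needed")
```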
10

AVP Hadoop & Rainstor L DBA Resume Examples & Samples

  • Performance of incident and problem ticket resolution to SLA for services across the global Hadoop and Rainstor estate
  • Adherence to Barclays standards for change control, database security and database patterns
  • Participation in disaster recovery and business continuity exercises
  • Develop and maintain operational procedures and documentation
  • Provide on-call rota and shift support where required
  • Performing delivery tasks as a member of Service Improvement Unit initiatives and for internal projects as directed by L3 DBA technical leads
  • 4+ years of solid knowledge of Hadoop administration on distributed platforms
  • 4+ years of experience with ITIL framework, vocabulary and best practices
  • 4+ years of experience raising Vendor service tickets and working with Vendor technical support staff
  • 4+ years of experience with documentation tools and writing / maintaining technical plans and procedures
  • 4+ years of operating knowledge of Hadoop and Rainstor mechanisms including: backup, HA DR, monitoring, auditing
  • 4+ years of experience of performance tuning and capacity management
  • 4+ years of experience working in a highly regulated environment and the associated processes and tools
  • Proven experience in maintaining and supporting major or critical applications
  • Participates in Post Incident Reviews
  • Collaborates and communicates well with partner teams – application run and build, service management, incident management, server and storage
11

Senior Analyst, Business Intelligence, Hadoop Resume Examples & Samples

  • Degrees with a balanced focus between technical and business-related disciplines
  • Relevant experience in the Telecom or Media industry
  • Advanced analytics in any of the following areas: Predictive modeling, machine learning, forecasting, network analysis, recommendation engines, optimization models, NP hard problems
  • Effective communication and presentation skills to both technical and non-technical audiences
  • Experience using the Hadoop tool set such as Pig, Hive, MapReduce, Java, Spark, etc. (3+ years)
  • Experience with any of the above responsibilities or Cloudera specific tools is preferred
12

Hadoop Software Sales Specialist Resume Examples & Samples

  • Develop and execute a strategic and comprehensive business plan for the assigned territory, including identifying core customers, mapping the benefits of IBM solutions to the business requirements
  • Coach partners, accessing resources within IBM to support them on specific opportunities, with the goal of building sufficient capacity to meet customers’ demand for IBM-related services and skills
  • 4 years Enterprise Software Sales
  • 2 years Open Source Applications, Big Data, Database and/or Business Intelligence, and Enterprise Application Integration
  • 2 years working in cloud based solutions for On Premise and/or Off Premise Managed Solutions
13

Hadoop Production Support Developer Resume Examples & Samples

  • Responsible for analyzing broken processes and developing and deploying a resolution
  • Work with project delivery teams to review code before it is deployed into production for adherence to architectural standards and code defects
  • Collaborate with other areas of support (infrastructure, source systems, etc.) to resolve high impact events in a timely manner
  • Provide recommendations on improving the availability of reports and the stability of the production environments
  • Keep manager informed on important developments and/or potential escalations
  • Develop knowledge base by creating document from support experience
  • Manage incident/tasks while communicating with their owners regarding resolution
  • Must be willing to participate in weekend on-call rotations
14

Hadoop Big Data Engineer Secure Works Resume Examples & Samples

  • Perform day-to-day operational support of our VLDB clusters by monitoring their health and status
  • RedHat certification – RHCSA or RHCE – or equivalent Red Hat Linux experience
  • Operational experience with Hadoop distributions
  • Cloudera's CDH
  • Hortonworks' HDP
  • AWS EMR (including EC2/S3)
  • Create and run example MapReduce jobs, and perform advanced Hive and Impala queries (a minimal Hadoop Streaming sketch follows this list)
  • Interpret Hadoop service and role logs (HDFS, etc.)
  • Work with distributed user job logs (MapReduce, etc.)
  • Linux (Red Hat 6/7), user commands, and administrative tools, including Satellite
  • Scripting languages. Bash and Perl required. Python recommended
  • TCP/IP networking concepts, including subnetting and routing
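
The MapReduce/Hive bullet above mentions running example MapReduce jobs, and Python is listed as a recommended scripting language. A minimal Hadoop Streaming word-count sketch; the streaming jar location and HDFS paths in the example invocation are placeholders:

```python
#!/usr/bin/env python3
# Minimal Hadoop Streaming word count: one file acting as mapper or reducer.
# Example invocation (jar path and HDFS directories are placeholders):
#   hadoop jar hadoop-streaming.jar \
#       -files wordcount.py \
#       -mapper  "python3 wordcount.py map" \
#       -reducer "python3 wordcount.py reduce" \
#       -input /data/in -output /data/out
import sys

def mapper():
    for line in sys.stdin:
        for word in line.split():
            print(f"{word}\t1")

def reducer():
    # Streaming sorts by key, so identical words arrive contiguously.
    current, total = None, 0
    for line in sys.stdin:
        word, count = line.rstrip("\n").split("\t")
        if word != current:
            if current is not None:
                print(f"{current}\t{total}")
            current, total = word, 0
        total += int(count)
    if current is not None:
        print(f"{current}\t{total}")

if __name__ == "__main__":
    role = sys.argv[1] if len(sys.argv) > 1 else "reduce"
    mapper() if role == "map" else reducer()
```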
15

Hadoop Hunter Sales Specialist Resume Examples & Samples

  • At least 4 years of experience leading a complex sales campaign in a highly competitive environment
  • At least 4 years of experience articulating the value of IBM's Big Data Portfolio, including the differences between data at rest, data in motion, data in the cloud, etc
  • At least 4 years of experience understanding the market dynamics of the Big Data space to lead competitive positioning in a sales campaign
16

Foundational Services Hadoop Resume Examples & Samples

  • Work with various application teams within Bloomberg that look to use these technology stacks to understand their pain points and feature requirements
  • Develop clean and performant code acceptable to upstream communities
  • Interface with various open source communities and be an influencer and voice of reason within them
  • Proven Open Source community interactions (upstream accepted contributions and community presence)
  • Experience designing and implementing low latency, high-scalability systems
  • Strong background in Systems-level Java - thorough knowledge of garbage collection internals, Concurrency models, Native & Async IO, Off-heap memory management, etc
  • Experience troubleshooting complex distributed systems in a production setting and providing patches in a timely manner, as and when needed
17

Foundational Services Hadoop Resume Examples & Samples

  • Work with various application teams within Bloomberg that look to use these technology stacks to understand how to model their analytic workflows
  • Develop clean and performant code acceptable to application teams
  • Interface with the Apache Spark and the Scientific Python communities and be an influencer within them
  • Programming competence in Java and Scala with Python being a strong bonus
  • Strong proficiency with Apache Hadoop and Apache Spark programming paradigms
  • Experience with Pandas, NumPy and abstractions such as Distributed Dataframes is a strong bonus
  • Knowledge of financial data and workflows is a strong plus, though not a necessity
18

Hadoop Strategic Platform Services Manager Resume Examples & Samples

  • Extensive knowledge of midrange cluster-based infrastructure, system and applications architecture, HW/SW acquisition experience including vendor selection and negotiation, and infrastructure project management
  • Good knowledge of the Bank' data warehouse platforms and applications
  • Knowledge of Hadoop components and functionality
  • Management experience to oversee a medium size team, assignment of tasks, review of deliverables
  • Management and Project skills
  • People skills
  • Oral & written communication skills
19

Hadoop Big Data Architect Resume Examples & Samples

  • 10 plus years of experience in Data Architecture and Modeling in Data and Information Architecture and Data Warehousing
  • Practical experience with Big Data technologies and Architecture
  • Experience building multiple Logical Data Models, both custom and industry banking models (FSLDM or BDWM)
  • Experience designing and supporting a multi-terabyte Data Warehouse using Teradata, Oracle, Hadoop or DB2
  • Requires technical understanding of key aspects of current architectures regarding Data Integration, SOA, Master Data Management, Metadata, Data Warehousing, Reporting & BI product suites
  • Experience with data security and privacy architecture is critical
  • Requires extensive understanding of Data Warehousing, Active Data Warehousing, Operational Data Stores, Data Integration, Reporting and BI technologies
  • Experience achieving objectives in a large organization including influencing and negotiating with Sr. Management
  • Experience with and a strong understanding of the Teradata RDBMS, Hadoop and the SAS product suite
  • Experience with different classes of BI tools. Experience with Cognos, Tableau is a plus
  • Excellent people skills, leadership skills and experience motivating people to achieve objectives
  • Understanding of Data Integration and ETL technologies. Experience with DataStage or SAS is preferred
  • Bachelor's degree in Science, Business or Arts is required; Master's degree in Computer Science or similar is preferred
20

Advanced Analytics Devops Engineer Hadoop Resume Examples & Samples

  • Deepening customer knowledge. The core of our existence is to empower our customers to stay a step ahead in life. To fulfill this task we need to improve our ability to better identify the needs of our customers
  • Asset generation. An important strategic theme introduced with our new strategy is consumer and SME lending in the challenger and growth markets. The team can help with improving/ developing prediction models/ solutions (leads/ acceptance/ management/ arrears)
  • Risk reduction. The team can help to improve risk models and help to better detect or predict risks by developing for example advanced network models on how markets/ companies are related or develop early indicators to detect the early stages of a financial crisis or a housing crisis
  • Operational excellence. Flawless execution and easy access to our services for customers is key. The team can help to improve service levels and lower costs for ING
21

Big Data Scientist / Hadoop Specialist Resume Examples & Samples

  • To support the delivery of our advanced analytics initiatives using cutting edge techniques and technologies to deliver our customers the best online gaming experience
  • Working in cross functional teams to deliver innovative data driven solutions
  • Building machine learning frameworks to drive personalisation and recommendations
  • Building predictive models to support marketing and KYC initiatives
  • Continually improving solutions through fast test and learn cycles
  • Analysing a wide range of data sources to identify new business value
  • Be an expert for advanced analytics across the business, educating the business about its capability and helping to identify use cases
  • Experienced in delivering analytics solutions at scale using Hadoop platforms and NoSQL databases, including solutions to drive personalisation and recommendations
  • Experience of elastic search would be an added advantage
  • Experienced in conducting complex statistical analysis, preferably holding a PHD, or Masters in a relevant discipline such as computer science, statistics, applied mathematics, or engineering
  • Expert knowledge of R, SAS, or similar, and experience using Spark or Mahout machine learning libraries
  • Familiarity with methods including clustering, time series regression, decision trees, random forests, Bayesian inference
  • Expert knowledge of SQL, Pig, Hive and shell scripting, and strong programming skills in languages such as Python, Java, Ruby, Scala or Perl
  • An effective communicator and networker, able to influence and explain complex topics to business stakeholders
  • A self-starter with a go-getter attitude
  • A creative/lateral thinker able to resolve problems
  • Confident enough to make well informed judgements to resolve problems on a wide range of issues
  • Well organised and able to prioritise effectively
  • People orientated
  • Able to pick up new data tools/concepts quickly
  • Passionate about data and the commercial benefits of it
  • Able to adapt to constantly changing challenges
  • A graduate in a numerate discipline e.g. mathematics, statistics, economics, computer science
22

Hadoop Applications Developer, Associate Resume Examples & Samples

  • Strong software engineering skills with 3 – 8 years of hands-on experience in developing multi-threaded applications using advanced JAVA. Working knowledge of message oriented architectures and usage of appropriate design patterns
  • Will have responsibility for unit-level design, coding, unit testing, integration testing and participating in the full SDLC
  • Analyze system specifications and translate system requirements to task specifications for junior programmers. Participate in organization initiatives
23

Hadoop Platform Engineer Resume Examples & Samples

  • Administration, management, and tuning of Hadoop stack
  • Assist development staff in identifying the root cause of slow-performing jobs/queries (a minimal sketch using the YARN ResourceManager REST API follows this list)
  • Collaborate with development and infrastructure teams to identify, coordinate, and solve hardware and SW issues
  • Assist developers in determining efficient ways to load new data sources into the big data environment
  • Develop strategy to automate management and deployment processes
  • Planning and conducting platform upgrades
  • Work with development staff to ensure all components are ready for release/deployment
  • Manage and resolve issues on a 24X7 basis as part of an on-call rotation
  • Collect requirements, design and deploy high-availability, robust, resilient and supportable solutions
  • Ability to evaluate new tools and technology and acquire the necessary skills to stay current with the fast pace of change in the big data arena
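
Root-causing slow jobs, as mentioned above, often starts with listing what has been running the longest. A minimal sketch against the YARN ResourceManager REST API, assuming the requests package; the host, port, and one-hour threshold are placeholders:

```python
# List long-running YARN applications via the ResourceManager REST API.
import requests

RM_API = "http://resourcemanager.example.com:8088/ws/v1/cluster/apps"

def long_running_apps(threshold_ms: int = 60 * 60 * 1000):
    resp = requests.get(RM_API, params={"states": "RUNNING"}, timeout=10)
    resp.raise_for_status()
    apps = (resp.json().get("apps") or {}).get("app", [])
    return [a for a in apps if a["elapsedTime"] > threshold_ms]

if __name__ == "__main__":
    for app in long_running_apps():
        hours = app["elapsedTime"] / 3_600_000
        print(f'{app["id"]} user={app["user"]} queue={app["queue"]} '
              f'running {hours:.1f}h: {app["name"]}')
```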
24

Hadoop Sales Specialist Resume Examples & Samples

  • Full responsibility for forecasting (within +/- 10%), deliver accurate monthly and quarterly revenue targets, facilitation of agreed account and business plans and meet ongoing career development goals
  • At least 2 years experience in software sales
  • At least 2 years experience in open source applications, big data, database and/or business intelligence, and enterprise application integration
25

Hadoop Technical Sales Specialist Resume Examples & Samples

  • At least 1 year experience in hands-on Hadoop technologies
  • At least 1 year experience in using core enabling technologies
  • Basic knowledge in streaming data solutions
  • At least 3 years experience in using and/or developing big data solutions
26

Hadoop / Big Data Architect Resume Examples & Samples

  • Designing and building analytic engines to process thousands of streaming, batch and ad-hoc queries in support of our internal decision support systems (DSS), data science pipelines and user analytics
  • Architecting robust, scalable, and easily maintainable data software solutions that support digital and print businesses
  • Leading the implementation of software solutions by architecting, prototyping, coding and directing developers, all the way to productionalization of the solutions
  • Defining a roadmap for our enterprise Hadoop data platform and work closely with data engineering teams on implementation
  • Providing advice and support to production teams by helping with troubleshooting software issues
  • Working in tandem with fellow enterprise architects in other departments to define architectures and technologies to be used across the company
  • Identifying key opportunities for technology to advance the business and evaluating new technologies and setting development best practices
  • Masters degree in Computer Science or related engineering field
  • Minimum 10 years of software development experience, with a minimum of 5 years in data software development
  • Programming experience with a wide variety of today’s most used OO languages (Java, Scala, Clojure), powerful languages for data manipulation (Python, Perl), big data development abstraction frameworks such as Cascading or Scalding, and scripting languages such as Pig
  • Hands-on experience with messaging or pub-sub queuing systems and different types of NoSQL data stores
  • Excellent knowledge of Hadoop and similar data processing frameworks (such as Storm, Spark, Samza, Druid)
  • Knowledge of at least one of the following domains: content management systems, search engines, e-commerce applications, analytics applications, ad-serving technology, or email systems
  • Experience supervising development teams or knowledge of project management techniques
27

Hadoop Software Sales Representative Resume Examples & Samples

  • Ability to lead a complex sales campaign in a highly competitive environment
  • Ability to articulate the value of IBM's Big Data Portfolio, including the differences between data at rest, data in motion, data in the cloud, etc. or similar competitive offerings
  • Ability to understand market dynamics of the Big Data space to lead competitive positioning in a sales campaign
  • Demonstrated knowledge of the economics behind capital purchases and tradeoffs of on-premise and cloud-based software solutions
  • Demonstrated interpersonal and verbal communication skills
  • Competitive self-starter with a sense of urgency
  • Experience working at C-Level
  • Readiness to travel up to 50% annually
  • At least 5 years of software sales experience, preferably within the Business Intelligence or Business Analytics software space
28

Hadoop Lead Resume Examples & Samples

  • Initiate, build and sustain productive relationships
  • Serve as Technology Lead for the Helix Audit analytics suite of HDP objects, including delivery of Helix Early Version components and envisioning and enabling evolution of the capability over time
  • Serve as an escalation point for customer concerns if/when they arise. Identify appropriate resolution to achieve client satisfaction in a timely manner
  • Monitor and manage end-to-end delivery of the data transformation portion (i.e., Talend ETL) of the overall program within scope, time and budget
  • Enforce standard methodologies, processes and tools and ensure compliance to enterprise architecture and overall strategy
  • Provide leadership and direction to direct reports
  • Evaluate and identify potential redundant applications, infrastructure and tools
  • Must be able to work within a matrixed organization – balancing the needs of the customer against firm initiatives and goals
  • Must make decisions and negotiate with customers and overcome obstacles
  • Manage teams of on- and off-shore developers, to deliver in alignment with customer needs, with transparency to IT Services and customer stakeholders
  • Identify, manage and resolve complex issues, preventing escalations, where possible
  • Manage, negotiate and resolve project risks effectively
  • Demonstrate, by example, in-depth knowledge of the EY competency principles and practices, including coaching, teaching and mentoring
  • Think strategically and identify opportunities for optimization
  • Leader and team player – sets example for sub-leads, project managers, business analysts and others to follow
  • Create an open, honest, accountable and collaborative team environment
  • Operate as an empowering leader, ensuring our highly motivated and high performing team members are visible, known by leadership and rewarded
  • Strong domain knowledge in ETL (Extract, Transformation, and Load), Data Analytics, Business Intelligence, Data Warehousing, and Big Data on HDP 2.2 and beyond
  • 3+ years of hands on experience with the Apache Hadoop technology stack
  • Advanced knowledge of HIVE including but not limited to optimization, performance tuning and troubleshooting
  • Advanced knowledge of implementing a large scale HDP cluster, optimization, performance tuning and security using LDAP and Kerberos
  • 10+ years of project delivery and client relationship management in a technology environment
  • Strong business acumen and ability to negotiate with business partners
  • Strong customer orientation and able to manage customer expectations
  • Ability to develop strategic plans and translate them to actionable roadmaps
  • Strong people leadership skills. Proven track record in managing geographically dispersed, diverse teams of 10+ people, including employees, contractors and 3rd-party resources
  • Initiates, builds and maintains productive customer relationships
  • Flexibility to adjust to multiple demands, shifting priorities, ambiguity and rapid change
  • 10+ years in a corporate technology or consulting environment working with multiple disciplines to deliver projects in line with customer needs
  • Work experience in a professional services industry, preferred
29

Hadoop & Compute Infrastructure Team Lead Resume Examples & Samples

  • Lead the efforts of provisioning a firm wide Hadoop & Compute Infrastructure, being both a technical lead and managing a team of highly skilled infrastructure engineers
  • Provide technical guidance and roadmap with regards to automation, security, deployments and support towards a stable and robust clustered compute platform
  • Manage senior stakeholders and tenant relationships
  • Evaluate and address Hadoop infrastructure requirements and design/deploy solutions (high availability, big data clusters, elastic load tolerance, security, metrics, etc.)
  • Create, specify and implement a robust SDLC process for the team in line with the open source environment the team works in
  • Dig deep into performance, scalability, capacity and reliability problems to resolve issues
  • Create productive models for integrating internal application teams and vendors with our infrastructure
  • Track record of successful leadership of a DevOps, Systems Infrastructure or System Administration team
  • Proven experience building and scaling out UNIX-based Cloud Infrastructure for an enterprise (software, network, storage and related)
  • Experience of Hadoop administration or a strong and diverse background of distributed cluster management and operations experience
  • Chef, Puppet, Ansible, etc. system configuration experience (or incredible quality shell scripting for systems management -- error handling, idempotency, configuration management)
  • Expert level knowledge of UNIX internals especially Linux, good understanding of network protocols, experience with network architecture and hardware
  • Strong understanding of continuous build and automated deployment environment
  • Experience in Java, Python or Ruby development is a plus (including testing with standard test frameworks and dependency management systems, knowledge of Java garbage collection fundamentals)
30

Hadoop Specialist Resume Examples & Samples

  • Minimum of 5 years experience working within the Data Warehouse Management domain
  • Minimum of 3 years of managing and supporting applications using Hadoop or Hadoop-based components - HDFS, HBase, Hive, Impala, Sqoop, Flume, etc.
  • Experience in coding Java MapReduce, Python, Pig programming, Hadoop Streaming and HiveQL (a minimal HiveQL-from-Python sketch follows this list)
  • Understanding of traditional ETL tools & Data Warehousing architecture
  • Knowledge of Hadoop and the Hadoop ecosystem required
  • Proven experience as a Hadoop/Big Data specialist in the Business Intelligence and Data management production support space
  • Proven experience within Cloudera Hadoop ecosystems: (MapReduce, HDFS, YARN, Hive, HBase, Sqoop, Pig, Hue, Spark, etc.)
  • Good knowledge on Agile Methodology and the Scrum process is an asset
  • Experience in Teradata and other RDBMS is a highly considerable asset
  • Experience with Cloudera
  • Hands on expertise in Linux/Unix and scripting skills are required
  • Ability to coach other team members
  • Strong communication, technology awareness and capability to interact with senior technology leaders is a must
  • Ability to propose solutions and techniques to resolve complex problems
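
The coding bullet above lists HiveQL alongside Python. A minimal sketch of running a HiveQL query from Python over HiveServer2, assuming the third-party PyHive package (pip install "pyhive[hive]") and placeholder host, database, and table names:

```python
# Run a HiveQL aggregation from Python via HiveServer2 (placeholder names).
from pyhive import hive

conn = hive.Connection(
    host="hiveserver2.example.com",  # placeholder host
    port=10000,
    username="etl_user",             # placeholder user
    database="default",
)
cursor = conn.cursor()
cursor.execute(
    "SELECT event_date, COUNT(*) AS events "
    "FROM web_logs "                 # placeholder table
    "GROUP BY event_date ORDER BY event_date"
)
for event_date, events in cursor.fetchall():
    print(event_date, events)
cursor.close()
conn.close()
```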
31

Hadoop Client Technical Specialist Resume Examples & Samples

  • Experience using and developing big data solutions on technologies such as Apache Hadoop, Spark, Hive, Pig, Sqoop, Oozie, Ambari, HCatalog, Zookeeper, and Flume
  • Experience using core enabling technologies such as Unix, Linux, Windows operating systems, networking, cluster computing, scripting and Java programming, and relational databases
  • Knowledge of streaming data solutions, such as IBM InfoSphere Streams, Apache Storm, Spark streaming, SQLStream, and Data Torrent
  • Experience with enterprise scale distributed NoSQL solutions such as HBase, Cassandra, Riak, MongoDB, and Cloudant
  • Experience with Enterprise Data Warehouses such as Teradata, Netezza, Exadata, MSSQL(Microsoft SQL), DB2, and GreenPlum
  • Experience with data integration and governance solutions such as IBM Information Server, Informatica, Oracle, Talend, and Ab Initio
  • Knowledge of Hadoop marketplace including offerings of IBM, Cloudera, Hortonworks, and MapR
  • At least 1 year of experience in hands-on Hadoop technologies
  • At least 1 year experience in using core enabling technologies such as NOSQL, Data Warehousing, Extract Transform and Load (ETL), Java/Scala/R/Python development
  • Readiness to travel up to 50% annually
  • At least 5 years of relevant working experience
  • Chinese simplified: Fluent
32

Software Eng-SQL on Hadoop Resume Examples & Samples

  • At least a Bachelor's in Computer Science. A Master's/Ph.D degree in a STEM field (e.g. Computer Science, Engineering, Mathematics, Physics, etc.) is highly desired
  • Passion for building great software
  • Excellent programming skills in Java and/or C/C++ and at least one other language
  • Experience working with database, distributed, or parallel systems internals
  • Experience with in-memory processing and/or Hadoop desirable
33

Hadoop Sales Specialist Resume Examples & Samples

  • At least 6 months of experience in enterprise software sales
  • At least 6 months of experience in open source applications, big data, database and/or business intelligence, and enterprise application integration
  • At least 6 months of experience in cloud-based solutions for on-premise and/or off-premise managed solutions
34

Head, Hadoop Distributed Development Resume Examples & Samples

  • 10+ years coding skills in Java and deep experience with open source frameworks
  • Great Knowledge of Big Data platforms e.g. Vertica, Hadoop
  • Current knowledge of industry best practice, direction and trends
  • Ability to engage in solution design across application, platform and infrastructure spaces
  • Comprehensive understanding of software lifecycle including development, testing, release management and production monitoring
  • Deep understanding of production systems and operations including real-time, code level monitoring, application, platform and infrastructure configuration including tools and industry best practice
  • Strong Communication/Collaboration Skills
  • Group / Department-wide Facilitation Skills
  • Conflicts and issue resolution and escalation
  • Great ability to communicate effectively with Developers, Executives, and all levels in between
  • Familiar with EIM business strategy, needs and technology stack
  • 3+ years’ experience in managing individuals
  • Strong understanding of how to elicit success from highly skilled individuals
35

Production Services Lead-teradata / Hadoop Resume Examples & Samples

  • 5+ years of IT experience including: analysis of the Hadoop/DMX-h ETL migration, SQL coding, performance tuning and understanding of ETL best practices, and Teradata/Hadoop architecture
  • 3+ years experience in ETL application development or data warehouse production support
  • 4 years of experience in Teradata Utilities, TPT, Viewpoint and Developer tools available for the Teradata System. This includes MultiLoad, FastLoad, FastExport, TPT, BTEQ, TPump, SQL Assistant
  • Support Teradata software upgrade installations and new feature testing and adoption
  • Understand and meet the standards required by the Production Application owners and Business Partners
  • Ability to benchmark new technologies or features to identify and provide solutions for the top consuming operations
  • Bachelor's degree in Computer Science and business knowledge
  • Hadoop / DMX-h certification is preferred
  • Exposure to the banking and financial industry
36

Senior QA Engineer Hadoop / ETL Resume Examples & Samples

  • Participate as a tester in multiple QA projects, meeting commitments and ensuring that testing and QA activities comply with internal policies and guidelines
  • Review and provide input into requirements, specifications, design documents and other development documentation
  • Partner across various internal and external team members to ensure effective execution, leveraging scale as project demand warrants
  • Display bigger picture approach to issues and scenarios beyond the scope of individual issues
  • Coordinate with team members across different time zones
  • Create comprehensive test plans and clearly articulated test cases, utilizing various testing methods (black box; white box, performance testing; automation)
  • Develop and execute manual and automated test scripts for complex integrated systems to support regression test efforts
  • Setup data simulations to test against a wide variety of application conditions
  • Collaborate with subject-matter experts to optimize test coverage and trace to business requirements
  • Be accountable for tactical day to day decisions including estimating, scheduling, test planning, execution, and risk management
  • Defines and maintains test system configurations, test data
  • BS in Computer Science, Computer Engineering or relevant technical discipline required
  • 3-5 years of development or testing experience. Experience in both Development and testing will be an added advantage
  • Passion for finding issues and a desire to break things while testing beyond an application's limits
  • Work experience with Batch jobs and Hadoop technology (Hive, PIG and Spark)
  • Experience in using industry standard test management tools / frameworks (e.g. ALM, Quality Center, Clear Quest, JIRA, Loadrunner, HP Service Test, SOAPUI)
  • Ability to generate creative and innovative solutions for QA challenges and constraints
  • Solid understanding of black box and white box testing, ETL/ELT data flow testing, Business Intelligence reporting testing, performance testing, test automation, requirements traceability and general QA process reporting
  • Experience with automated build / continuous integration systems
  • Experience in building data transformation processes
  • Able to diagnose and drive continuous improvement in a collaborative manner
  • Strong working experience with Linux or Unix operating systems
  • Should be technically very sound and able to work on multiple projects using different technologies at the same time
  • Should know data warehousing concepts and have sound knowledge of at least one ETL tool (e.g. Informatica, Ab Initio, etc.)
  • Expert-level SQL and stored procedure programming skills for data manipulation (DML) and validation (SQL Server, DB2, Oracle); a minimal reconciliation-test sketch follows this list
  • Experience in waterfall, RUP, iterative and agile methodologies is a strong plus
  • Comfortable learning new technologies quickly
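
The SQL-validation bullet above describes the kind of check that is easy to automate. A minimal reconciliation sketch comparing row counts through DB-API connections; sqlite3 is used only so the example runs stand-alone, and the table names are illustrative:

```python
# Reconcile row counts between a source and a target table (ETL validation).
import sqlite3

def row_count(conn, table: str) -> int:
    return conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]

def assert_counts_match(src_conn, tgt_conn, src_table: str, tgt_table: str) -> None:
    src, tgt = row_count(src_conn, src_table), row_count(tgt_conn, tgt_table)
    assert src == tgt, f"count mismatch: source={src} target={tgt}"

if __name__ == "__main__":
    # In practice the connections would point at the warehouse
    # (SQL Server, DB2, Oracle, Hive, ...); sqlite3 keeps the demo self-contained.
    conn = sqlite3.connect(":memory:")
    conn.executescript(
        "CREATE TABLE src(id INT); CREATE TABLE tgt(id INT);"
        "INSERT INTO src VALUES (1),(2),(3); INSERT INTO tgt VALUES (1),(2),(3);"
    )
    assert_counts_match(conn, conn, "src", "tgt")
    print("row counts reconcile")
```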
37

Hadoop Environment Client Support Resume Examples & Samples

  • Providing platform support services, repairing system disruptions while planning and executing various changes
  • Provide technical documentation that includes design and operational support documentation
  • Working collaboratively with Engineering, Project Management, Implementation Planning team and vendors to schedule implementation of changes to meet overall schedule of the project and mitigate risk of customer experience impacts due to changes
  • Monitoring platform health and system alerts
  • Provide documentation of implementation plans, processes and procedures
  • Liaise with Scotiabank’s technical teams to address application, security and network communication issues, in addition to coordinating projects
  • Provide production support and escalation
  • Provide off-hour and email support on a rotational basis
  • Determining, developing and delivering procedures to protect production environment from unplanned outages
  • Problem and Incident Management
  • Execution and follow through of acceptance test plans
  • Managing vendor SLAs and performance
  • Preventative maintenance routines
  • Platform security (compliance to standards, audits, restriction of changes)
  • Maintaining and optimizing service level KPIs / SLAs
  • Applying upgrades, corrective measures, and customizations
  • Integration of new platforms / applications / Services
  • Problem management to abate issues
  • Identify opportunities to save on support costs to management and implement as directed
  • Identify skills gaps to management
  • Executing development plans
  • Knowledge sharing with peers in Operations and Engineering
  • Familiar with Hadoop development tools & utilities (HDFS, YARN, SAS, Ambari, Pig, Hive, etc.)
38

Software Engineer, Hadoop Resume Examples & Samples

  • 7+ years of experience in data integrations with an enterprise integration tool - SnapLogic a significant plus
  • Experience with the AWS ecosystem - Redshift a significant plus
  • 5+ years in Hadoop ecosystems including data modeling and processing - HDFS, MapReduce, HBase, HIVE
  • Experience with real-time and event processing - Kafka, Storm, Spark or Complex Event Processing (CEP)
  • Strong scripting skills - Python, Java, Ruby
  • Experience with Tidal or enterprise scheduling systems
  • Experience with accessibility evaluation and validation software
39

Hadoop Devops Engineer Resume Examples & Samples

  • Administer, install, and support Hadoop physical clusters
  • Manage and install operating system updates, patches, and configuration changes
  • Help tune infrastructure and plan for resource management
  • Help tune and optimize Hadoop applications
  • Automate configuration and maintenance tasks
  • Participate in design and architecture of Hadoop systems and environments
  • 2+ years of relevant work experience
  • Knowledge of Hadoop architecture, administration, and support (Cloudera preferred)
  • Experience with Linux/Unix (CentOS preferred)
  • Commitment to increasing cluster reliability and performance
  • Knowledge of Python or other scripting language
  • Experience with automated configuration and deployment (Salt, Puppet, Chef etc)
  • Experience with AWS or OpenStack
40

Senior Technical Specialist, Hadoop Resume Examples & Samples

  • Analyses technical problems for senior management, development teams, support teams, and business partners in order to identify the technical approach and solution options for the given problem; this includes collaborating with project/work teams to resolve technical road-blocks, performing solution proof-of-concept (POC), and examining third-party product offerings
  • Provides design, development, and support estimates to senior management and/or project/work teams that are used in determining business initiative feasibility and priorities, and/or used by Project Managers for project planning
  • Provides technical expertise to senior management and project/work teams, including the areas of governance methodologies and processes, for purposes of ensuring the architecture and solutions are consistent with the design framework, Enterprise/industry standards, and the BMO strategic direction
  • Partners with the project Business Analyst (BA) and collaborates with other organizations (e.g., Infrastructure, Audit, and Information Security) to define and document the system non-functional requirements (e.g., Service Level Agreement [SLA] monitoring, audit controls, reporting, and security)
  • Presents solutions to internal governance groups in order to gain approval for the design and communicates the outcomes of these presentations to the project/work teams to keep them updated on the governance process
  • Reviews development and support processes and implements improvements to increase quality, reduce costs, and improve customer satisfaction
  • Provides technical coaching to team members on design patterns, development processes, and other IT processes in order to improve their technical skills
  • Setup and maintain highly scalable and extensible Hadoop platform which enables collection, storage, modeling, and analysis of massive data sets from numerous channels
  • Implement security for the Big data infrastructure
  • Develop best practices and operational support guidelines for Big data platform
  • Develop and maintain up-to-date architectural, implementation and operational documents of the Hadoop infrastructure
  • Monitor the Hadoop infrastructure, develop benchmarks, analyze system bottlenecks and propose solutions to eliminate them
  • Work in close collaboration with development teams and provide technical leadership for projects related to Big Data technologies
  • Analyze latest Big Data Analytic technologies and their innovative applications in both business intelligence analysis and new service offerings. Bring these insights and best practices to BMO
  • Possesses a university degree/college diploma in Computer Science and/or 10+ years systems experience, both hands-on and as a consultant
  • Is experienced in implementing processes at an organizational level using CMMI or ITIL
  • Is experienced in CMMI assessments/International Organization for Standardization (ISO) 9001 Audits/ITIL
  • Possesses expert technical skills and is able to learn and apply new skills quickly
  • Demonstrates advanced communication skills, both written and verbal
  • Demonstrates advanced negotiating and relationship management skills
  • Advanced knowledge in administration of Hadoop components including HDFS, MapReduce, Hive, YARN, Flume, HBase and HAWK
  • Extensive knowledge of Hadoop based data manipulation technologies
  • Experience working with Hadoop Security implementation (Kerberos, Ranger, Knox)
  • Minimum of three (3) years experience designing, developing and administering the Hortonworks distribution of Hadoop
  • Strong experience with Hadoop monitoring/alerting tools (Ambari, Nagios, Ganglia); a minimal Ambari REST sketch follows this list
  • Experience in Hadoop capacity Planning, Troubleshooting and Tuning
  • Experience in Hadoop Cluster migrations or Upgrades
  • Strong Red Hat Linux/SAN Administration skills and RDBMS/ETL knowledge
  • Strong knowledge of JAVA/J2EE and other web technologies
  • Proficient scripting skills in Perl / Python / Shell Scripting / Ruby on Rails
  • Experience with Configuration Management tools (Puppet, Ansible and CHEF)
  • Experience managing an EMC Isilon/OneFS storage is a plus
  • Good communication skills, self-motivated, positive attitude and ability to work independently and in a team environment
  • A critical thinker with a strong troubleshooting and application administration/Support skills
  • Proven timely delivery of key infrastructure and products
  • Must be able to work flexible hours (including weekends and nights) when needed
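
The monitoring bullet above names Ambari. A minimal sketch that reads service states from the Ambari REST API, assuming the requests package; the host, cluster name, and credentials are placeholders:

```python
# Check Hadoop service states through the Ambari REST API (placeholder values).
import requests

AMBARI = "http://ambari.example.com:8080/api/v1"
CLUSTER = "prod_hdp"        # placeholder cluster name
AUTH = ("admin", "admin")   # placeholder credentials

def service_states():
    resp = requests.get(
        f"{AMBARI}/clusters/{CLUSTER}/services",
        params={"fields": "ServiceInfo/state"},
        headers={"X-Requested-By": "ambari"},
        auth=AUTH,
        timeout=10,
    )
    resp.raise_for_status()
    return {
        item["ServiceInfo"]["service_name"]: item["ServiceInfo"]["state"]
        for item in resp.json()["items"]
    }

if __name__ == "__main__":
    for name, state in sorted(service_states().items()):
        flag = "" if state == "STARTED" else "  <-- not started"
        print(f"{name}: {state}{flag}")
```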
41

Hadoop Hunter Technical Sales Specialist Resume Examples & Samples

  • Ability to articulate the value of IBM's Big Data Portfolio, including the differences between data at rest, data in motion, data in the cloud, etc
  • Ability to understand market dynamics of the Big Data space to lead competitive positioning from a technical and economic perspective
  • Demonstrated written communication skills
  • Demonstrated organizational skills, discipline, with attention to detail and ability to balance multiple tasks
  • Interact with corporate-level executives
  • Objection handling skills
  • Understand prospect “insiders” and how competition sells to the account. Develop insights from this and from public sources
  • Grow your technical acumen on Apache Hadoop, its ecosystem, and customer-critical and customer-specific solutions
  • Stay on top of all new product features and fixes
  • Provide technical consultation and education to the Sales team by keeping them apprised on new product information
  • Limited management supervision and direction is provided. This individual will operate independently and drive results as a team and as an individual
42

Cyber Security Hadoop Data Scientist Resume Examples & Samples

  • 5+ years of real world business experience
  • Educational background in Mathematics, Statistics, Computer Science, Information Science or Engineering
  • Excellent organizational, writing and interpersonal skills
  • Experience with programming languages such as Java, Python, Perl or C++
  • Understanding of the Hadoop platform (e.g. MapReduce, Hive, Impala, Spark, Flume, Kafka) and related Big Data technologies
  • Knowledge of R or SAS
  • Fluency with SQL on one or more relational databases such as Microsoft SQL Server, MySQL, Postgres or Oracle
  • 3+ years of cyber security experience with subject matter expertise in one or more domain areas of cyber security
  • Hands-on experience integrating predictive or other analytical models in corporate environments
  • Strong stakeholder interaction skills similar to those of a technical consultant
  • Demonstrable experience integrating application programming interfaces (APIs) of varying type
  • Skill in applying machine learning techniques such as deep learning, recommender systems or natural language processing to statistical problems using such tools as Mahout, scikit-learn or MLLib
  • Compelling ranking on Kaggle
43

Hadoop DBA / Consulting Database Administrator Resume Examples & Samples

  • Bachelor's degree and minimum 10 years of relevant IT experience
  • 14 years of experience in lieu of degree will be accepted
  • Minimum of 5 years experience with SQL
  • General operational expertise such as good troubleshooting skills, understanding of system’s capacity, bottlenecks, basics of memory, CPU, OS, storage, and networks
  • Strong knowledge of Linux is required including Linux security, configuration, tuning, troubleshooting, and monitoring
  • Work on automation and scripting; familiarity with open source configuration management and deployment tools such as Puppet or Chef is helpful. Linux scripting (Perl/Bash) is required; must have experience with Python and/or Ruby
  • Consults with Systems Development, DBA Staff and Technical Support on technical problems, methods, directions and data access methods like SQL, Stored Procedures, Triggers and User Defined Functions
  • Directs and participates with team members in the analysis, development and delivery of all support and project work for assigned function(s). Provides mentoring, knowledge, skills transfer and training to staff. Performs resource planning, deployment, tracking and reporting for all assigned team members
  • Provides requirement analysis and evaluations for proposed data management software products and solutions. Develops or directs the development of utilities used for the monitoring, tuning and analysis of database systems. Stays current on data management technologies and directions
  • Consults with application programmers, technical staff and end users in troubleshooting and improving application response time and availability. Performs database performance monitoring and analysis and proactively implements needed adjustments
  • Bachelor's Degree in Information Systems related curriculum or equivalent education and related training/work experience
  • Ten years' experience, with approximately 4 years as a DBA and at least 2 years of application database programming experience
  • Expert knowledge of database theory, logical and physical database design and database applications
  • Expert knowledge in the design, modeling, developing, and maintenance of large scale databases. Expert knowledge in data access methods, data extraction, migration, and loading processes
  • Two years' experience writing host system scripts in languages such as JCL, Unix shell, or NT batch
  • Two years experience with cross-platform database network communication techniques
44

Java / Hadoop Applications Developer Resume Examples & Samples

  • Deep knowledge and experience with OOP/OOD
  • Interfaces, Classes, Polymorphism, Inheritance, Reuse
  • Design Patterns
  • GoF (Gang of Four), MoM, SOA, MVC
  • Experience creating Architectural Diagrams in Visio or PowerPoint
  • Experience helping Application Development teams complete architecture review documentation
  • Experience conducting Proof of Concepts and collecting metrics
  • Experience creating UML Diagrams
  • Experience creating infrastructure layout diagrams
  • Big Plus: Experience with ER Diagrams and Data Modeling
  • Huge Plus: Experience with Hadoop and Big Data development
  • Experience with Agile techniques
  • Experience conducting Code Reviews
  • Core Java (Java 3+)
  • Extensive experience with Core Java coding
  • Collections (Lists, Maps, Sets)
  • Also able to code with Arrays directly
  • Able to use arrays instead of collections, efficiently
  • Synchronization
  • Thread creation and control
  • Extensive direct JDBC experience
  • Strings and I/O
  • Ability to read large raw data files and parse them into usable tokens for DB Loading or other processing
  • String Matching and Manipulation
  • Reading and Writing from/to Properties Files, XML, Plain Text Files
  • XML, JSON, and other common formats
  • Familiarity with JAXB, SAX, DOM, STAX, JSON Parsing, etc
  • At least intermediate SQL Knowledge is a must (Oracle dialect is a plus)
  • JMS and other Messaging concepts
  • Web Services Development
  • REST Web Services (JAX-RS)
  • SOAP Web Services (JAX-WS)
  • Custom HTTP Servlet web-services based implementations
  • EJB 3.0 is a plus
  • Unix/Linux experience (Command Line/Perl/Shell/Python)
  • Maven and build scripts; Jenkins is a plus
  • Job Scheduling: Autosys, Control-M, etc
  • Big Plus: Big Data / NoSQL (Specifically Hadoop)
  • Hadoop
  • Hive
  • HBase
  • HDFS
  • Map-Reduce
  • Spark is a Plus
  • Oozie is a plus
  • MongoDB or other NoSQL databases
  • Graph Databases
  • Big Plus: Kafka Messaging
  • Plus: In-Memory Caching Technologies
  • Gemfire
  • SqlFire or GemfireXD
  • Coherence
  • Plus: Integration with Python, C#/.NET or other non-Java Languages based Clients via Web Service and Messaging
  • Plus: Workflow Engine Integration with Java
  • Engine Examples: AquaLogic BPM, Oracle BPM, Tibco Staffware/Iprocess
  • Plus: Knowledge of ETL Frameworks such as: Abinitio and Informatica
  • Integration of these tools with Hadoop is a big plus
  • Prior Financial Services Industry experience, especially in the Risk or Finance Technology area or Reference Data experience
45

Senior Systems Administrator, Hadoop Resume Examples & Samples

  • Primary responsibility would be to keep the Hadoop infrastructure operational across various environments
  • Work with the team to build the Hadoop platform infrastructure reference architecture
  • Install and manage the various tools and the tech stack required by the Engineering Team
  • Proactively monitor systems and drive troubleshooting and tuning
  • Understand all components of the platform infrastructure in order to analyze impact of alerts and other system messages
  • Knowledge in networking to troubleshoot and isolate infrastructure issues
  • Be the point of contact for production issues and alerts and resolve the issues in a timely manner
  • Install, configure, and upgrade nodes in high availability production clusters
  • Manage multi-site production infrastructure, including deployment, maintenance, troubleshooting, performance tuning, and security
  • Ensure proper monitoring, alerting, capacity planning and reporting in the production environment
  • Contribute to the evolving design and architecture of reliable and scalable infrastructure
  • Develop processes, tools, and documentation in support of production operations
  • Evaluate new software, hardware and infrastructure solutions
  • Hadoop administrator Certification (CDH/HDP) is a big plus
  • Knowledge of retail industry
46

Hadoop Support Engineer Resume Examples & Samples

  • Manage scalable Hadoop cluster environments
  • Manage the backup and disaster recovery for Hadoop data (a minimal distcp-based sketch follows this list)
  • Work with Linux server admin team in administering the server hardware and operating system
  • Assist with developing and maintaining the system runbooks
  • Mentor, develop and train junior staff members as needed. Provide off hours support on a rotational basis
  • Considerable IT experience
  • Proven experience in Linux systems
  • Experience in deploying and administering Hadoop Clusters
  • Good communication and troubleshooting skills
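
The backup and disaster recovery bullet above is often implemented with distcp between clusters. The sketch below is a minimal illustration under stated assumptions: the source and DR NameNode URIs and the directory list are hypothetical placeholders, not details from the posting.

    #!/usr/bin/env python3
    """Minimal sketch: mirror critical HDFS directories to a DR cluster with distcp."""
    import subprocess

    SOURCE_NN = "hdfs://prod-nn:8020"               # placeholder source NameNode URI
    TARGET_NN = "hdfs://dr-nn:8020"                 # placeholder DR NameNode URI
    DIRECTORIES = ["/data/warehouse", "/data/raw"]  # placeholder paths to protect

    def backup(path):
        """Copy one directory tree to the DR cluster, keeping it in sync."""
        subprocess.run(
            [
                "hadoop", "distcp",
                "-update",   # copy only files that changed since the last run
                "-delete",   # drop files on the target that no longer exist on the source
                "-p",        # preserve permissions, ownership and other status
                f"{SOURCE_NN}{path}",
                f"{TARGET_NN}{path}",
            ],
            check=True,
        )

    if __name__ == "__main__":
        for directory in DIRECTORIES:
            backup(directory)

A wrapper like this is typically scheduled (for example via cron or Oozie) and paired with periodic restore tests on the DR side.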
47

Hadoop Solutions Architect Resume Examples & Samples

  • The Big Data Solution Architect will be responsible for guiding the full lifecycle of a Hadoop solution, including requirements analysis, governance, capacity requirements, technical architecture design (including hardware, OS, and network topology), application design, testing, and deployment
  • Provide technical direction in a team that designs and develops path breaking large-scale cluster data processing systems
  • Interact with domain experts, data architects, solutions architects and analytics developers to define data models for streaming input and delivering analytics output
  • Design strategies and detailed approaches to integrate Hadoop with existing applications, including but not limited to Oracle, DB2 and Enterprise Data Warehouse and Data Mart applications (a minimal Sqoop import sketch follows this list)
  • Helping Northern Trust's internal partners develop strategies that maximize the value of their data
  • Help establish thought leadership in the big data space by contributing internal papers and technical commentary to the user community
  • Dealing with large data sets and distributed computing
  • Working in the data warehousing and Business Intelligence systems
  • Previous experience with RDBMS platforms, with Oracle 11.x preferred
  • Hands-on administration-level experience with the Hadoop stack (e.g. MapReduce, Sqoop, Pig, Hive, HBase, Flume), Spark and Kafka is a requirement
  • In-depth understanding of encryption methodologies and products (CDH Native Encryption, HPE) and security is a requirement
  • Deep experience in working on large linux clusters
  • Hands-on experience with ETL (Extract-Transform-Load) tools (e.g Informatica, DataStage)
  • Hands-on experience with BI tools and reporting software (e.g. Microstrategy, Cognos)
  • Hands-on experience with "productionalizing" Hadoop applications (e.g. administration, configuration management, monitoring, debugging, and performance tuning)
  • Knowledge of cloud computing infrastructure (e.g. Amazon Web Services EC2, Elastic MapReduce) and considerations for scalable, distributed systems
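
For the integration bullet above (bringing Oracle or DB2 data into Hadoop), Sqoop is one common route. The following is a minimal sketch only: the JDBC URL, credentials, table names and split column are hypothetical placeholders, not details from the role.

    #!/usr/bin/env python3
    """Minimal sketch: land an Oracle table in Hive with a parallel Sqoop import."""
    import subprocess

    JDBC_URL = "jdbc:oracle:thin:@//oracle-host:1521/ORCLPDB1"   # placeholder
    USERNAME = "etl_user"                                        # placeholder

    def import_table(source_table, hive_db, hive_table, split_column):
        """Pull one table from Oracle and overwrite the matching Hive table."""
        subprocess.run(
            [
                "sqoop", "import",
                "--connect", JDBC_URL,
                "--username", USERNAME,
                "--password-file", "/user/etl_user/.oracle.password",  # secret kept in HDFS, not argv
                "--table", source_table,
                "--split-by", split_column,    # column used to parallelize the mappers
                "--num-mappers", "4",
                "--hive-import",
                "--hive-database", hive_db,
                "--hive-table", hive_table,
                "--hive-overwrite",
            ],
            check=True,
        )

    if __name__ == "__main__":
        import_table("ACCOUNTS", "staging", "accounts", "ACCOUNT_ID")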
48

Hadoop / Spark Full Stack Developer Resume Examples & Samples

  • Interact with clients both within JPMIS and across the firm (including counterparts in Retail & Wholesale Banking, Risk, Digital/Mobile)
  • Automate and streamline existing processes, procedures, and toolsets
  • Experience with JavaScript web application frameworks (e.g., Angular, jQuery, Backbone or Ember)
  • 2-3 years of solid system development experience leveraging a variety of the following technologies
49

Senior Manager Hadoop / Spark Platform Resume Examples & Samples

  • Has led engineering teams through several successful releases of software products of high technical complexity
  • A genuine desire to develop the best Spark Infrastructure in the industry today
  • Excellent understanding of computer systems
  • Expert people and project management skills
  • Proven strong customer focus
  • Past experience as an individual C++ or Java developer
50

Hadoop Application Support / Data Analyst Resume Examples & Samples

  • Lead technical troubleshooting efforts and perform analysis to identify workarounds and determine root causes of production issues
  • Analyze PL/SQL/Java/Pig code to identify logic errors and potential optimizations
  • Develop and execute queries against large datasets to support impact analysis, reconciliation, and data validation
  • Create and deploy scripts to automate tasks to evaluate and manipulate file data
  • Establish relationships with business partners, understand business processes, and promote business unit interests
  • Analyze reported production issues within the context of business requirements and design specifications to determine the appropriate remediation path
  • Assess potential impacts to business operations and negotiate potential solutions and associated priorities with stakeholder groups
  • Author executive summaries to explain issues and implications to senior leaders
  • Facilitate regular meetings with cross functional stakeholders to review status and next steps for outstanding issues
  • Install & configure software updates and deploy application code releases to Production & Non-Production environments
  • Proactively identify opportunities to streamline integrated job workflows, implement automation and monitoring solutions
  • Contribute subject matter expertise to IT projects to support the design and delivery of quality solutions
  • Evaluate solution designs to proactively eliminate potential production problems and contribute supportability requirements
  • Participate in standard processes for Access, Incident, Problem, and Change Management
51

Technology Project Manager / Hadoop Resume Examples & Samples

  • Leading and delivering the portfolio to drive shareholder value and client and employee experience
  • Delivering end-to-end business solutions that meet the business objectives
  • Establishing and managing the discipline and practice for Project and Portfolio Delivery
  • Overall Technology Project Manager for Technology project
  • Manage Application Development projects through all phases of the PDLC / SDLC
  • Project resourcing (Define project team)
  • Vendor management & negotiation
  • Overall project plan development / milestone tracking
  • Weekly status reporting and timely issue escalation
  • Working with numerous internal technology partners to coordinate plans & track dependencies / milestones
  • Work directly and collaboratively with business leadership
  • Support a work environment that promotes customer service, quality, innovation and teamwork
  • Financial management & tracking including regular financial forecasting
  • Report on program progress, technology spend & forecast, risks and issues
  • 7+ years' experience in project management of Application Development projects
  • Working knowledge of and experience with data-related projects, preferably Hadoop and Big Data projects
  • Have practiced Agile/Rapid/Iterative methodology
  • Release management and application production support experience will be valuable
  • The successful candidate will have a proven track record for success having demonstrated the ability to take independent action to achieve results
  • 7+ years financial industry experience
  • Working experiences in delivering Data related projects
  • Expert knowledge of software development life cycle and TD project development life cycle
  • Advanced understanding of Test Strategy development, Test Planning, Defect Reporting and Test Case Design Techniques for initiatives with high complexity
  • Strong relationship management skills, with demonstrated success with business and technology teams
  • Ability to proactively manage risk and escalate issues quickly
  • Ability to work closely with multiple internal technology teams (EETS TS, CBAW TS, NACCMS TS, ITS etc.)
  • Demonstrated excellence in vendor management
  • Excellent communication skills both written and verbal
  • Business-focused, and delivery and solution-oriented
  • Proactive, self-motivated and change-oriented and a desire to drive the team, technology & processes to deliver business value
  • Ability to work in a complex, dynamic environment
  • Excellent judgment skills and the ability to manage conflicting priorities
  • Confident presentation and facilitation skills and strong interpersonal and leadership skills to facilitate working with technology and business partners
  • Analytics skills
  • Strong change management skills
  • Organization and time management
  • Relationship management
  • Negotiation and facilitation
  • Communication both oral and written
  • Ability to operate under strict timelines
  • Must work effectively in a team environment and be able to interact with all levels of personnel from various functional areas
52

Senior Software Engineer, Hadoop Resume Examples & Samples

  • Selection and integration of Big Data tools and frameworks required to provide requested capabilities
  • Implementing ETL process
  • Proficiency with Hadoop v2, MapReduce, HDFS
  • Experience with building stream-processing systems, using solutions such as Storm or Spark-Streaming
  • Experience with Java, Apache Spark, or Apache Storm
  • Knowledge of various ETL techniques and frameworks, such as Flume
  • Provide operational support as needed
  • Experience with Cloudera/MapR/Hortonworks desired
  • 5 to 7 years of solid development experience working in Java/Scala
  • Bachelor's Degree in Computer Science or a related field, or equivalent experience
53

Hadoop Technical Sales Specialist Resume Examples & Samples

  • 1 year experience in hands-on Hadoop technology
  • 1 year experience in using core enabling technologies such as NoSQL, Data Warehousing, Extract Transform Load (ETL), Java/Scala/R/Python development
  • 2 years experience in hands-on Hadoop technology
  • 2 years experience in using core enabling technologies such as NoSQL, Data Warehousing, Extract Transform Load (ETL), Java/Scala/R/Python development
54

SQL / Hadoop / Big Data Engineer Resume Examples & Samples

  • Understanding of how to collect the logs produced by the transaction processing system and loading them on to a Hadoop cluster
  • Understanding of the format of the logs
  • Extracting data from relational database tables
  • Creating and managing Hadoop Oozie jobs
  • Organizing and displaying the data using Hive query language (a minimal HiveQL sketch follows this list)
  • Gathering requirements for reports and generating them using a tool like Tableau
  • Need 2+ years of Hadoop experience including using languages and tools such as: Hive, Flume, Sqoop, and Oozie
  • Certification on Big Data technologies, especially Hadoop
  • Flume
  • Oozie
  • Experience in data visualization tools such as Tableau
  • Experience with relational database schemas in SQL Server and Oracle
  • Good communication skills
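
To make the Hive bullet above concrete, the sketch below runs a HiveQL aggregation through beeline and prints the result. It is illustrative only: the HiveServer2 URL, table and columns are hypothetical placeholders, and any authentication options required by the cluster would need to be added.

    #!/usr/bin/env python3
    """Minimal sketch: organize raw transaction logs into a daily summary with HiveQL."""
    import subprocess

    BEELINE_URL = "jdbc:hive2://hiveserver2-host:10000/default"   # placeholder

    # Hypothetical table and columns: summarize transactions per day and channel.
    QUERY = """
    SELECT to_date(event_time) AS event_date,
           channel,
           COUNT(*)            AS txn_count,
           SUM(amount)         AS total_amount
    FROM   txn_logs
    GROUP  BY to_date(event_time), channel
    ORDER  BY event_date, channel
    """

    def main():
        result = subprocess.run(
            ["beeline", "-u", BEELINE_URL, "--outputformat=tsv2", "-e", QUERY],
            check=True, capture_output=True, text=True,
        )
        print(result.stdout)

    if __name__ == "__main__":
        main()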
55

Hadoop Support Analyst Resume Examples & Samples

  • 3-5 years overall experience in Linux systems
  • 2 years developing or supporting applications in a Hadoop environment
  • Expert-level knowledge of Spark, HBase, Kafka and services interoperability
56

Hadoop Support Lead - Hyderabad Resume Examples & Samples

  • Work with big data developers and developers designing scalable supportable infrastructure
  • Assist with developing and maintaining the system run books
  • 10+ years of IT experience
  • Well versed in managing distributions of Hadoop (Hortonworks, Cloudera etc.)
57

Hadoop Resume Examples & Samples

  • Optimize and tune the Hadoop environments to meet performance requirements
  • Install and configure monitoring tools
  • BS Degree in Computer Science/Engineering
  • Well versed in installing & managing distributions of Hadoop (Hortonworks, Cloudera etc.)
  • Experience in supporting Java applications
58

Hadoop Applications Support Lead-VP Resume Examples & Samples

  • Mentor, develop and train junior staff members as needed
  • Provide off hours support on a rotational basis
  • 5-7 years overall experience in Linux systems
  • 3+ years developing or supporting applications in a Hadoop environment
  • Advanced experience with Hive, Pig, Sqoop and Flume
  • Knowledge in performance troubleshooting and tuning Hadoop Clusters
  • Excellent customer service attitude, communication skills (written and verbal), and interpersonal skills
59

Module / Tech Lead Hadoop, Big Data Resume Examples & Samples

  • Reporting to the Program manager on project/task progress as needed. Identify risks & issues
  • Participate in all project planning, progress & development meetings with the team & global managers
  • Mandatory to have strong SQL skills
  • Strong experience in any ETL and BI tools
60

Principal Hadoop Performance Architect Resume Examples & Samples

  • Define and implement performance benchmarks that span the spectrum of big-data and storage technologies
  • The role requires a solid understanding of the performance characteristics and concepts of various big data technologies such as NoSQL, Hadoop, Cassandra and Druid, and the ability to set up various combinations of layers in the stack to run performance tests and create benchmarks
  • The candidate will be exposed to a broad variety of emerging big data and virtualization technologies, and must be a quick learner to work in a fast-paced environment
  • Open source contributions: you may be altering code maintained as an open source project
  • BS, MS in Computer Science or equivalent
  • Experience with performance, scalability, load testing and profiling
  • A good understanding of I/O, storage, data modeling and SQL engines, and demonstrated ability to optimize I/O performance
  • Experience in configuration and tuning of Hadoop products (e.g. administration, configuration management, monitoring, debugging, and performance tuning)
  • Experience with large data sets and understanding of I/O access patterns
  • Familiarity with industry standard Hadoop benchmarks
  • Strong familiarity with Unix/Linux
  • Knowledge of NoSQL platforms (e.g. key-value stores)
  • Hands-on experience with open source software platforms and languages (e.g. Java, Python, C/C++)
  • Knowledge of cloud computing infrastructure (e.g. Amazon Web Services EC2, Elastic MapReduce)
  • Self-starter, fast learner, and the ability to work in a fast-paced environment
61

Hadoop Software Engineer Resume Examples & Samples

  • Updating existing UI design using Model-View-Controller architectural patterns
  • Participating in detailed object-oriented analysis and design
  • Developing code in accordance with the design
  • 7-10+ years of recent experience with object-oriented programming
  • 7-10+ years of recent experience developing and implementing user interface designs using HTML5, AngularJS, Bootstrap, jQuery, Dojo, MVC
  • 5+ years of recent experience working on a team with 4 or more developers
  • 5+ years of recent experience with Hadoop software stack, Accumulo, and Solr
  • Extensive understanding of HTML and CSS
  • Recent experience writing unit tests
  • Recent experience with Subversion, Bugzilla, Maven, Git
62

Principal Hadoop SME Resume Examples & Samples

  • Translate data services and/or analytics needs into solutions that leverage the Analytics (Hadoop platform) and applicable design patterns
  • Define technical scope and objectives through research and participation in requirements gathering and definition of processes
  • Design, review, and lead engineers in delivering optimized data transformation processes and complex solutions in the Hadoop ecosystem
  • Adhere to all applicable Enterprise Data Platform development policies, procedures and standards
  • Provide educational/mentoring assistance to project teams and others
  • Collaborate closely with Platform and Engineering teams and assist engineers, when required, to ensure timely delivery of project deliverables
  • Rapid response and cross-functional work to deliver appropriate resolution of technical, procedural, and operational issues
  • Work with the team in an Agile/Kanban/Dev-Ops environment to ensure quality products are delivered
  • Responsible for the technical design of Analytics systems including ETL, data transformations & BI services
  • Apply architecture and engineering concepts to design solutions that adhere to organization standards and cater for re-usability, scalability, maintainability and security
  • Assess tool implementations and architectures in the analytics, DW & BI space, reviewing for gaps against business needs and industry best practices
  • Contribute to business process design and re-engineering impacted areas
  • Provide oversight to Analytics and data warehousing activities in the team to ensure alignment of priorities, coordination of inter-related projects and ensuring success stories are leveraged
  • Effectively communicate analyses, recommendations, status, and results to both business & technology leads
  • Lead proof of concepts to create break through solutions, performing exploratory and targeted data analyses to drive iterative learning
  • Establish framework for automation of data ingestion, processing & security that caters to the Bank’s standards and project requirements
  • Establish best practices and guidelines to be followed by tech teams working on Hadoop platform
  • Provide on-time solutions and design for initiatives and projects on Hadoop platform
63

Hadoop / Big Data Architect Resume Examples & Samples

  • Work directly with application and development teams to completely understand all requirements for project and ad hoc requests and provide accurate estimates
  • Participate in a team of technical subject matter experts that provides 24x7 operational support of mission-critical Big Data environments for highly-available and contingent platforms
  • Create and maintain documentation and define department best practices in support of proper change management, security and operational requirements
  • Investigate and understand new technologies and additional capabilities of current platforms that support MasterCard’s vision of Big Data, leading their adoption when appropriate
  • Educate peers and cross-train teammates with Big Data concepts and application of technology to serve business needs
  • Interact with managed services resources and direct maintenance and operational support activities per MasterCard standards and industry best-practices
64

Operations Engineer, Hadoop Resume Examples & Samples

  • Perform cluster and system performance tuning
  • Work with cluster subscribers to ensure efficient resource usage in the cluster and alleviate multi-tenancy concerns
  • Coordinate and monitor cluster upgrade needs
  • Monitor cluster health and build pro-active tools to look for anomalous behavior
  • 3+ years Linux experience
  • 3+ years of distributed systems experience
  • Experience with C, Java and Bash
  • Experience with large scale Hadoop Deployments (more than 40 nodes, more than 3 clusters)
  • 5+ years of academic or industry experience (preferably at a technology company)
65

Hadoop Data Integration Solutions Architect Resume Examples & Samples

  • 7 or more years related experience
  • Multi-platform experience and expert-level experience with business and technical applications, or any combination of education and experience which would provide an equivalent background
  • Data integration with Hadoop experience required
  • Additional Technical Requirements: Hadoop Architecture and HDFS (Cloudera), Apache Sqoop, Impala and Hive, Programming in Spark, Unix
  • Netezza/Teradata experience a plus
66

Hadoop Systems Engineer, Senior Resume Examples & Samples

  • Designs, documents, implements, monitors, and supports Unix, Linux, Sun Solaris, HP-UX, SUSE and SAN systems, services, and applications
  • Designs, documents, plans, implements, monitors, and supports enterprise storage, backup, and disaster recovery solutions
  • Plans, configures, manages, monitors, and supports SAN infrastructure including EMC CX and DMX SAN, IBM DS SAN, and Brocade SAN switches
  • Plans, configures, manages, monitors, and supports SAN fabric design, zoning, optimization, and fault-tolerance
  • Responds to alerts or emergency issues within 15 minutes during normal business hours and when on-call
  • Reviews new technologies against business requirements to identify opportunities for process, performance, reliability, or data management, storage or backup improvements and to identify opportunities to reduce costs
  • Bachelor's Degree from an accredited college or university in information systems or information technology or related field or equivalent experience
  • Expert knowledge in Sun Solaris and/or SUSE or Red Hat Linux systems design, documentation, implementation, monitoring, and support as demonstrated by no less than 5 years experience working in Information Technology as a Unix / Linux architect, engineer, or administrator with responsibilities to include the following
  • Type 25 words a minute for data and configuration entry
  • Strong knowledge of Microsoft Office including Word, Excel, PowerPoint, Access, Visio, Project and Outlook
67

Hadoop Big Data Developer Resume Examples & Samples

  • Create, Configure, Implement, Optimize, Document, and/or Maintain
  • Bachelor's degree or equivalent professional experience
  • 3 or more years of application development experience
  • 2 or more years of experience with Data Warehousing and ETL technology
  • 1 or more years of development experience on Big Data / Hadoop projects
  • Ability to obtain favorable adjudication following submission of Department of Defense e-QIP Form SF-86 (applying for and receiving NAC clearance immediately after hiring is a mandatory requirement)
  • Java proficiency
  • SQL Server development experience
  • IBM CDC experience
  • Hive experience
  • Unix shell scripting experience
68

Senior Systems Engineer VM / Cloudera Hadoop Resume Examples & Samples

  • Provide System Administration and management of Cloudera Hadoop clusters in production and test environments
  • Solid understanding of YARN, MapReduce, Spark and HDFS
  • Extensive knowledge of Cloudera Manager features such as configuration management, resource management, reports, alerts and logging
  • Experience in investigating, analyzing, diagnosing, tuning and resolving Cloudera Hadoop issues
  • Employ “best practices” for deploying and maintaining Cloudera Hadoop ecosystem
  • OS - Red Hat Enterprise Linux Version 7+
  • System Management Platform - Red Hat Network Satellite 6+
  • Configuration Management System - Puppet 4+
  • Authentication and Authorization: Centrify, LDAP, Kerberos and Active Directory
  • Scripting using Python
  • Analytic Computing: SAS, R, Stata, Matlab
  • Hadoop: Cloudera
  • Virtualization: VMware, KVM, Puppet or OpenStack
69

Hadoop Lead Developer Resume Examples & Samples

  • Writing UNIX shell scripts to load the data from different interfaces to Hadoop (a minimal sketch of the same idea follows this list)
  • Minimum 8 years of IT development experience; experience with a Big Data platform is a must
  • Proficiency with Java, Python, Scala, HBase, Hive, MapReduce, ETL, Kafka, MongoDB, PostgreSQL, visualization technologies, etc
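
The first bullet above mentions loading data from different interfaces into Hadoop with UNIX shell scripts; the same idea is sketched here in Python around the standard hdfs dfs commands. The local directory, HDFS base path and file pattern are hypothetical placeholders.

    #!/usr/bin/env python3
    """Minimal sketch: push feed files from a local landing directory into HDFS."""
    import pathlib
    import subprocess
    from datetime import date

    LANDING_DIR = pathlib.Path("/data/landing/feeds")   # placeholder local directory
    HDFS_BASE = "/raw/feeds"                            # placeholder HDFS base path

    def load_file(local_file, hdfs_dir):
        """Create the target directory (if needed) and copy one file into it."""
        subprocess.run(["hdfs", "dfs", "-mkdir", "-p", hdfs_dir], check=True)
        subprocess.run(["hdfs", "dfs", "-put", "-f", str(local_file), hdfs_dir], check=True)

    def main():
        target_dir = f"{HDFS_BASE}/load_date={date.today():%Y-%m-%d}"
        for local_file in sorted(LANDING_DIR.glob("*.csv")):
            load_file(local_file, target_dir)
            # Mark the file as processed so the next run skips it.
            local_file.rename(local_file.parent / (local_file.name + ".loaded"))

    if __name__ == "__main__":
        main()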
70

Hadoop Lead Resume Examples & Samples

  • Candidate should have a minimum of 8+ years' software development experience
  • Minimum 4 Years of experience on Big Data Platform
  • Understanding of automated QA needs related to big data
  • Proficiency with agile or lean development practices
71

Module / Lead Hadoop, Big Data Resume Examples & Samples

  • Interacting with Business analysts to understand the requirements behind BRDs/FRDs/SRs
  • Complete understanding of application code through code compilation, code walkthrough, execution flow, overall design
  • Local compilation, deployment and behaviour/Unit testing
  • Unit testing, Integration testing, UAT/SIT support
  • Good to have experience in NoSQL. Core Java is preferred; however, knowledge of any OOP language is required
  • Thorough knowledge and hands-on experience in a few of the following technologies: Hadoop, MapReduce framework, Hive, Sqoop, Pig, Hue, Unix, Java, Impala. Cloudera certification (CCDH) is an added advantage
72

Senior Bigdata Hadoop Lead Resume Examples & Samples

  • Build creative, high-performing, and scalable code to support thousands of users and billions of records using Apache Kafka, Spark & Storm
  • Work with Data Architects to create Big data models for the new solution and validate the business rules
  • Build data models that support Apache HBase, HDFS, Spark schema
  • Create resilient and enterprise level code that are scalable, secure, accurate and dependable
  • Technical design, architecture and development of Hadoop ecosystem related technologies
  • Assists in the operations, support, management and deployment of Hadoop environments
  • Provide expert level understanding of Hadoop technologies, integration and troubleshooting
  • Consult and advise a team on best tool selection, best practices and optimal processes
  • Ability to solve any ongoing issues with operating the cluster
  • Experience with building stream-processing systems, using solutions such as Storm or Spark-Streaming
  • Good knowledge of Big Data querying tools, such as Pig, Hive, and Impala
  • Experience with Spark
  • Experience with integration of data from multiple data sources
  • Experience with NoSQL databases, such as HBase, Cassandra, MongoDB
  • Experience with Big Data ML toolkits, such as Mahout, SparkML, or H2O
  • Experience with Cloudera/MapR/Hortonworks
  • Bachelor's Degree or higher
  • US Citizen or GC Holder
73

Hadoop Admin Resume Examples & Samples

  • Team Work and Collaboration - Collaborate and add value through participation in peer code reviews, provide comments and suggestions, work with cross functional teams to achieve goals. Work as technical interface with other teams to resolve issues related to interfacing functionalities
  • Quality and SLAs -Contribute to meet various SLAs and KPIs as applicable for account and Unit e.g. Responsiveness, Resolution, Software Quality SLAs. Ensure assigned tasks are completed on time and delivery timelines are met as per quality targets of the organization
  • Onboarding & Knowledge Sharing - Onboards new hires and trains them on processes, and shares knowledge with team members
  • 3 reasons why you should apply for this role!
  • You have high technical aptitude and competence and are looking to drive Amdocs customers forward through data analytics and machine learning
  • You desire an opportunity where you will employ critical thinking, learn new technologies on a daily basis, and get the opportunity to lead a team of Big Data experts
  • You are driven, enjoy technology and want to learn new technologies
  • Applicants must be currently authorized to work in the United States without employer sponsorship now or in the future
  • 5 years of experience as a software engineer or a software support engineer
  • Ability to learn new technologies quickly and keep up with constant changes in the Big Data environment
  • Ability to develop and deliver projects in the big data area using Pig, Map Reduce/Yarn
  • Hive technologies: 4+ years' experience
  • Fluent in at least one scripting language (Shell, Perl, Python, Java): 3+ years
  • Hadoop ecosystem experience (ZooKeeper/HBase/HDFS/MapReduce/Oozie/Flume): 4+ years
  • Strong knowledge of Unix/Linux systems: 3+ years
74

Hadoop Test Lead Resume Examples & Samples

  • Understand the high-level requirements through review of documents (e.g. Component Design Document / Requirements document)
  • Document the understanding as part of the reverse KT
  • Seek signoff from the client
  • Partner with the Business Analyst to provide suggestions on the requirements to drive clarity based on experience in earlier projects
  • Update/Review KT documents created by Test Analyst
  • Seek review of updated documents from relevant stakeholders
  • Understand the critical business flow in terms of resource load, volume of data etc
  • Conduct feasibility study to identify tools /methodologies /frameworks to meet the client's requirements
  • Design estimates based on the analysis of the requirements and inputs from the test analyst
  • Maintain knowledge management portals and create knowledge artefacts (e.g. collaterals, reusable assets) to drive knowledge management
  • Capture and document the business/ application levels requirements details in WIKI that can be used for induction of new members to the project teams
  • Conduct KT for new team members
  • Organize boot camp for new joinees
75

Hadoop System Specialist Resume Examples & Samples

  • Bachelor’s degree in Computer Science or related fields
  • A solid foundation in data center and cluster management
  • Strong background in Unix/Linux
  • Experience in programming with scripting languages such as Python, Perl, Bash and Ruby
  • Willingness to take responsibility, drive new developments, plus a high level of commitment, team-spirit, flexibility, and initiative
  • Experience in Cloudera and/or Ambari platforms
  • Experience in system administration and analysis
  • Experience with Chef for automation
  • Experience with Agile/SCRUM development methodology
76

Senior TSA - Hadoop Resume Examples & Samples

  • Creative thinker and inherently curious
  • Experience with Hadoop: Spark, Kafka, Pig, Hive, Sqoop, Flume, HBase, and MapReduce
  • Experience with Java and multithreaded application development
  • Previous experience with relational databases
  • Strong Linux experience, bash knowledge and scripting experience e.g. Python
  • Experience on large projects/programs with multiple applications with multiple interfaces and/or 3rd parties
77

Hadoop Solutions Architect Resume Examples & Samples

  • Work with business units globally to accelerate development of Hadoop and Spark based solutions upon the Experian Data Fabric
  • Design high-throughput and streaming data pipelines using Experian Lambda Architecture; internal bulk sources include server logs, file uploads, database logs, relational sources and CDC; streaming sources include web sockets, HTTP client callbacks, and Kafka events
  • Work with business and applications teams to identify policies relative to data privacy, confidentiality, archiving, retention, etc. and design solution components to operationalize those policies against the data
  • Design and build data provisioning components to deliver data for operational analytics as well as research purposes
  • Design ultra-low-latency microservices to serve data from the State Container within 10 milliseconds
  • Build data pipelines using HDFS, MapReduce, Spark, Storm, Kafka and emerging compute frameworks (a minimal Kafka-to-Spark sketch follows this list)
  • Design Parquet structures for Analytics Containers to serve business intelligence and analytics use cases
  • Design schema and streaming pipelines for real-time stores such as Cassandra and Hbase
  • 8+ years of hands-on experience as a senior developer/architect
  • 3+ years experience across multiple Hadoop / Spark technologies such as Hadoop, MapReduce, HDFS, Cassandra, HBase, Hive, Flume, Sqoop, Spark, Kafka, etc
  • 1+ year of experience working with native MapReduce and/or Spark
  • 5+ years Java development background
  • 5+ years data engineering/ETL development experience working with data at scale
  • Desired - Bachelor's degree in Computer Science, Engineering or related discipline
  • Desired - experience with metadata based or rules driven ETL layer
  • Desired - Experience with development of generic data ingestion and transformation framework
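
As an illustration of the pipeline bullet above, the sketch below reads events from Kafka with Spark Structured Streaming and lands them on HDFS as partitioned Parquet. It is a generic example under stated assumptions (the broker, topic, paths and trigger interval are placeholders) and does not describe any specific production architecture; the job must be submitted with the Spark Kafka connector package matching the Spark version in use.

    #!/usr/bin/env python3
    """Minimal sketch: Kafka -> Spark Structured Streaming -> Parquet on HDFS."""
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("kafka-to-parquet-sketch").getOrCreate()

    # Read the raw event stream from Kafka (broker and topic are placeholders).
    events = (
        spark.readStream
        .format("kafka")
        .option("kafka.bootstrap.servers", "broker1:9092")
        .option("subscribe", "web-events")
        .option("startingOffsets", "latest")
        .load()
    )

    # Kafka delivers the payload as binary; keep it as text plus ingestion metadata.
    parsed = events.select(
        F.col("value").cast("string").alias("payload"),
        F.col("timestamp").alias("event_ts"),
        F.to_date("timestamp").alias("event_date"),   # partition column
    )

    # Write micro-batches to partitioned Parquet with a checkpoint for recovery.
    query = (
        parsed.writeStream
        .format("parquet")
        .option("path", "hdfs:///data/events/parquet")
        .option("checkpointLocation", "hdfs:///checkpoints/web-events")
        .partitionBy("event_date")
        .trigger(processingTime="1 minute")
        .start()
    )

    query.awaitTermination()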
78

Hadoop Software Engineer, Lead Resume Examples & Samples

  • Write, edit, and debug computer programs for assigned projects
  • Transfer knowledge to other members of the application development team
  • When required, review new technologies to identify opportunities for improvements and reduce costs
  • Degree required, or equivalent work experience; 2 additional years of relevant work experience may be substituted in lieu of a degree
  • Required 8+ years of experience in the development, implementation, and maintenance of large-scale OLTP and DSS in a client-server environment
  • Preferred 5+ years of experience in both the use of structured and object oriented systems analysis, design, development tools, and techniques
  • Preferred 5+ years of experience in broad base of knowledge with various OS including all Microsoft OS, UNIX, Linux
  • Preferred 2+ years of experience in IT Healthcare
  • Hadoop Experience
79

Tableau Developer With Hadoop Experience Resume Examples & Samples

  • Design and develop reports and dashboards in Tableau using data in Hive (Hadoop) - e.g. using Hive ODBC
  • 6 years minimum experience with Tableau (expert level)
  • UI/UX experience providing dashboard presentations and working with clients on the best way to display information (visually appealing reports and dashboards including graphs/charts)
  • Tableau server configuration experience
  • Tableau security (e.g. SSO) experience
  • Experience migrating or translating reports from legacy systems like MSFT SSRS, direct SQL queries to Tableau
  • Demonstrable examples of past Tableau portfolio
  • Tableau server administration tools experience
  • Expert SQL query tuning for performance
  • Experience pulling data from Hadoop through Hive on Tableau
  • Experience working on Hadoop distributions as mentioned above
  • Very strong verbal and written communications
  • Experience with clinical testing reports and domain a plus
80

DBA - Hadoop / Big Data Resume Examples & Samples

  • High school degree and 2-year technical degree in computer programming or equivalent training required. Bachelor's degree preferred
  • Strong ability to work independently and manage one's time. Strong knowledge of computer software, such as SQL, Visual Basic, Oracle, etc
  • Direct programmers and analysts to make changes to existing databases and database management systems
  • Direct others in coding logical and physical database descriptions
  • Review project requests describing database user needs to estimate time and cost required to accomplish project
  • Review and approve database development and determine project scope and limitations
  • Approve, schedule, plan and supervise the installation and testing of new products and improvements to computer systems
  • Plan, coordinate, and implement security measures to safeguard information in computer files against accidental or unauthorized damage, modification or disclosure
  • Develop standards and guidelines to guide the use and acquisition of software and to protect vulnerable information
81

Hadoop Admin With Anaconda Resume Examples & Samples

  • 4+ years of experience in Hadoop administration
  • Experience with administering and installing Anaconda
  • Experience in Cloudera Hadoop Distribution
  • Understanding of Core Hadoop components – HDFS, Map Reduce
  • Experience in working with big data relational systems such as Hive, Impala, etc
  • Experience with programming frameworks such as Spark, Pig, etc
  • Experience with Python Scripting
  • Implementation experience in developing tools/scripts to monitor the Hadoop Cluster (a minimal monitoring sketch follows this list)
  • May involve program product installation, hardware/software configuration, performance tuning, capacity planning, and/or support
  • Recommend and coordinate server hardware/software upgrades/replacements
  • Experience managing and supporting Hadoop Production Cluster
  • Creates and validates backup and recovery processes from a hardware/software perspective
  • Java/Scala programming experience preferred
  • Hadoop Hive
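
For the cluster-monitoring bullet above, a small Python check is one way such a tool might start out. The sketch below parses the summary section of hdfs dfsadmin -report; the thresholds are arbitrary placeholders and the report format can vary slightly between Hadoop versions, so the regular expressions may need adjusting.

    #!/usr/bin/env python3
    """Minimal sketch: alert when HDFS usage or dead DataNodes cross a threshold."""
    import re
    import subprocess
    import sys

    USED_PCT_THRESHOLD = 80.0   # alert when cluster DFS usage exceeds this percentage
    MAX_DEAD_DATANODES = 0      # alert when any DataNode is reported dead

    def main():
        report = subprocess.run(
            ["hdfs", "dfsadmin", "-report"],
            check=True, capture_output=True, text=True,
        ).stdout

        # The first "DFS Used%" line is the cluster-wide summary.
        used_match = re.search(r"DFS Used%:\s*([\d.]+)%", report)
        dead_match = re.search(r"Dead datanodes.*?\((\d+)\)", report, re.IGNORECASE)
        used_pct = float(used_match.group(1)) if used_match else 0.0
        dead_nodes = int(dead_match.group(1)) if dead_match else 0

        problems = []
        if used_pct > USED_PCT_THRESHOLD:
            problems.append(f"DFS used {used_pct:.1f}% > {USED_PCT_THRESHOLD}%")
        if dead_nodes > MAX_DEAD_DATANODES:
            problems.append(f"{dead_nodes} dead DataNode(s)")

        if problems:
            print("CRITICAL: " + "; ".join(problems))
            sys.exit(2)   # non-zero exit so a scheduler or alerting hook can react
        print(f"OK: DFS used {used_pct:.1f}%, no dead DataNodes")

    if __name__ == "__main__":
        main()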
82

Junior Hadoop Dev / Ops Developer Resume Examples & Samples

  • Scale out the Big Data platform in a multi-cluster environment
  • Support the implementation of a manageable and scalable security solution
  • Enable the successful use of DevOps processes on the cluster
  • Assist with configuring the cluster to be ready for Data Science processes using Spark, Python, R and other tools
  • Provide 3rd Line support for the operational aspects of the Big Data platform
  • Personable, curious, intelligent, positive thinker, highly motivated, results driven, quick learner, great communicator
  • Good experience of using and administering modern Linux distros, preferably RHEL derivatives or related distros (CentOS/ROSA/Oracle Linux/Scientific Linux/Fedora)
  • Strong Infrastructure background
  • Strong understanding of Operating Systems (File systems, Processes, Networking)
  • Experience of using/administering distributed systems in some capacity. Preferably Hadoop and Cloudera distribution
  • Experience of Database administration (Postgresql, Mysql, Oracle)
  • Strong shell scripting or programming skills; Python strongly preferred, but Bash, Perl, Ruby or PowerShell are acceptable
  • Background in programming and good understanding of computer science
  • Keen interest in problem solving and debugging
  • Understanding of Basic SQL
  • Exposure to ETL processes
  • Develop re-usable code for platform management and data processing
  • Additionally, experience in any of the following is welcome: networking, Kerberos, LDAP, SSH, Java, computer security, regular expressions, compilation from source (Java, C/C++, make, Maven, Ant, etc.), web applications, Jenkins, Git, scheduling and automation
83

Strata Hadoop World Event Resume Examples & Samples

  • Large Scale Big Data Projects
  • Big Data – Data Lake
  • DataMarts, MDM, Tableau rollout
  • Data Integration Hub
  • Industry Leading Data Analytics Technologies
  • Oracle, SQL Server, GreenPlum
  • Cognos, SAS, Tableau, Informatica
  • Hadoop, IBM MDM, Attunity
84

Need Data Engineer With Informatica / Hadoop Resume Examples & Samples

  • Bachelor’s degree in Computer Science, Information Services, Mathematics, Statistics, or other applicable area from a four-year university or equivalent industry experience
  • 5+ years maintaining and developing software in a large scale data or web application environment like Oracle and Netezza
  • 2+ years in business intelligence including big data and a more advanced understanding of EDW concepts of warehousing, data movement, and data transformation (Informatica BDE experience is a plus)
  • 2-4 years developing for Hadoop environment (including data transformations)
  • 1+ years in Agile methodology
  • Expert knowledge in SQL
  • Good knowledge of object oriented analysis, design, and programming in Scala, Java or Python
  • Solid understanding of software engineering basics including data and architecture in areas like Site Activity
  • High performer with the ability to demonstrate their higher-level technical experience on a daily basis
  • Cloudera Hadoop
  • Pig
  • MicroStrategy SDK and report development
  • Redshift
  • Cloud technologies
85

Cloud / Hadoop Devops Intern Resume Examples & Samples

  • Must be a junior in an undergrad program or higher, with a minimum 3.2 GPA (will be verified)
  • Preferred course of study: Computer Science or Information Technology
  • Linux/Unix technical operation experience required
  • Shell Scripting and Troubleshooting ability required
  • Network(DC) or Cloud(AWS) knowledge, Python, Node/JS, Java or other coding skills, MySQL or other RDBMS, Hadoop or other NoSQL experience desired
  • Desired skills: Ansible, Git, Jenkins, AWS, Artifactory, MySQL/RDS, HAProxy, DNS, LDAP, Apache/Nginx, Hadoop, Hive, Spark, Hbase, Cassandra, Sqoop, Hortonworks/Ambari, Docker, Cloudera, Spinnaker, Hazelcast, Redis, LDAP
  • Ability to be self-directed, learn and work well in a fast-paced environment is essential
  • Must have strong communication skills and ability to work effectively in a collaborative environment
86

Hadoop Big Data Developer Resume Examples & Samples

  • Bachelor's degree in Computer Science, Engineering or other technical field
  • 3 or more years of application development experience
  • 6+ months of Talend development experience
  • 6+ months of experience with UnitedHealth Group systems and processes
  • Experience working with Data Warehousing and ETL technology
87

Test Specialist Hadoop Testing Resume Examples & Samples

  • 2) Big Data knowledge is an advantage
  • 3) ETL knowledge is a plus
  • 4) Automation skills are handy
88

Abinitio / Hadoop Programming Analyst Resume Examples & Samples

  • Actively research new DW technologies and deploy solutions to meet the growing need for analytics
  • Pro-active and diligent in identifying and communicating scope, design, development issues and recommending potential solutions
  • Able to coach and mentor other team members
  • Bachelor's Degree or equivalent professional work experience
  • Unix developer with 3+ years of work experience
  • Experience using ETL tools in a Data Warehouse environment preferable
  • Deep understanding of Relational Databases, Hadoop and AbInitio
  • Ability to effectively work with business partners
  • Ability to think independently and work with various teams on projects
  • Understanding of Enterprise Data warehouse data models and dimensional modeling concepts
  • Ability to self-educate and seek new technologies to support business goals
89

Test Specialist Hadoop Testing Resume Examples & Samples

  • 5+ years of strong testing experience (functional, integration, E2E, regression testing and UAT) in enterprise applications with multiple technologies (Java or ETL based)
  • 3 to 5 years of proven experience with designing test scenarios/test cases for complex programs/projects (ETL/Java based)
  • Experience with advanced SQL query writing and PL/SQL scripts
  • Have knowledge in Test Automation testing
  • 1+ years of proven test experience in a range of big data architectures and frameworks including Hadoop ecosystem, Java, MapReduce, Pig, Hive, Hue, Oozie, Hbase, Ranger, Attunity etc
  • Experience/Knowledge in Analytics Tools
  • 5 to 1 year of any application development experience (Preferably Java, ETL,Mainframe, LINUX/UNIX etc)
  • Act as self-starter with the ability to work on complex projects and analyses business requirements independently
  • Design test scenarios/test cases for positive, negative, alternative flows and ensure 100% requirement coverage
  • Identifying and documenting testing issues and participates in defect remediation
  • Following test case standards and guidelines in ALM
  • Expertise in Testing Process and in ALM Tool
90

Hadoop Cluster Administrator Resume Examples & Samples

  • Learn about all the technologies involved in the project, such as Java, shell scripting, Linux administration, Hadoop, and others
  • Keep proposing new and better ways to solve problems from technology view or process view
  • Generate a confidence with the team members by a constant communication with them
  • Create additional value to his/her tasks, being proactive
  • Keep the clusters for every country working at all times
  • Attend all scheduled meetings and participate actively in them
  • Completed College Degree
  • 6+ months of related work experience
  • Have knowledge of any of the following technologies: Java, shell scripting, Linux administration, computer networks, distributed systems, Hadoop
  • Effective team player
  • Strong interpersonal, communication and customer service skills
  • Good organizational skills with the ability to adapt and adjust to changing priorities
  • Positive attitude and the ability to work well with others
  • Passionate about technology
  • Pursuer of knowledge
  • Value creation
  • Creativity and innovation
  • Analytical skills to solve problems
91

Hadoop / Cloudera Developer Resume Examples & Samples

  • Experience with Hadoop development and implementation
  • Experience with Hadoop data loading utilities from disparate data sets
  • Experience analyzing and querying Hadoop data stores
  • Experience with Hadoop security and data privacy
  • Experience in logical and physical data store design
  • Experience with .Net software development
  • Preferred experience with Cloudera distribution
  • Preferred experience with Informatica Power Center
  • Administers existing databases
  • Analyzes, designs and creates new databases
  • Performs data modeling and database optimization
  • Understands and implements schemas
  • Interprets and writes complex SQL queries and stored procedures
  • Proactively monitors systems for optimum performance and capacity constraints
  • Applies development skills to complete tasks
  • Responsible for database schema design, extensive T-SQL development, integration testing and other projects that may be necessary to help the team achieve their goals
  • Performs support, development and administrative activities as required, including periodic pager duty to support department 7x24x365 applications
  • Three years MS SQL Server database experience required
  • Two years C# and/or asp.NET experience preferred
  • Programming experience with emphasis on designing complex T-SQL that scales well and is optimized for use in a high-volume environment preferred
  • Experience in the physical and logical design of database architecture preferred
  • Diverse technology background required
  • Excellent verbal and written communication skills, strong organizational and analytical skills required
92

Hadoop Software Developer Resume Examples & Samples

  • Identifies and reports problems in new and existing software; recreates reported software problems to facilitate solutions; this includes validating the fix for the software problem
  • Must be able to obtain and maintain clearance level by our client. (TS/SCI with POLY+)
  • Work Experience as a Software Developer using Hadoop technologies and interest to continue coding and developing
  • Strong understanding of data modeling
  • Object Oriented Programming, Multi-threaded programming experience, experience using continuous integration
  • Experience supporting Agile software development
93

Hadoop / Cloudera Software Developer, Lead Resume Examples & Samples

  • Writes, modifies, and debugs software largely focused in the back-end and data layer
  • Designs and maintains Big Data analytical algorithms to operate on petabytes of data
  • Typically requires bachelor’s degree or equivalent, and seven to nine years of related experience
  • The clearance level required is dependent on the type of clearance supported by our client
  • Experience with Hadoop (or similar: Cloudera, Hortonworks, etc.)
  • Strongly desired: experience with the Lucene text search engine
  • Strongly Desired - Object Oriented Programming, Multi-threaded programming experience, experience using continuous integration
  • General Hbase knowledge
94

Linux Engineer With Hadoop Resume Examples & Samples

  • SME for Red Hat Linux servers and related systems, i.e. Satellite
  • Be responsible for the updating of both templates and deployed Linux servers, both physical and virtual
  • Act as an escalation point for support personnel with advanced Linux and Hadoop troubleshooting
  • Lead efforts to continue building a multitenant Hadoop as a service platform
  • Provide input on processes to streamline project delivery and customer support
  • 5+ years of Linux Server OS experience
  • 5+ years of Virtualized Infrastructure support
  • 5+ years of experience in large enterprise environments
  • 3+ years of RHEL Satellite administration
  • 3+ years of VMware experience
  • 3+ years of Server Hardware support
  • 3+ years of experience providing end user support
  • 2+ years of Hadoop administration experience
  • BA/BS in a CS related field
  • Experience with any analytics platforms: R, SAS, Hadoop, Spark
  • Experience with Database platforms: MS SQL, MySql, Oracle, HDFS, Cassandra
95

Hadoop L Operations Specialist Resume Examples & Samples

  • Manage several Hadoop clusters in development and production environment
  • Provide a secure environment, managing and mitigating risks
  • Resolve Incidents impacting hosts or environment
  • Investigate root causes of incidents via problem management
  • Work with engineering groups to deliver solutions for business requirement
  • Work on improvements to incident and problem management procedures and tools
  • Strong experience in Unix Shell
  • Experience with the Cloudera and/or MapR Hadoop distribution
  • Proficiency in one or more of Java, Python, SQL
  • Experience with ETL (Extract, Transform and Load) and Business Intelligence tools
96

Software Engineer, Hadoop Resume Examples & Samples

  • Write robust, high-performance, maintainable code as leading member of the data platform team
  • Be a part of a team working on a high volume, highly available data platform that is critical to the success of the business
  • Participate in architecture discussions, influence the roadmap, and take ownership and responsibility over projects
  • Build highly available, scalable, low-latency systems designed for high-traffic infrastructure
  • Conduct performance testing and monitoring of production systems
  • Review work and assure adherence to best standards and practices
  • BS in Computer Science or related technical discipline or equivalent practical work experience
  • Experience with databases and data-driven applications
  • Experience with replication, backup and restore and disaster recovery
  • Solid computer science fundamentals such as complexity analysis, data structures and software design
  • You are experienced with architecting, developing and extending large and complex systems
  • You are passionate about shipping quality high-performance code
  • You are a problem solver, a fixer, and a creative technologist
  • You are a great team player and a great communicator
  • Working knowledge of Unix/Linux
  • MS or PhD in Computer Science
  • Experience with specific “Big Data” technologies including Hadoop, HBase, Storm, Kafka
  • Experience with PostgreSQL, pgplsql and Python
97

Java / Hadoop Application Developer Resume Examples & Samples

  • You will work with the Business Requirements providers to produce detailed technical design specifications for new/changed system functionality
  • Develop technical approach for system functionality and document approach, impacts, and pseudo-code
  • Work with the larger development team to ensure that they understand the specifications
  • Work with Business Requirements providers to translate business scenarios into relevant unit/system/integration/interface testing scenarios
  • Communicate technical integration decisions, issues and plans with project team
  • Document performance requirements, development standards and security requirements, develop flow charts, functional diagrams and descriptions to communicate technical design specifications
  • Utilize multiple programming languages and software technologies to build/modify applications that are fit for use and admissible for maintenance/upgrades
  • Resolve issues
  • 5+ years of Application development – Java/Unix, business analysis & project management experience preferably in Risk & Finance domain; 3+ years of which should be in the financial services industry
  • Experience in Big Data (Hadoop) preferred
98

Hadoop Delivery Engineer Resume Examples & Samples

  • BS or higher degree in Computer Science or related discipline
  • 1+ years work experience
  • Hands-on experience with Hadoop operation
  • Good understanding of the Hadoop stack and tools (Hive, HDFS, etc.)
  • Proficiency in a programming language such as Java, C/C++, or Python
  • Experience with Linux OS, shell scripting
  • Experience with Scala and/or Spark is a plus
  • Design and build data processing pipelines for structured and unstructured data using tools and frameworks in the Hadoop ecosystem
  • Develop applications that are scalable to handle events from millions of STBs
  • Participate in meetings with business (account/product management, data scientists) to obtain new requirements
99

Software Engineer, Hadoop Resume Examples & Samples

  • Provide tuning support and analysis for client applications
  • Provide framework support for client teams with Spark, HBase, Hive
  • Drive best practices for developing application against Hadoop Clusters
  • Provide application tuning for Spark applications (see the sketch after this list)
  • BS in Computer Science or related technical discipline or equivalent practical experience
  • 3+ years experience in software engineering in a big data environment
  • Real world Hadoop experience
  • Real world experience with NoSQL solutions
  • Experience with version control systems such as Git or Perforce
  • Experience in the Ad Industry space
  • Experience with Distributed Systems outside of Hadoop
  • 5+ years of academic or industry experience (preferably at a tech company)
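
As referenced in the Spark tuning bullet above, here is a minimal PySpark sketch of the kind of application-level tuning such a role involves. The application name, paths, column names and every numeric setting are illustrative assumptions, not values from any specific cluster.

    from pyspark.sql import SparkSession

    # Illustrative tuning values only; real settings depend on cluster size and workload.
    spark = (
        SparkSession.builder
        .appName("tuning-sketch")                        # hypothetical app name
        .config("spark.executor.memory", "8g")           # executor heap size
        .config("spark.executor.cores", "4")             # cores per executor
        .config("spark.sql.shuffle.partitions", "400")   # shuffle parallelism
        .config("spark.serializer",
                "org.apache.spark.serializer.KryoSerializer")  # faster serialization
        .getOrCreate()
    )

    # Repartition on the aggregation key so the shuffle is spread evenly,
    # and cache a DataFrame that several downstream queries reuse.
    events = spark.read.parquet("/data/events")          # hypothetical HDFS path
    events = events.repartition(400, "account_id").cache()

    daily_counts = events.groupBy("account_id", "event_date").count()
    daily_counts.write.mode("overwrite").parquet("/data/daily_counts")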
100

Hadoop Data Engineering Lead-intelligent Solutions Resume Examples & Samples

  • Lead build of Hadoop Data Engineering Products
  • Support product through all development phases until the product is handed over to Production Operations
  • Technical documentation, architecture diagram, data flow diagram, server configuration diagram creation
  • Evaluate application risk analysis and complete all necessary documentation
  • Work with big data developers designing scalable supportable infrastructure
  • Perform PoC’s to evaluate new products and enhancements to existing products deployed
  • 7+ years of IT experience
  • 2+ years experience in Hadoop preferred
  • 4+ years of Java experience preferred
  • 4+ years of experience with ETL tools like Informatica, DataStage, Ab Initio, NiFi, Paxata, Talend required
  • Experience with Data modeling, Data warehouse design, development
  • Experience with Data Wrangling tools is preferred
  • SQL tuning and performance optimization
  • Exposure to Sentry, Cloudera Manager, Hive, HBase, Impala preferred
  • Experience working in cross-functional, multi-location teams
101

Hadoop Data Modeling Data Architect Resume Examples & Samples

  • Minimum of 5+ years of business requirements analysis and information modeling (logical and physical) of key financial products (bilateral, OTC & exchange traded, centrally cleared) and concepts (e.g. positions, trades, financing transactions, future flows, collateral/margining) in the securities/financial industry
  • Minimum of 2+ years in handling physical modeling (Big Data: Exadata, Hadoop, etc.)
  • Proficiency in a data modeling tool – preferably Erwin
  • General knowledge of financial and securities industry products, processes and practices
102

Hadoop Admin Resume Examples & Samples

  • Knowledge of the Cloudera distribution of Hadoop at Barclays (mainly Kerberos-enabled)
  • CDH administration -> upgrade and patch implementation, HDFS HA configuration, user management
  • Cloudera Hadoop Security
  • Kerberos and its management
  • Sentry implementation (role based privilege granting)
  • CDH support -> troubleshooting issues related to MapReduce jobs, Hive queries, Spark jobs, Impala
  • CDH services mainly in use -> core services (HDFS, YARN, ZooKeeper, Hive), Impala, Spark, Hue, HBase, Kafka
  • Memory tuning -> Spark & Impala
  • Resource management -> scheduler queues, vcores, memory, HDFS quotas
  • BDR in Cloudera
  • Good knowledge of Hadoop cluster builds (building a cluster from scratch)
  • Linux architecture knowledge is an added skill that can be beneficial
  • TWS Scheduler, Basic Testing concepts
103

Hadoop Solution Architect Resume Examples & Samples

  • Coordinate and provide experience-based solution for teams deploying business intelligence, data warehouse and Big Data solutions
  • Strong knowledge of data warehousing and Big Data solutions and how data architecture fits into larger data warehousing and database implementation projects as a major component of the effort
  • Develop, implement and maintain data architecture best practices and standards
  • Utilizing data architecture best practices and standards, define and implement technical solutions in the movement of data throughout an organization
  • Provide leadership in related technical areas of importance such as Business Intelligence Reporting and Analytics
  • Gather requirements for data architecture through the use of business and technical interviews and round-table discussions
  • Evaluate and make decisions regarding the alternative processes that can be followed in data movement throughout an organization: ETL, SOA / Web Services, bulk load, etc
  • Evaluate and make decisions regarding the alternative tools and platforms that can be used to perform activities around data collection, data distribution and reporting
  • Show experience with the concepts of data modeling for both transaction based systems and reporting / data warehouse type systems
  • Evaluate data related requirements around data quality and master data management and understand and articulate how these factors apply to data architecture
  • Understand the concepts of data quality, data ownership, and data governance, and understand how they apply within a data architecture framework
  • A persuasive executive presence
  • A strong achievement orientation and a demonstrable entrepreneurial spirit
  • Superb team building skills with a predisposition to building consensus and achieving goals through collaboration rather than direct line authority
  • A positive, results oriented style, evidenced by listening, motivating, delegating, influencing, and monitoring the work being done
  • Ability to establish immediate credibility among his/her peers, a professional who is respected for his/her intelligence and technical expertise
  • An engaging/open interpersonal style complemented by the analytical pragmatism necessary to quickly dissect highly complex issues
  • Strong interpersonal/communication skills with the professional staff, senior level executives, and the business community at large
  • Demonstrated entrepreneurial and commercial instinct
  • Diplomatic, flexible, with a good team approach
104

Hadoop Business Analyst Resume Examples & Samples

  • The DSC landing teams will be responsible for replicating data from over a hundred sources to our Hadoop data lake
  • The data in Hadoop will be used to support the client’s data scientists
  • This role will be responsible for
  • Performing high level analysis on each source to gather needed requirements and prep the source for the data landing team
  • Work with our GDIA business customer to understand and document their needs for the data in Hadoop
  • Facilitate discussions between source and GDIA to identify and resolve open questions related to the data to be landed
  • Act as the proxy product owner for GDIA to help resolve issues for the development teams
  • Ability to negotiate solutions and drive consensus
  • High organizational, planning and time management skills; ability to prioritize effectively in a fast-paced, complex and global environment
  • Experience in working with agile methodologies using Rally
  • Hands on experience on SQL queries
  • Ability to lead by managing issues, risks, scope, plans, estimates and schedules, performance reporting across a complicated set of customers
  • 3+ years of experience as BA creating use cases and process flow
  • Experience with ETL, Hadoop or data warehousing
105

Hadoop ETL Developer Data Lake ETL Java PL/SQL Resume Examples & Samples

  • Leverage corporate Hadoop environment where you will work closely with production operations and systems engineering to design and implement tools to explore the Data Lake
  • Responsible for efficient, compliant, timely, secure acquisition, integration, provision, and exchange of enterprise data
  • Design, develop, implement, integrate, test, optimize and support innovative ETL solutions across multiple platforms, environments, domains, and locations
  • Create tools in order to enhance production stability, availability, efficiency and reliability
  • Assess and document the structure, quality, and compliance of data sources and coordinate with business and technical team to identify and resolve issues
  • Document ETL processes, programs and solutions as per established standards
  • Expected to function independently to develop ETL solutions with cutting-edge technologies and to ensure that the department's objectives are met
  • Must have at least 3 years of progressive technical working experience in ETL development
  • Must have technical working experience with ETL tools (e.g., DataStage, Informatica, SAS, SSIS)
  • Must have hands-on technical working or co-op experience with the Hadoop stack (e.g., HDFS, MapReduce, HBase, Spark, Oozie, YARN, ZooKeeper, etc.)
  • Must have extensive knowledge of most aspects of IT practices as well as a detailed understanding of systems architecture, business analysis and application development
  • Must possess good interpersonal and communication (verbal/written) skills
  • Must have hands-on technical experience in programming with JEE
  • Experience developing and tuning complex PL/SQL queries
  • Solid foundation in distributed systems concepts
  • Familiar with source-to-target mappings, test plans and other technical design/development documentation
  • General knowledge on financial industry, bank's products and services is an asset
106

Software Engineer, Hadoop Resume Examples & Samples

  • Software development both in Comcast data centers and AWS
  • Development of unit and integration tests and ownership of testing the software you develop
  • Maintenance of data dictionaries, data validation scripts
107

Software Engineer, Hadoop QA Resume Examples & Samples

  • Software development in a collaborative team environment using Scrum Agile methodologies, to build data pipelines using technologies like Spark
  • Hands on work administering applications and helping with DevOps tasks
  • Ownership of validating product requirements and turning functional and technical requirements into detailed designs, including documenting those designs
  • Responsible for looking directly at the data to build a low-level understanding of it, for analytics development, and for designing data quality checks (see the sketch after this list)
  • Experience with high volume data ingestion
  • Experience with stream processing (Storm, Flink, Spark-Streaming)
  • Experience with python development
  • Experience with Scala development
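
As referenced in the data quality bullet above, the following is a minimal PySpark sketch of automated data quality checks of the sort this role would design. The dataset path, column names and the zero-tolerance rule are hypothetical assumptions for illustration.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("dq-checks-sketch").getOrCreate()

    # Hypothetical ingested dataset; column names are illustrative.
    df = spark.read.parquet("/data/ingest/latest")

    total = df.count()
    null_ids = df.filter(F.col("record_id").isNull()).count()
    duplicates = total - df.dropDuplicates(["record_id"]).count()
    bad_dates = df.filter(~F.col("event_date").rlike(r"^\d{4}-\d{2}-\d{2}$")).count()

    # Fail the pipeline run if any check finds offending rows.
    failures = {"null_ids": null_ids, "duplicates": duplicates, "bad_dates": bad_dates}
    if any(count > 0 for count in failures.values()):
        raise ValueError(f"Data quality checks failed: {failures}")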
108

Lead Hadoop Solution Architect Resume Examples & Samples

  • Minimum 5 years’ experience as a Hadoop Solution architect
  • Minimum 2 years’ experience implementing HBase nosql databases
  • Minimum 2 years’ experience with Spark and Spark streaming
  • Experience using Apache NiFi or Hortonworks Data Flow (HDF)
  • Ability to define and execute capacity planning and upgrade roadmaps
  • Demonstrated knowledge of current big data and analytics solution platforms such as Hadoop, Spark, R, Aster and integrations with Teradata
  • Solid understanding of third party analytics interfaces (Tableau, Business Objects, Informatica & others)
  • Strong communication skills (written, verbal, presentation)
  • Experience and passion in providing mentoring
109

Hadoop Data Engineer Resume Examples & Samples

  • Work closely with data scientists and data engineers to implement, validate, test and productionize recommendation models
  • Optimize and improve algorithm efficiency to scale up recommendation engine
  • Integrate massive datasets from multiple data sources for data modeling
  • Maintain and improve recommendation engine
  • BS, MS or PhD in Computer Science/Engineering or Electrical Engineering, with minimum 3 years’ experience
  • Ability to program in one or more scripting languages such as PHP or Python and one or more programming languages such as Java, C++ or C#
  • Experience with large scale data pipelines and distributed systems (Hadoop)
  • Strong ability to optimize machine learning algorithms, tackle technical issues when manipulating massive datasets, and smoothly integrate multiple data sources
  • Experience in developing at-scale machine learning systems
  • Familiar with quantitative marketing and audience targeting
  • Familiar with machine learning algorithms, including regression, GBM, recommendation systems, neural networks, clustering, etc
  • Comfortable presenting findings and creating reports via Excel and PowerPoint
110

Scala / Spark / Hadoop Big Data Developers Resume Examples & Samples

  • Build a data platform that enables the business teams at the client to create the products needed by our clients
  • Ensuring high quality throughout the development process
  • Collective ownership of a cloud based platform using big data technologies
  • Actively look for ways to make things better
  • You have a passion for writing good code
  • Must be able to articulate benefits of Big Data
  • Hands-on Spark experience is a must have
  • Java/Scala are a great plus
  • Hands on experience with Hadoop ecosystem: MapReduce, Pig, Hbase, Hive, JavaMR etc
  • Distributed Applications
  • Data Modelling/KeySpace definition
  • Consulting on Big Data projects: Proof Of Concepts, implementation, delivery
  • Experience building large scale distributed data processing systems/applications or large scale internet systems
  • Experience working with Agile Methodologies
  • Knowledge of software best practices, like Test-Driven Development (TDD) and Continuous Integration (CI)
  • Strong understanding of relational and multi-dimensional data design and reporting
  • Unit testing frameworks and functional testing frameworks
  • Experience in Requirements Engineering, Solution Architecture, Design, Development and Deployment
  • Data Visualisation skills
  • Knowledge of networking
  • Enterprise Business Intelligence and Analytics
  • Experience with statistical packages like SPSS
111

Hadoop ETL Specialist Resume Examples & Samples

  • 1) Application Software Development
  • Assist Business Analyst in performing investigation, analysis and evaluation to determine business requirements
  • Design, develop and maintain ETL specific code
  • 2) Data Testing
  • Coordinate coding, testing, implementation, integration and documentation of solution. Develop program specifications
  • Demonstrates good understanding of the Software Development Life Cycle
  • Proficiency in ETL, specifically Hadoop (e.g., Pig, Java, etc.), DataStage and Informatica
  • UNIX skills (e.g., shell scripting)
  • Experience and knowledge of data architecture and concepts of relational and dimensional databases
  • Experience with enterprise application architecture and enterprise integration patterns
  • Ability to implement re-usable data-integration/ETL code in an enterprise data-warehouse environment
  • Ability to lead a work package for development and delivery
  • Ability to work well in a challenging environment
  • Ability to multi-task and prioritize for self and team members
112

Hadoop / Big Data Platform Engineer Resume Examples & Samples

  • Bachelor’s degree in a related field or equivalent experience/training
  • 10+ years' experience in related IT fields
  • Previous demonstrated proficiency in core Hadoop technologies (i.e. Hadoop, Map-Reduce, Hive, Pig, Oozie, HDFS, Spark etc.)
  • Experience managing large scale infrastructure for data-intensive applications
  • Knowledge of programming technologies, such as Python, Ruby, and shell script
  • 5+ years' experience with enterprise server environments on complex corporate TCP/IP networks
  • Knowledge of HP and Dell hardware
  • Experienced in performing a variety of complicated tasks
  • Excellent documentation skills and procedural development capabilities are a core requirement
  • Ability to deal with ambiguity, manage both in plan and out of plan requirements
  • Ability to communicate and market personal and team achievements, manage conflict and escalations
  • Ability to collaborate with key stakeholders including architects, software engineering, and project managers to report out development progress and escalates issues requiring attention
  • Knowledge of Chef and other automation tools
  • Cross team agility and strong team work ethic
  • Good knowledge of Oracle Linux/Red Hat Linux, and/or SUSE Linux
  • Good knowledge of clustering technologies
  • Good understanding of network communication and demonstrated troubleshooting techniques
  • Experience with other open source technologies
  • Experience with large database implementations
113

Senior Technical Lead-hadoop Technologies Resume Examples & Samples

  • Act as the lead technical resource responsible for directing the overall technical progress of projects or application initiatives targeting the Hadoop platform. In this capacity, they will plan and administer the operational activities of a Development/Systems Analysis/ Programming unit or team
  • Work with data architects to ensure that Big Data solutions are aligned with company-wide technology directions
  • Communicate progress across organizations and levels from individual contributor to senior executive. Identify and clarify the critical few issues that need action and drive appropriate decisions and actions. Communicate results clearly and in actionable form
  • Experience with data warehouse technologies using Informatica Power Center / BDM, SQL, PL/SQL, CDC Techniques/Tools, Oracle on Exadata
  • Excellent interpersonal and communication skills, including business writing and presentations
  • ETL experience a plus
114

Hadoop Admin Global Delivery Center Resume Examples & Samples

  • Experience with ANY ONE of the following
  • Proficiency in Spark, Hive internals (including HCatalog), SQOOP, MapReduce, Oozie and Flume/Kafka
  • Web/Application Server & SOA administration (Tomcat, JBoss, etc.)
  • Hadoop integration with large scale distributed data platforms like Teradata, Teradata Aster, Vertica, Greenplum, Netezza, DB2, Oracle, etc
115

Think Big Hadoop Data Engineer Resume Examples & Samples

  • Working knowledge of Maven & Git
  • Exposure to XML processing
  • Exposure to NoSQL platforms (MongoDB, Cassandra, HBase etc)
  • Awareness of Big Data & related technologies
116

Hadoop Program Manager Resume Examples & Samples

  • Drive the commercialization of the Teradata Hadoop Appliance and associated software releases from our 3rd party Hadoop partners
  • Manage the development and delivery of high quality deliverables required to successfully compete in the marketplace
  • Manage, maintain and execute the Teradata Hadoop product roadmap
  • Manage many internal stakeholders in Teradata Labs, Teradata CS, ThinkBig, and Teradata Alliance Mgmt
  • Excellent program management skills with strong analytical and business acumen
  • Experience managing cross-functional teams that are not under direct control including product management, engineering, sales, sales industry marketing, and customers
  • Self-directed and detail-oriented; doesn't let items "slip through the cracks"
  • Help team members manage their priorities to achieve program milestones and goals
  • Strong communication skills with the ability to present concepts to management, team members as well as non-technical audiences
  • Demonstrated knowledge of agile release processes with the ability to apply it to the deployment and product realization
  • Excellent written, verbal communication, presentation, and interpersonal skills
  • Multitask between projects with shifting priorities
  • Ability to take on additional roles and responsibilities as required
  • US Permanent Residence
  • BS or MS in computer science, engineering, business or related discipline
  • 5+ years of experience in program or product management or development for enterprise solutions
  • Masters in engineering, computer science, business or related discipline
  • Experience managing agile development and support life cycles
  • Demonstrated understanding of Hadoop, the Hadoop ecosystem, NoSQL databases, and emerging big data technologies
117

Hadoop Admin Resume Examples & Samples

  • Minimum experience of 1 year in Managing and Supporting large scale Production Hadoop environments in any of the Hadoop distributions (Apache, Teradata, Hortonworks, Cloudera, MapR, IBM BigInsights, Pivotal HD)
  • Development or administration on NoSQL technologies like Hbase, MongoDB, Cassandra, Accumulo, etc
  • Analysis and optimization of workloads, performance monitoring and tuning, and automation
  • Addressing challenges of query execution across a distributed database platform on modern hardware architectures
  • Define standards, Develop and Implement Best Practices to manage and support data platforms
  • Experience on any one of the following will be an added advantage
  • Knowledge of Business Intelligence and/or Data Integration (ETL) operations delivery techniques, processes, methodologies
118

Hadoop Senior Consultant Resume Examples & Samples

  • Have hands-on experience in the design, development or support of Hadoop in an implementation environment at a leading technology vendor or end-user computing organization
  • 5+ years experience leading and managing development teams
  • 1+ years experience implementing ETL/ELT processes with MapReduce, PIG, Hive
  • 1+ years hands-on experience with HDFS and NoSQL databases such as HBase, Cassandra on large data sets
  • Hands on experience with NoSQL (e.g. key value store, graph db, document db)
  • 5+ years experience in performance tuning and programming languages such as; Shell, C, C++, C#, Java, Python, Perl, R
  • Demonstrate a keen interest in, and solid understanding of, “big data” technology and the business trends that are driving the adoption of this technology
  • Maintain a good level of understanding about the Hadoop technology marketplace
  • Strong understanding of data structures, modeling and Data Warehousing
  • Team-oriented individual with excellent interpersonal, planning, coordination, and problem-solving skills
  • High degree of initiative and the ability to work independently and follow-through on assignments
  • Excellent oral and written communication skills
  • BS or MS degree in Computer Science or relevant fields
119

Think Big-hadoop Admin Resume Examples & Samples

  • Experience with High availability, BAR and DR strategies and principles
  • Hadoop software installation and upgrades
  • Proficiency in Hive internals (including HCatalog), SQOOP, Pig, Oozie and Flume/Kafka
  • Development, Implementation or deployment experience on the Hadoop ecosystem (HDFS, MapReduce, Hive, Hbase)
  • Articulating and discussing the principles of performance tuning, workload management and/or capacity planning
  • Exposure to data acquisition, transformation & integration tools like Talend, Informatica, etc. & BI tools like Tableau, Pentaho, etc
120

Application Architect Hadoop Environment Resume Examples & Samples

  • Research, optimize and monitor health of big data environment
  • Facilitate optimal configuration and service management across Hadoop cluster
  • Promote and socialize full Big Data cycle, data ingestion practices, processing and storage within the larger organization
  • Design, develop, maintain, and enforce company standards and architectural artifacts
  • Follow relevant developments within technology industry
  • Miscellaneous activities and responsibilities as assigned by manager
  • Bachelor’s degree from an accredited institution required in Computer Science, Computer Engineering, Software Engineering, Information Systems/Technology or related major field of study
  • 5 or more years of experience required in Transportation/Intermodal Operations, Information Systems, or Logistics
  • 10 or more years of experience required in Transportation/Intermodal Operations, Information Systems, or Logistics
  • Master’s degree (or higher) from an accredited institution in Computer Science, Computer Engineering, Software Engineering, Information Systems/Technology or related major field of study
  • Knowledge of Software Integration design patterns
  • Experience working with REST and SOAP protocols
  • Knowledge of Hadoop technologies: Storm, HBase, Phoenix, Kafka
  • Experience writing JUnit tests
  • Experience with Agile methodology and tools like Jira, Confluence and Bitbucket
  • Knowledge of Information/data modeling
  • Careful attention to detail
  • Technology industry knowledge
  • Ability to lead and facilitate discussions across multiple teams
  • Written and oral communication skills including interaction with business partners, vendors and technical staff
  • Knowledge of Service Oriented Architecture (SOA) Stack
  • Knowledge of World Wide Web Consortium (W3C) Standards
  • Knowledge of Architecture concepts and frameworks
121

Hadoop Data Engineer Resume Examples & Samples

  • Work with the data team to efficiently use Hadoop infrastructure to analyze data, build models, and generate reports/visualizations
  • Create and optimize distributed algorithms to scale out the recommendation engine
  • Implement methods for automation of all parts of the predictive pipeline to minimize labor in development and production
  • Formulate business problems as technical data problems while ensuring key business drivers are captured in collaboration with product management
  • Knowledge in machine learning algorithms especially in recommender systems
  • BS, MS or PhD in Computer Science/Engineering or Electrical Engineering
  • 5+ years of working experience in specifically Hadoop Data Engineering
  • Strong grasp of one or more programming languages such as Java, C++, Python etc
  • Strong understanding of algorithms and data structures
  • Strong understanding of multi-threading and resource management
  • Experience in writing optimized MapReduce code (see the Hadoop Streaming sketch after this list)
  • Experience working with large datasets in Hadoop (using scripts and tools - Pig/Hive etc.)
  • Familiar with machine learning algorithms, including regression, GBM, recommendation systems, clustering, etc
  • Experience working with relational databases like MySQL
  • Strong analytics and problem solving capabilities combined with ambition to solve real-world problems
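
As referenced in the MapReduce bullet above, a minimal Hadoop Streaming sketch in Python is shown here; the tab-delimited input layout and the counted field are assumptions made purely for illustration.

    #!/usr/bin/env python
    # mapper.py -- emits (key, 1) pairs read from stdin (Hadoop Streaming).
    import sys

    for line in sys.stdin:
        fields = line.rstrip("\n").split("\t")
        if len(fields) > 2:
            print(f"{fields[2]}\t1")   # count records per the third column (assumed key)

    #!/usr/bin/env python
    # reducer.py -- sums counts per key; Hadoop sorts map output by key before this step.
    import sys

    current_key, current_count = None, 0
    for line in sys.stdin:
        key, value = line.rstrip("\n").split("\t", 1)
        if key == current_key:
            current_count += int(value)
        else:
            if current_key is not None:
                print(f"{current_key}\t{current_count}")
            current_key, current_count = key, int(value)
    if current_key is not None:
        print(f"{current_key}\t{current_count}")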
122

Hadoop Hbase Solution Architect Resume Examples & Samples

  • Hadoop, HDFS, Hbase, Hive, Sqoop, Spark
  • Experience in using Pig, Hive, Sqoop, HBase and Cloudera Manager
  • Experience in importing and exporting data using Sqoop from HDFS to relational database systems and vice-versa (see the sketch below)
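
The Sqoop bullet above describes moving data between HDFS and a relational database; the sketch below expresses the equivalent transfer in PySpark over JDBC (not Sqoop itself). The JDBC URL, credentials, table names and paths are placeholders.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("rdbms-transfer-sketch").getOrCreate()

    # Placeholder connection details; the JDBC driver jar must be on the classpath.
    jdbc_url = "jdbc:mysql://db-host:3306/sales"
    props = {"user": "etl_user", "password": "***", "driver": "com.mysql.cj.jdbc.Driver"}

    # RDBMS -> HDFS, the direction `sqoop import` automates
    orders = spark.read.jdbc(jdbc_url, table="orders", properties=props)
    orders.write.mode("overwrite").parquet("/data/landing/orders")

    # HDFS -> RDBMS, the direction `sqoop export` automates
    summary = spark.read.parquet("/data/marts/order_summary")
    summary.write.jdbc(jdbc_url, table="order_summary", mode="append", properties=props)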
123

Hadoop Software Automation Engineering Intern Resume Examples & Samples

  • Innovate, and contribute to technical architecture, design, and code, in the areas of test automation for open-source and database software running in a Hadoop cluster
  • Develop automated tests to verify correctness, scale, performance and usability of Hadoop software processes
  • Design, build, and manage tests that leverage Teradata Hadoop Appliance, Hadoop data connectors, and Hadoop eco-system modules
  • Automate the creation and tear-down of Hadoop test clusters in a Cloud environment using virtual servers
124

Hadoop Support Consultant Resume Examples & Samples

  • A consultant/specialist who supports SAS data scientist requests in using Hortonworks Distribution technologies
  • Someone who can negotiate with the Infrastructure Architect
  • Someone who can take a technical requirement and turn it into Infrastructure using the Hadoop architecture
  • BA in relevant technical field
125

Hadoop Sales Engineer Resume Examples & Samples

  • You would be working with customers, partners, and prospects to understand and propose solutions using Cloudera technologies
  • Lead technical sales process for numerous customers day to day, from introductory meetings (net new sales) through post-sales (customer satisfaction, upselling, and subscription renewals)
  • Map your customer requirements to current and future offerings
  • Educate your prospects on the business value of Cloudera’s offerings
  • Drive progress towards successful sales, in concert with Account Executives
  • You should focus on customer success
  • Continuously learn and update your skills in quickly evolving technologies
  • Ability to switch context quickly throughout the day with numerous contending demands
  • Minimum 3 years experience and success working in a customer-facing role
  • Hands-on, technical, problem solver
  • Development experience with Java / OO programming
  • Unix/Linux expertise required
  • Comfortable partnering with customers and with public speaking
  • Demonstrated experience gathering and understanding customer business requirements
  • Experience with data warehousing and relational database architecture
  • Knowledge of ETL and ETL workflows
  • Oracle, MySQL or PostgreSQL
  • SQL development and optimization
  • Concurrency and synchronization
  • Fallacies of distributed computing
  • Common IPC/RPC methods and patterns
  • High availability and business continuity
  • Queuing patterns and pipeline design
  • Batch operations
  • Messaging systems and patterns
  • Solid OS / networking fundamentals
  • Virtual memory management
  • File system design
  • System administration knowledge
  • Network architecture
126

Think Big Hadoop Application Support Resume Examples & Samples

  • Be providing applications support to Think Big Customers on Hadoop platforms
  • Be responsible for maintaining custom-built applications mainly utilizing Spark / MapReduce services on Hadoop, R programs, Think Big Kylo, or any third-party tools
  • Monitoring, debugging and troubleshooting the application
  • Root cause analysis of job failures by identifying data issues and bugs in code
  • 2+ years of experience in Applications Support (Java / J2EE, ETL, BI operations, Analytics support)
  • Experience in scripting languages (Linux shell, SQL, Python); you are proficient in shell scripting
  • Working experience with one of the Scheduling tools (Control-M, JCL, Unix/Linux-cron etc.)
  • Experience in JIRA, Change Management Process
  • Experience in developing / supporting RESTful applications
  • You should be ready to work in shifts
127

Hadoop Data Engineering Lead Resume Examples & Samples

  • Platform design encompassing the big data platform, ETL tools, and reporting & analytics tools
  • Design, build, rollout and support of the analytics warehouse
  • Design, build, rollout and support of ETL processes
  • Work closely with data scientists and engineers to design and maintain scalable data models and pipelines
  • Level 3 data operations definition and support
  • Minimum 10+ years of relevant experience in information management, ETL & Business Intelligence
  • Proven ability to work with varied forms of data infrastructure, including relational databases, Map-reduce/Hadoop, and NoSQL databases
  • Hadoop Certified Developer or equivalent experience
  • Experience working with SQL/Hive, Pig, and Spark
  • Expertise in one or more programming languages (Java and/or Python preferred)
  • Knowledge of XML (e.g., DTDs, XSDs, XSLT, etc.), messaging systems (e.g., Oracle JMS, QPID, ActiveMQ, etc.) and SQL
  • Prior experience leading a small team is a plus (this role will be as an individual contributor - but may have additional responsibilities later as one matures in the position)
  • Work with team across global locations
128

Hadoop Systems Administrator Resume Examples & Samples

  • Maintain the communication with vendors, other use case resources and customers during all phases of the ingestion and consumption
  • Act as system analyst for projects or use cases which are infrastructure-focused
  • Provides 1st and 2nd level support and coordinates 3rd-level support with vendors, as required
  • Develop standardized practices for delivering new products and capabilities using Big Data technologies
  • Troubleshoot issues within the various cluster’s environments
  • Performance tuning of a Hadoop processes and applications
  • Able to identify and analyze complex problems, identify root causes, provide a detailed description and plan, and design and deliver workarounds/solutions
  • Comfortable interviewing non-technical people to gather/discuss requirements
  • Gets easily acquainted with new technologies
129

Hadoop Senior Data Engineer Resume Examples & Samples

  • Proficiency with Hadoop v2, MapReduce, HDFS, Sqoop1 and 2, Oozie
  • Experience in designing, developing, and maintaining software solutions in Hadoop cluster
  • Must demonstrate Hadoop best practices
  • Demonstrates broad knowledge of technical solutions, design patterns, and code for medium/complex applications deployed in Hadoop production
  • Minimum 5 years' working experience with data warehousing and Business Intelligence systems
  • Experience with NoSQL Databases, such as HBase, Cassandra, MongoDB
  • Good knowledge of Big Data querying tools, such as Pig, Hive, Impala, Drill, Presto
  • Experience with building stream-processing systems, using solutions such as Spark-Streaming or Storm (see the sketch after this list)
  • Experience with Spark, Flink, Spring XD
  • Experience with various messaging systems, such as Kafka or RabbitMQ
  • Experience with Big Data Machine Learning toolkits, such as Mahout, SparkML, or H2O
  • Good understanding of Lambda Architecture, along with its advantages and drawbacks
  • Experience and knowledge of major Hadoop distribution such as Cloudera, MapR, Hortonworks
  • Experience with programming languages such as Java, Scala, Python, SQL
  • Problem solving /analytical thinking
  • Systems change / configuration management
  • Experience working in a DevOps model and a passion for automation
  • Ability to translate high level design into specific implementation steps
  • Experience working in central data team (appreciation of strong governance framework)
  • Ability to learn quickly and think outside of the box
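
For the stream-processing bullet referenced above, here is a minimal PySpark Structured Streaming sketch that reads from Kafka and counts events per window. The broker address, topic, checkpoint path and window size are assumptions, and the spark-sql-kafka package must be available at submit time.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("kafka-stream-sketch").getOrCreate()

    # Read a hypothetical "events" topic from Kafka.
    raw = (
        spark.readStream
             .format("kafka")
             .option("kafka.bootstrap.servers", "broker1:9092")
             .option("subscribe", "events")
             .load()
    )

    # Kafka delivers binary key/value columns; cast the value and count per 5-minute window.
    counts = (
        raw.selectExpr("CAST(value AS STRING) AS value", "timestamp")
           .groupBy(F.window("timestamp", "5 minutes"))
           .count()
    )

    query = (
        counts.writeStream
              .outputMode("complete")
              .format("console")
              .option("checkpointLocation", "/tmp/checkpoints/kafka-stream")
              .start()
    )
    query.awaitTermination()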
130

Datastage, Hadoop Resume Examples & Samples

  • Strong in database concepts and working knowledge in one or more databases like Teradata/SQL Server/Oracle/DB2 etc
  • Good understanding of UNIX systems, shell scripting, etc
  • Candidates with support experience are not eligible, since this is a core development requirement and candidates need to have hands-on experience
131

Junior Level Hadoop Admin Resume Examples & Samples

  • 3+ years dev/admin experience working on Hadoop environment. (Cloudera would be preferred)
  • Strong understanding of components under Apache Hadoop ecosystem
  • Detailed understanding of daemons under NameNode and YARN
  • Developed applications using MapReduce, Spark, Hive, Impala, HBase, Solr, Flume, Kafka, Oozie
  • Understanding of Kerberos for Hadoop security
  • Knowledge of Cloudera Navigator for audits
  • Experience with Linux OS
132

Hadoop Solutions Architect Resume Examples & Samples

  • Act as the Subject Matter Expert and hold overall accountability for the technical delivery and implementation of projects involving complex integrations, regardless of purchased, built, or hybrid solutions
  • Develop solution, technology and infrastructure roadmaps, including options that are mapped to technology roadmaps
  • Demonstrate strong leadership and the ability to lead multi-million dollar projects
  • Work together with infrastructure and enterprise architects
  • Work with some predefined standards and with unknowns
  • Able to work under pressure and be a goal-getter
  • Participate in the Architecture forum
  • Research and recommend enhancements to the strategic technology evolution of the organization based on new and emerging technologies
  • Translate highly complex concepts for peers and customers
  • May participate in the full systems life cycle for designing, coding, implementing and supporting applications software
  • Must have 6 - 8 years of demonstrated technical working experience with Extraction, Transformation & Load (ETL) & data warehouse containing terabytes (TBs) of data
  • Must have working experience in leading multi-million dollar projects containing TBs of data volume in an architect role
  • Must have working experience / knowledge on the design of work flow for complex production processes
  • Must have good working experience & knowledge with data modelling
  • Must have demonstrated working experience with data architecture
  • Must possess excellent organizational skills and the ability to manage multiple complex initiatives
  • Possess demonstrated technical leadership and is proactive + results oriented
  • Working experience in Hadoop Architecture is highly desired
  • Working knowledge in Retail Risk (lines of credit or credit cards) is an asset
  • Working experience in a multi-disciplinary setting accessing multiple databases in a multi-system, multi-architecture environment is an asset
133

Hadoop Lead / Architect Resume Examples & Samples

  • Works as part of GIS BIDM team to design and implement big data solutions in alignment with business needs and project schedules
  • Design scalable data analytics platform and solutions in Hadoop and develop visualization and reporting solutions (leveraging technologies such as Tableau, Business Objects)
  • Code, test, and document new or modified data systems to create robust and scalable applications for data analytics
  • Works with other developers to make sure that all data solutions are consistent
  • Interfaces with business community and provides ongoing status
  • Performs technology and product research to better define requirements, resolve important issues and improve the overall capability of the analytics technology stack
  • Evaluates and provides feedback on future technologies and new releases/upgrades
  • 6-8 years of data engineering, data science, or software engineering experience
  • Strong experience working hands-on with big data systems such as HDFS, Hive, Impala and Spark on one of the Hadoop distributions such as CDH, HDP etc
  • Experience with agile or other rapid application development methodologies
  • Experience with object-oriented design, coding and testing patterns as well as experience in engineering (commercial or open source) software platforms and large-scale data infrastructures
  • Significant knowledge of data modeling and understanding of different data structures and their benefits and limitations under particular use cases
  • Strong knowledge of different programming or scripting languages (SQL, Python, Scala)
  • Strong organizational and multitasking skills with ability to balance competing priorities
  • An ability to work in a fast-paced environment where continuous innovation is desired and ambiguity is the norm
  • Experience with a RDBMS –Microsoft SQL Server, Oracle, DB2
  • Experience in NoSQL databases such as HBase, Couch Base, Vertica, MongoDB, Cassandra is a plus
  • Experience in an ETL tool and Reporting Tools (such as BO and Tableau) is a plus
  • Experience collaborating with global teams
  • Ability to work independently and collaboratively with other staff members
  • Excellent verbal, written and presentation communications skills, organizational capabilities, and collaborative interpersonal skills
134

Hadoop Expert Industrial Data Analytics Resume Examples & Samples

  • You will join a team which is building and maintaining high-throughput Big Data infrastructures for high-volume data processing and analysis, and support the interaction with our ETL chains in both on-premise and AWS instances
  • Your focus is on configuring, managing and operating a Hadoop ecosystem, including onboarding of new data sources
  • For this, it is important to have a sound UNIX background and understand the Hadoop ecosystem
  • You will work with tools like HIVE, PIG, TEZ, MAPREDUCE, HBASE, STORM, SPARK and SQOOP
  • For development and operations support, you will work with YARN, AMBARI, OOZIE, ZEPPELIN and ZOOKEEPER
  • Based on your drive and results, you may be developed into the position of key expert for the Hadoop system
  • You hold a university degree in Information Technology or similar
  • You have experience in operating and maintaining complex IT products/solutions
  • You have experience with Big Data Technologies, understand the concepts and technology ecosystem around both real-time and batch processing in Hadoop
  • You are skilled in IT Service Management and IT Service Operations, both conceptual and operational
  • You have intercultural experience, e.g. through international projects, and have worked with near- and offshore teams
  • Your German and English are business fluent
135

Hadoop & Analytics Architect Resume Examples & Samples

  • Responsible for the technical design of Analytics and big data systems including ETL, data transformations, reporting and analytics services
  • Examine and evaluate Analytics & big data requirements for various business units across the organization
  • Applying architecture and engineering concepts to design solutions that adhere to organization standards and cater for re-usability, scalability, maintainability and security
  • Contributing to analytics strategy and architecture that will align investments with the highest impact areas, while producing a suite of complementary models, tools and capabilities
  • Working with cross-functional teams to discover and develop actionable, high-impact data analytics requirements and opportunity statements in a variety of core business areas
  • Assessing tool implementations and architectures in the Analytics & Big data space, reviewing for gaps against business needs and industry best practices
  • Contributing to business process design and re-engineering impacted areas
  • Providing oversight to analytics and data warehousing activities in the team to ensure alignment of priorities, coordination of inter-related projects and ensuring success stories are leveraged
  • Effectively communicating analyses, recommendations, status, and results to both business & technology leads
  • Leading proof of concepts to create breakthrough solutions, performing exploratory and targeted data analyses to drive iterative learnings
136

Hadoop Software Engineer, Senior Resume Examples & Samples

  • Writes, edits, and debugs new computer programs for assigned projects, including necessary records and desired output
  • Tests new or enhanced programs to ensure logic and syntax are correct
  • Preferred (Other): database design, development, and normalization using Oracle DBMS or Microsoft SQL Server
  • Preferred (Other): web service protocols (SOAP/REST/WSDL) and data formats (XML/JSON)
  • Preferred (Other): use of both structured and object-oriented systems analysis, design, development tools, and techniques
  • Preferred (Other): broad base of knowledge of various OS including all Microsoft OS, UNIX, Linux
  • Preferred (Other): continuous integration technologies (TFS, Maven, Jenkins, Sonar, JUnit)
  • Advanced: demonstrated analytical skills
  • Intermediate: ability to effectively present information and respond to questions from peers and management
  • Required (Advanced, Other): specific technologies listed below are required based on assigned function/area (2 years exp required)
  • Required (Advanced): Java
  • Required (Advanced): J2EE
  • Required (Advanced): MQSeries (WebSphere MQ)
  • Required (Advanced): Informatica PowerCenter
137

Data Engineers NoSQL / Hadoop / Hive / DataStage / Agile Positions Resume Examples & Samples

  • Manage and participate "hands-on" in the implementation of centralized code control, software release builds and testing, as well as application deployment mechanisms to development, testing and production environments. Also responsible to evangelize and train other agile team members on the use of the tools and best practices that ensure incremental new functionality and fixes are deployed with the frequency they are needed by the project team
  • Provide technical expertise, guidance, advice and knowledge transfer to all development staff on all aspects of code management, automated release builds and code deployment. Provide recommendations on departmental standards surrounding systems architecture, application development, release build testing and code deployment
  • May be required to carry a beeper/mobile phone and be available at all times to provide support to the system during release builds
  • Maintain a good understanding of the division's business strategies, business policies, risk management and IT processes and disciplines. In addition, required to provide leadership and specialized consultation in defining, planning and maintaining a strategy for the architecture, development and implementation of technology and systems
  • Systems Analysis & Design
  • Latest NoSQL technologies/Hadoop/Hive
  • Continuous Integration & Deployment concepts; and
  • DataStage
138

Hadoop / Spark Developer Resume Examples & Samples

  • Build and implement front-end web applications and back-end services that integrate with other products
  • Validate requirements and system design
  • 3+ years' experience in Hadoop required. Strong in Spark, MapReduce, Hive, Impala, HBase, and experience using Hadoop APIs (Java & REST API)
  • 5+ years' experience in Java or Scala required
  • Strong SQL experience, performance tuning and optimization
  • ETL experience with tools like Informatica, Ab-initio, Nifi, Talend is a plus
  • Experience working with multiple Relational (Oracle, SQL Server, DB2 etc) and NoSQL Databases (MongoDB, Cassandra) preferred
  • Experience with Data modeling, Data warehouse design, development would be a plus
  • Experience in Shell Scripting, Procedural languages like PL/SQL, Perl or Python preferred
  • Very good knowledge of the software development life cycle, agile methodologies, and test driven development
139

Hadoop ETL Developer Resume Examples & Samples

  • Data cleansing and data quality/integrity testing to prepare data for general analytical use
  • ETL code optimization to produce production ready code
  • Documentation of all development code
  • Testing of new tools in the Hadoop ecosystem
  • Along with the rest of the team, actively research and share learning/advancements in the Hadoop space, especially related to development
  • Knowledge of Hadoop data model architecture
  • Experience optimizing and automating code as well as fail-safe design
140

Data Scientist on Hadoop Platform Stack Resume Examples & Samples

  • Data science implementation/coding on the Hadoop platform stack (MapReduce programming, Python, Spark, R on Hadoop) is a must have (see the sketch after this list)
  • Leading development of machine learning models/predictive analytics techniques leveraging both repeatable patterns in data and discovering new that reduce cost, increase performance, reliability and or endurance in semiconductor products
  • Working through the full flow of very challenging data problems: from data cleaning, to building a concept of the model, to extracting the right features and testing the model, driving better decision making through various data and statistical modeling techniques and algorithm development involving real-world, noisy data
  • Ability to quickly understand challenging business problems, find patterns and insights within structured and unstructured data
  • Data analysis, predictions, discovery, distilling complex science down to digestible insight, unlocking repeatable patterns that can be actionable foresight
  • Ability to access, analyze and transform large product lifecycle and process data in the semiconductor/fab manufacturing industry
  • Ability to think critically about the relationships of different metrics measured and process steps to land the right features for a given model
  • Challenge current best thinking, test theories, evaluate feature concepts and iterate rapidly
  • Owning the deliverables and managing the priorities/timelines
  • Prototype creative solutions quickly, and be able to lead others in crafting and implementing superior technical vision
  • Solid fundamentals, knowledge of machine learning algorithms, classification, clustering, regression, Bayesian modeling, probability theory, algorithm design and theory of computation, linear algebra, partial differential equations, Bayesian statistics, and information retrieval
  • Possesses strong combination of theoretical knowledge and hands-on experience in data mining, feature selection, dimensionality reduction, statistical techniques, regression analysis, machine learning algorithms and failure prediction
  • Experience with advanced data mining, predictive modeling algorithms, developing data models, reinforced, supervised and unsupervised learning methods such as SVMs, linear classifiers, Markov models, Bayesian networks and clustering techniques strong plus
  • Experience and expertise in machine learning tools and technologies, with using R for modeling, including developing R scripts and other scripting languages like Python, Perl
  • Experience/proficiency in at least one compiled/object oriented programming language e.g. Java/C++
  • Experience with big data technologies such as Hadoop, MapReduce, Mahout, Hive, Pig etc. and parallelization tools in enterprise Big Data Platform stack, technologies and ecosystem is a strong plus
  • Experience in natural language processing techniques and text analytics is a strong plus
  • Advanced degree (Master's or Ph.D.) in Computer Science, Statistics, Applied Math, or Engineering with an emphasis on Machine Learning
  • Highly motivated, team player with an entrepreneurial spirit and strong communication and collaboration skills, self-starter with willingness to learn, master new technologies and clearly communicate results to technical and non-technical audience
  • Fluency with understanding and learning semiconductor/wafer data; fluency in learning and adapting to existing tools such as Yield Explorer and others; a very entrepreneurial personality, not afraid to experiment, with hands-on coding and iterative problem solving
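
To illustrate the Spark/Python modeling workflow referenced in the first bullet of this section, the sketch below trains a simple failure-prediction model with Spark MLlib. The input path, feature columns and the 0/1 label column are hypothetical stand-ins for real process data.

    from pyspark.sql import SparkSession
    from pyspark.ml import Pipeline
    from pyspark.ml.feature import VectorAssembler
    from pyspark.ml.classification import LogisticRegression
    from pyspark.ml.evaluation import BinaryClassificationEvaluator

    spark = SparkSession.builder.appName("failure-prediction-sketch").getOrCreate()

    # Hypothetical wafer/process measurement table; column names are assumptions.
    data = spark.read.parquet("/data/process_measurements")

    features = ["temp_mean", "pressure_mean", "etch_time", "defect_density"]
    assembler = VectorAssembler(inputCols=features, outputCol="features")
    # 'failed' is assumed to be a 0/1 label column.
    lr = LogisticRegression(featuresCol="features", labelCol="failed", maxIter=50)

    train, test = data.randomSplit([0.8, 0.2], seed=42)
    model = Pipeline(stages=[assembler, lr]).fit(train)

    # Evaluate area under ROC on the held-out split.
    auc = BinaryClassificationEvaluator(labelCol="failed").evaluate(model.transform(test))
    print(f"Holdout AUC: {auc:.3f}")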
141

Software Engineer Lead With Hadoop on DTRA Resume Examples & Samples

  • 8+ years Software Engineering, 2 years in Lead role
  • 4+ years Java with Hadoop and Big Data
  • Development skills – SQL, XML, JSON, REST, JavaScript, JMS, Java/J2EE, Python, MapReduce, Accumulo/HBase, Hadoop, Spark, Storm, Solr/Lucene, JIRA, Jenkins, Maven
  • Developing code to meet dynamic specifications
  • Agile/SCRUM Development
  • ETL, Data Ingest and ABAC
  • Linux (RHEL, CentOS)
  • M.S. degree in Computer Science, Software Engineering, or similar technical discipline required
  • Cloudera Certified Developer for Apache Hadoop (CCDH)
142

Hadoop Database Administrator Resume Examples & Samples

  • Perform care and feeding of our Big Data environments and its interfaces built upon technologies in the Hadoop Ecosystem including Hive, Impala, HUE, Sentry, Spark
  • Day-to-day monitoring and troubleshooting of jobs, problems and performance issues in our clusters
  • Coordinate and schedule Cluster/Service downtimes to perform maintenance procedures. i.e. DB patching, upgrades, etc
  • Work with other data architecture teams on tuning their HIVE /Impala queries
  • Work with other IT infrastructure teams to evaluate new and different types of infrastructure to improve performance, reliability and capacity
  • Work with other IT and data management teams to support their Hadoop use cases, provide feedback and operational design guidance
  • Evaluate/deploy new Big Data technology features
  • Work closely with other data and architecture teams, while prioritizing responsibilities and simultaneously working on multiple high-priority tasks
  • Stay current with changes in the technical area of expertise
  • Participate in on-call rotations
  • 5-7 years’ experience in Database technologies (relational/NoSQL)
  • Experience supporting 24x7 Database environments oriented towards a zero downtime target
  • Experience working with Cloudera CDH clusters is a plus
  • SQL tuning experience with the ability to support complex SQL applications
  • Strong Linux/Unix administration skills
  • Understanding of best practices for building Data Lake and analytical architecture on Hadoop
  • Experience supporting cloud based technology
  • Programming/Linux shell scripting experience
  • Developing skills in Scala, Java, Python is a plus
  • Working experience with NoSQL Platforms (MongoDb, Couchbase, HBASE) is a plus
  • Experience with Oracle, SQL Server, MySQL, Netezza, or Big Data platforms
143

Hadoop Lead Resume Examples & Samples

  • The responsibilities include, but are not limited to, the following
  • Design, deploy, maintain & upgrade Hadoop & Unix ecosystem components
  • Configuration and management of security for Hadoop clusters
144

Principal Hadoop SRE Resume Examples & Samples

  • You will be responsible for designing, building, maintaining, and scaling production services and server farms across multiple data centers for complex and data-intensive cloud services
  • You will design and enhance software architecture to improve scalability, service reliability, capacity, and performance
  • You will write automation code for provisioning and operating infrastructure at massive scale (see the monitoring sketch after this list). You are not an operator; you are an experienced software engineer focused on operations
  • You will work with development teams to make sure the applications fit nicely within the infrastructure and scalability/reliability is designed and implemented from the ground up. You will work with QA on building pipelines and automation for delivering and deploying applications to production
  • You write postmortem reviews and remediation recommendations
  • Strong sense of architecture and design for fault tolerance, scale, and stability
  • Strong development/automation skills. Must be very comfortable with reading and writing Python code. Java is a plus
  • 10+ years of Unix/Linux experience (shell/tools/kernel/networking)
  • Subject matter expert in one of these areas: Big Data (Hadoop 2.x, Kafka, Spark, HBase, Elasticsearch) or Data Center Virtualization (Containers, Mesos, OpenStack, SDN)
  • Experience with Configuration Management and CI/CD. Salt and Jenkins preferred
  • Familiar with middleware software such as Nginx, HA Proxy, RabbitMQ, and typical AWS components, as building blocks of implementing services
  • Knowledgeable about collecting metrics, measuring systems and interpreting data to make decisions
  • Organized, focused on building, improving, resolving and delivering. Good communicator in and across teams, taking the lead
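  As a sketch of the Python automation described above, the snippet below polls the NameNode JMX endpoint and flags basic HDFS health problems. It assumes the NameNode web UI at namenode:50070 (9870 on Hadoop 3.x); bean and metric names can vary between Hadoop versions, so treat them as illustrative.

    #!/usr/bin/env python3
    """Minimal sketch: poll NameNode JMX and flag basic HDFS health problems.

    Assumes the NameNode web UI at namenode:50070 (9870 on Hadoop 3.x);
    exact bean/metric names can differ between Hadoop versions.
    """
    import sys
    import requests

    JMX_URL = ("http://namenode:50070/jmx"
               "?qry=Hadoop:service=NameNode,name=FSNamesystemState")

    def check_hdfs_health() -> int:
        beans = requests.get(JMX_URL, timeout=10).json().get("beans", [])
        if not beans:
            print("UNKNOWN: FSNamesystemState bean not found")
            return 3
        state = beans[0]
        dead = state.get("NumDeadDataNodes", 0)
        used_pct = 100.0 * state.get("CapacityUsed", 0) / max(state.get("CapacityTotal", 1), 1)
        if dead > 0 or used_pct > 90:
            print(f"CRITICAL: dead_datanodes={dead} capacity_used={used_pct:.1f}%")
            return 2
        print(f"OK: dead_datanodes={dead} capacity_used={used_pct:.1f}%")
        return 0

    if __name__ == "__main__":
        # Nagios-style exit codes so the check can be wired into existing alerting.
        sys.exit(check_hdfs_health())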
145

Datastage, Quality Stage, Hadoop Resume Examples & Samples

  • Strong development experience in Datastage 8.5 or higher
  • Good knowledge of Quality Stage
  • Working knowledge of HADOOP components is preferable
146

Network Big Data CoE, Hadoop Specialist Resume Examples & Samples

  • Apply technical experience and leadership in technical solutions design and development (e.g., reliability, scalability, integrity, cost effectiveness, security, availability, ease of maintenance and operations etc.)
  • Participate in analyzing the various Hadoop, Streaming and IOT cluster infrastructures to identify optimization opportunities and/or provide and execute upgrade recommendations
  • Act as project manager for small and medium projects when the scope focuses on technical operational implementations
  • Act as deployment prime when implementing OS, security and administration solutions into the various environments after ensuring that system tests were completed successfully
  • Ensure the end-to-end technical integrity of the designed solutions
  • Responsible for the security, passwords and logins of the various clusters
  • Able to proactively plan his/her work over multiple timeframes (week, month, year) and juggle multiple priorities and deliver as per commitments
  • Capable of evaluating, without supervision, the effort and time required to complete a deliverable and/or task through collaboration, teamwork, honesty, commitment and respect
  • Wireless/Telecom Operations and Engineering business Knowledge including basic understanding of Radio access, Core network and Value added Services technologies and configurations
147

Hadoop System Administrator, Pinsight Media Resume Examples & Samples

  • Hadoop - Hortonworks; Cluster tuning, planning & deployment; Data access control; Service management, health monitoring and execution; Software updates
  • Scripting experience - Python; Bash
  • Cross functional, multi-role job responsibilities
  • Self-motivated, detail-oriented, strong organizational skills, with a methodical approach to all tasks
  • Able to succeed in a fast-moving, high-demand environment and willing to "roll up your sleeves" when necessary
  • Excellent communication and writing experience
  • Adaptable and willing to approach tasks with velocity and a high metabolism
  • Excitement and willingness to learn new technologies and tackle BIG issues
  • Bachelor's degree and four years related work experience or eight years related work experience post high school
  • Three years experience working with the identification and adoption of new, emerging technologies
  • Three years project management and project implementation experience
  • Postgres database
  • Advanced Statistics/Mathematics
  • Business analyst/intelligence
  • Ability to automate software/application installations and configurations hosted on Linux servers
148

Hadoop Specialist Resume Examples & Samples

  • Ensure that solutions are optimally designed, so that applications meet their defined SLAs, reuse of existing structures is maximized and operational costs are minimized
  • To ensure that all infrastructure and environmental changes are appropriately tested and implemented in such a manner that there is no impact to RBC systems and clients
  • To provide related technical support for development and production environments, including the required level of 24x7 on-call support
149

Hadoop Internals Engineer, Cloud Platform Resume Examples & Samples

  • Architect, engineer, test, document, and deploy the core systems of the BDS Hadoop as a Service offering
  • Improve and extend core components of the Hadoop ecosystem (HDFS, YARN, Hive, Pig, Oozie, etc.) to work at scale on our multi-tenanted cluster
  • Engage with customers to help them maximize the use of our services as part of a small and growing agile team
  • Obsessive about great engineering and Open Source
  • Expert level proficiency in one or more of the following languages: Java, Ruby, C/C++, Python
  • DevOps experience a plus
  • Proficiency in one or more of the following languages: Java, Ruby, C/C++, Python
  • Experience with build-chain tools, such as Maven, Jenkins, Git
  • Knowledge of SQL & No-SQL databases
150

Hadoop Internals Engineer Resume Examples & Samples

  • Work with Product Management to understand, design, and implement core features and customer-driven enhancements
  • Deep understanding and experience with Hadoop internals (MapReduce (YARN), HDFS), Pig, HCatalog, Oozie, Hive, HBase; Hadoop committer a big plus
  • In-depth understanding of two or more of
  • System schedulers, data storage and management, virtualization, workload management, availability, scalability and distributed data platforms, security infrastructure
  • BEng/BA in Computer Science, related field or equivalent experience; Masters and PhD a plus
  • Proficiency with Linux, virtual machines, and open source tools/platforms
151

Lead Hadoop Systems Administrator Resume Examples & Samples

  • Experience with installation, support and management of Hadoop Ecosystem and Linux Systems Programming
  • Strong coordination and project management skills
  • Working understanding of the Cloudera (Hadoop) Distribution, specifically the maintenance and support of the Distribution
  • Working understanding of one or more Advanced Analytics tools, specifically the maintenance and support of the tool(s) in a Hadoop environment
  • Bachelor's in Computer Science or related discipline
152

Senior Manager, Hadoop Center of Excellence Resume Examples & Samples

  • Defining an actionable plan following the 2017 EDL roadmap
  • Working with the stated stakeholders to execute, monitor and report on the plan
  • Lead (direct or matrix) a technical team to execute the plan, including providing technical guidance and oversight
  • Manage and mitigate risks and issues associated with the plan
  • Coordinate activities across multiple teams and ensure deliverables meet timelines
  • Monitor, manage and mitigate risks and issues related to individual work streams and the overall program
  • Monitor, track, and report progress - prepare project dashboards and status reports, including executive dashboards
  • Meet financial objectives by forecasting requirements, preparing budgets, scheduling expenditures, analyzing variances; initiating corrective actions
  • Collaborate with IT and business partners across the bank to manage the EDL Intake process
  • Provide technical leadership and guidance to the team, drawing on expertise from subject matter experts, including vendor partners
  • Maintain effective vendor relationship with key vendor partners in plan execution and advancing the EDL platform
  • Solid understanding of software development life cycle models as well as expert knowledge of both Agile and traditional project management principles and practices
  • Strong interpersonal skills including mentoring, coaching, collaborating, and team building
  • Strong analytical, planning, and organizational skills with an ability to manage competing demands
  • Excellent oral and written communications skills and experience interacting with both business and IT individuals at all levels including the executive level
  • Creative approach to problem-solving with the ability to focus on details while maintaining the “big picture” view
  • Strong technical savvy with good understanding of the Big Data/Hadoop ecosystem
153

Java / Hadoop With Data Science Resume Examples & Samples

  • Analyze business and technical requirements, as well as existing data models and reports, to understand the detailed data management requirements
  • Conduct data profiling, data cleansing, and specify business and data rules for automation
  • Create conceptual, logical and physical data and metadata models to represent data
  • Propose solution to the business and take the responsibility to follow up its implementation
  • Interpret and assist clients with the design of data governance strategies for business requirements and enhance the maturity of the organization
  • Navigate through the client's organization and coordinate with other departments to build big data IT processes
  • Coordinate with data source departments and manage data sourcing and data integration processes
154

Hadoop / Java Developer Resume Examples & Samples

  • Participate in development of data store as a developer
  • Rapid prototyping and evaluate open source frameworks and other in house solutions. Make technology recommendations
  • Familiarity with Hadoop ecosystem
  • NoSQL DB knowledge (Mongo, Redis, others)
  • Relational databases (e.g. Oracle PL/SQL development)
155

Hadoop Resume Examples & Samples

  • Evaluate requirements for their relevancy to and impact on existing models and reference architecture
  • Coordinate activities with other full time BI COE Architecture team members to facilitate a cohesive and coherent project solution, while maturing the overall BI COE architecture
  • 5-7 years of experience in project related work (systems analysis and a systems development environment, or an equivalent combination of education and experience)
  • 2-3 years of project management experience required
  • 3-5 years of data warehousing/business intelligence implementation and operations experience
  • Knowledge of SAP HANA, Hadoop or equivalent data warehousing technologies
  • Certification: Technical certification e.g. Cisco, VMWare; or achieved professional certification TOGAF 9 level 1; or has / seeking IAF level 1
  • Should have experience in Functional Architecture Design and Architecture Knowledge, Negotiating, Financial Analysis and Continuous (Service) Improvement
  • Should be proficient in Business Knowledge, Software Engineering Leadership, Technical Solution Design and Change Management
156

Expert Analytics Developer for Hadoop Analytics Platform & Applications Resume Examples & Samples

  • Perform Trouble Shooting and Customer Support
  • Design and drive the implementation of data platform functional and non-functional requirements, and MapR, HDFS/HBase applications working closely with system engineer
  • Be an owner of the implemented functionality: from design and implementation to production troubleshooting
  • Work closely together with the design team members and other teams in the organization (integration and verification team, product documentation)
  • Be responsible for the quality of the technical solution
  • Master/BSc level in a technical discipline
  • Solid experience with Big Data Technologies and related languages: Spark, Scala, Python
  • Tableau expertise to help create new use cases
  • Solid experience with Java, JEE, database technologies, web technologies
  • Solid JBOSS expertise
  • Solid C, C++ expertise
  • Solid experience with access to databases (Greenplum) and MapR (HBase)
  • Experience in designing and developing probes, protocol parsers is an advantage
  • Experience working in a collaborative environment using agile methodology is desirable
  • Telecommunication domain knowledge is a strong advantage
157

Hadoop Module Lead Resume Examples & Samples

  • Hadoop : HDFS, MapReduce, Hive, HBase, Pig, Mahout, Avro, Oozie
  • Languages : Java, Linux, Apache, Perl/Python/PHP
  • Databases : MySQL/other RDBMS
  • 4 - 6 Years
158

Hadoop Designer / Developer Resume Examples & Samples

  • Degree in Computer Science/ Engineering/Science/Math
  • 5-6 years of IT experience
  • Strong ETL and UNIX skills
  • Experience with Hadoop, DMX-h, Hive and Sqoop a must
  • Strong knowledge of DTL coding in DMX-h
  • Experience in creating techniques for masking data (a minimal masking sketch follows this list)
  • Experience working on UNIX & ETL Processing
  • At least 5 years of project management experience working with large scale mainframe and distributed applications
  • Responsible for completing all project deliverables and documentation (issue/risk logs, status reports, requirements documentation, design documentation, test plans, test scripts, etc.)
  • Responsible for weekly executive level reporting outlining project timelines, risks, issues, financials
  • Proficiency with Hadoop and DMX-h tool with a minimum of 4 years of development experience
  • Must have solid experience with extracting data from different data sources on Mainframe and Distributed systems
  • Experience in generating, interpreting mapping documentation and translating into detailed design specifications on Hadoop
  • Experience on performing detailed design, development, unit testing and deployment of Hadoop projects
  • Experience with data mapping, data extract, transformations, reusability, availability of ETL load and ETL reconciliation processes
  • Recent experience with high volume databases and/or data warehouses
  • Strong understanding of tools such as Autosys and Subversion
  • Highly developed analytical and problem solving skills, and meticulous attention to detail including the ability to analyze data from different platforms such as Mainframe, Oracle, DB2, Teradata etc
  • Test Data Management Experience
  • Technical and functional knowledge of Hadoop Ecosystem
  • SQL Knowledge (Impala/HiveQL/Teradata)
  • Experience in MapReduce, Sqoop, Flume and Kafka
  • Ability to effectively manage in a cross-matrix environment with Global resources
  • Strong interpersonal skills and communication techniques; written and verbal
  • Ability to monitor and effectively manage Data Gateway Service job/problems and drive process and system improvements to enhance efficiency
  • Ability to communicate crisply and candidly, both orally and in writing
  • Tools: MS Office, Visio
  • Knowledge of core banking functions, processes and operations
  • Ability to manage multiple priorities and projects
  • Detail-oriented with strong analytical skills
  • Ability to drive collaboration and manage conflict
  • Ability to adapt to frequent changes
  • Potential to work on multiple projects/efforts simultaneously
  • Cloudera Hadoop experience/Certification
  • Knowledge of data warehousing concepts and technologies in Big Data
  • CA7 and/or Autosys scheduling knowledge
  • Exposure to banking and financial industry
  • PPRT, Nexus, SMART, MS Project, Visio
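  As an illustration of the data-masking item above, here is a minimal sketch that deterministically tokenises sensitive columns with a keyed hash so joins on masked values still line up across test data sets. The column names and key handling are illustrative only.

    #!/usr/bin/env python3
    """Minimal data-masking sketch: keyed hashing of sensitive CSV columns.

    Column names and the secret key below are illustrative; in practice the
    key would come from a managed secret store.
    """
    import csv
    import hashlib
    import hmac
    import sys

    SECRET_KEY = b"replace-with-a-managed-secret"          # assumed external secret
    SENSITIVE_COLUMNS = {"account_number", "ssn", "email"}  # hypothetical columns

    def mask(value: str) -> str:
        """Return a stable, irreversible token for a sensitive value."""
        return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

    def mask_file(src: str, dst: str) -> None:
        with open(src, newline="") as fin, open(dst, "w", newline="") as fout:
            reader = csv.DictReader(fin)
            writer = csv.DictWriter(fout, fieldnames=reader.fieldnames)
            writer.writeheader()
            for row in reader:
                for col in SENSITIVE_COLUMNS & set(row):
                    row[col] = mask(row[col])
                writer.writerow(row)

    if __name__ == "__main__":
        mask_file(sys.argv[1], sys.argv[2])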
159

Hadoop Application Support Lead Resume Examples & Samples

  • End-to-end Incident and Problem resolution for responsible LOBs
  • Responsible to ensure that the applications meet their required service levels
  • Responsible to support process improvements to continuously improve the stability and performance of the applications
  • Identify system bottlenecks and opportunities for processing improvements
  • Application wellness
  • Prioritize workload, provide timely and accurate resolutions
  • Perform in-depth research and identify sources of production issues
  • Effectively perform root cause analysis of issues and report the outcome to business community and Management
  • Develop / utilize core support tools and processes to perform work while improving day-to-day practices for support team members with the goal of delivering service improvements to the business
  • Coordinate and work with the global production support team
  • Application Monitoring, Application Tuning
  • Lead and/or Manage production support team to deliver the services
  • Strong sense of ownership - Feeling of personal accountability for all areas directly or indirectly supporting the business/service area for which they are responsible. Willingness to drive people on all sides of an issue to a common understanding and then drive them toward resolution
  • Communication – Should be able to clearly communicate ideas in technical or business terms with senior business leadership, leadership outside the LOB (internal or external), their peers across IT, as well as their team
  • Leadership - Assess a situation, prioritize requirements, and then go out to any IT team, including both development and infrastructure, and get support as needed
  • Self Directed - Has a strong sense of self and purpose. Understands tasks and role, and does not need daily reassurances, yet maintains an open dialogue with all of the stakeholders
  • Bachelor's/Master's Degree in Computer Science, related field or Equivalent work experience
  • 10+ years of professional experience in the field of information and technology
  • 5+ years of experience in Data Management/ data warehouse
  • 2 years hands on experience with Hadoop
  • 5-6 years of Java experience
  • 4-5 years' experience with Linux and database integration
  • Experience in multi-threaded application
  • Experience in SQL debugging skills
  • Experience with UNIX scripting
  • Strong interpersonal relationship and communication skills
  • Ability to multi-task /change focus quickly
160

Senior Software Engineer Hadoop Resume Examples & Samples

  • The big data universe is expanding rapidly. This position demands the best and the brightest software engineers who are passionate about Hadoop, Java and related technologies, and building large scale applications utilizing them! You should have several years of experience developing and/or using Hadoop. The ideal candidate will bring a lot of smarts, energy, initiative and excitement! Will be ready to learn and explore new ideas, processes, methodologies and leading edge technologies. Have a “hacker” mentality toward building solutions and problem-solving. It requires strong CS fundamentals, system knowledge and Java system programming skills
  • The following skills/experience are required
  • At least 8 years of experience in IT industry
  • At least 4 years of experience in developing and/or using Hadoop and related products
  • Strong education in Computer Science, Databases, Algorithms, Operating Systems, Networking, etc
  • Strong knowledge of Systems/Applications Design & Architecture
  • Strong object-oriented programming and design experience in Java
  • Experience with processing large amounts of data
  • Experience in handling architectural and design considerations such as performance, scalability and security issues
161

Principal Hadoop Java Developer Resume Examples & Samples

  • Development work related to GLRI, which will include data sourcing, data tagging, reporting and the building out of data sets, as well as automated reconciliation and data governance processes
  • Application development to support new features, requests, fixes, etc, for the existing development queue and new projects as needed
  • Be able to read, understand and perform fixes to code in support of daily regulatory reports
  • Work on new business, both project related and course of business
  • Experience working within global teams across different locations
  • Experience working with large volumes of data within a data warehouse
  • Working knowledge of data warehouse technical concepts and designs
  • Working knowledge of technical concepts and designs for business intelligence & adhoc/query reporting
  • Java and UNIX/Linux background for integration of Hadoop and Jaspersoft reporting into other areas of BNY Mellon
  • Working knowledge of Microsoft SQL Server and SSIS stack
  • Advanced skills working with Microsoft Excel
  • Financial industry knowledge, especially as it relates to a balance sheet, is preferred. This includes the types of instruments held by a banking institution for its day to day operations
162

Hadoop SRE Resume Examples & Samples

  • Experience supporting hosted services in a high-volume customer facing environment
  • Dev-ops skills in Java, Python, Ruby or UNIX shells, C, C++
  • Experience with Hadoop, Cloudera, Pig, Splunk or other big data frameworks
  • Experience with Puppet, Chef, or Ansible for configuration management
  • Experience with relational databases such as SQL, DB2, HBase
  • Background building distributed, server-based infrastructure supporting a high volume of transactions in a mission critical environment
  • Excellent communication skills and ability to work closely with operations and development teams
  • Demonstrated ability to work on small, focused teams to complete critical milestones under pressure with tight deadlines
  • Drive to take initiative and own issues
163

Hadoop Technology Development Strategist Resume Examples & Samples

  • Two years with Hadoop - Hortonworks; Cluster tuning, planning & deployment; Data access control; Service management, health monitoring and execution; Software updates
  • Three years of Linux/Unix heavy usage and/or administration
  • DBA experience - SQL Database; Data manipulation
  • NoSQL Experience - Cassandra; MongoDB
  • Java Developer Experience (intermediate level)
  • Experience and comfort operating within an agile SCRUM methodology
  • One year experience working with the identification and adoption of new, emerging technologies
  • One year project management and project implementation experience
  • Telecom data
  • Puppet or Chef
  • Advertising industry
  • Network and system administration experience, preferably in large-scale data center environments
164

Hadoop Developer Resume Examples & Samples

  • Responsible for the design, development, implementation and administration on the Hadoop ecosystem
  • Loading and processing from disparate data sets using appropriate technologies including but not limited to, Hive, Pig, MapReduce, HBase, Spark, Storm and Kafka
  • Perform analysis of vast data stores to uncover insights
  • Managing and deploying HBase
  • Maintains and advances knowledge of Big Data technologies and concepts and advances overall team capabilities
  • B.S. degree in Computer Science, Information Technology, or equivalent experience
  • 5+ years Business Intelligence/Data Warehouse experience
  • Minimum 2-3 years' experience developing and managing Hadoop environments, and proven expertise in multiple Big Data technologies
  • Advanced Linux skills
  • Strong understanding of algorithms, data structures and performance optimization techniques
  • Ability to understand business requirements and leverage technology to provide solutions
165

Data Scientist With Hadoop Experience Resume Examples & Samples

  • Prepare presentations and reports on findings
  • Provide support in the development of solutions and preparation of RFP responses
  • Hands-on work using Excel, PowerPoint, etc. on a daily basis
  • Identify reusable assets
  • Review any best practices / innovations as circulated within the group
  • Participate and network with Community of Practice to discuss/ resolve any business problems as faced during projects
166

Hadoop / Big Data Developer Resume Examples & Samples

  • Expertise in Hadoop ecosystem products such as HDFS, MapReduce, Hive, AVRO, Zookeeper
  • Experience of Business Intelligence - Ideally via Tableau/Microstrategy using Hive
  • Experience with data mining techniques and analytics functions
167

Data Scientist With Hadoop / SAS Experience Resume Examples & Samples

  • Work at a client site or in a home office with a team of 1-3 associates developing and applying data mining methodologies
  • Member of onsite/near-site consulting team
  • Coordination with external vendors and internal brand teams
  • Recommend next steps to ensure successful project completion and to help team penetrate client accounts
  • Outlining and documenting methodological approaches
  • Keep up to date on latest trends, tools and advancements in the area of analytics and data
  • Identify Project level tools or other items to be built for the Project
168

Hadoop Operations Manager Resume Examples & Samples

  • At least 6 years of experience in engineering, system administration and/or Devops
  • At least 4 years of experience in designing, implementing and administering Hadoop
  • Managing 24x7 shifts with the onsite/offsite engineers, responding to PagerDuty alerts
  • Experience working within the requirements of a change management system
  • Proven ability to adapt to a dynamic project environment and manage multiple projects simultaneously
  • Proven ability to collaborate with application development and other cross functional teams
  • Ability to coach and provide guidance to junior team members
  • Understanding of HDFS
  • Understanding of YARN
  • Using Ambari in administering Hadoop
  • Experience in administering clusters larger than 6 PB or 200 DataNodes
  • Knowledge in bash shell scripting to automate administration tasks
  • Onboarding new users to Hadoop
  • Maintaining SOX compliance
  • Experience in writing HQL (Hive Queries)
  • Understanding of Hive metadata store objects
  • Monitoring Linux host health in Ganglia and responding to Nagios alerts/Pager alerts
  • Experience in capacity planning for the big data infrastructure (see the capacity-reporting sketch after this list)
  • Providing optimization tips to the ETL team about efficient methods of performing operations on the Hadoop platform (Hive)
  • Involvement on Open source products/technologies development is a great plus
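  As a sketch of the shell automation and capacity planning described above, the snippet below reports the largest Hive warehouse databases so growth can be tracked over time. It assumes the warehouse sits under /apps/hive/warehouse and the hdfs client is on the PATH; the exact -du output format differs slightly between Hadoop releases.

    #!/usr/bin/env python3
    """Minimal capacity-planning sketch: report the largest warehouse databases.

    Assumes the Hive warehouse lives under /apps/hive/warehouse and the hdfs
    client is on PATH; -du output format varies slightly across releases.
    """
    import subprocess

    WAREHOUSE = "/apps/hive/warehouse"  # assumed warehouse location

    def database_sizes():
        out = subprocess.run(
            ["hdfs", "dfs", "-du", WAREHOUSE],
            capture_output=True, text=True, check=True,
        ).stdout
        sizes = []
        for line in out.splitlines():
            fields = line.split()
            if len(fields) >= 2:
                sizes.append((int(fields[0]), fields[-1]))  # (bytes, path)
        return sorted(sizes, reverse=True)

    if __name__ == "__main__":
        for size, path in database_sizes()[:10]:
            print(f"{size / 1024**3:10.1f} GiB  {path}")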
169

Hadoop Admin Resume Examples & Samples

  • Demonstrated knowledge/experience in all of the areas of responsibility provided above
  • General operational knowledge such as good troubleshooting skills, understanding of system's capacity, bottlenecks, basics of memory, CPU, OS, storage and networks
  • Must have knowledge of Red Hat Enterprise Linux Systems Administration
  • Must have experience with Secure Hadoop - sometimes called Kerberized Hadoop - using Kerberos
  • Knowledge in configuration management and deployment tools such as Puppet or Chef and Linux scripting
  • Must have fundamentals of central, automated configuration management (sometimes called "DevOps.")
  • Providing expertise in provisioning physical systems for use in Hadoop
  • Installing and Configuring Systems for use with Cloudera distribution of Hadoop (consideration given to other variants of Hadoop such as Apache, MapR, Spark, Hive, Impala, Kafka, Flume, etc.)
  • Administering and Maintaining Cloudera Hadoop Clusters
  • Provision physical Linux systems, patch, and maintain them
  • Performance tuning of Hadoop clusters and Hadoop Map Reduce/Spark routines
  • Management and support of Hadoop Services including HDFS, Hive, Impala, and SPARK. Primarily using Cloudera Manager but some command-line
  • Red Hat Enterprise Linux Operating System support including administration and provisioning of Oracle BDA
  • Answering trouble tickets around the Hadoop ecosystem
  • Integration support of tools that need to connect to the OFR Hadoop ecosystem. This may include tools like Bedrock, Tableau, Talend, generic ODBC/JDBC, etc
  • Provisioning of new Hive/Impala databases (a provisioning sketch follows this list)
  • Provisioning of new folders within HDFS
  • Setting up and validating Disaster Recovery replication of data from Production cluster
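  To illustrate the database and folder provisioning items above, here is a minimal sketch. It assumes the hdfs and beeline clients are on the PATH, a HiveServer2 endpoint at hive-server:10000, and that the /data layout, group name and quota are illustrative conventions.

    #!/usr/bin/env python3
    """Minimal provisioning sketch: new HDFS folder plus Hive/Impala database.

    Assumes hdfs and beeline on PATH and HiveServer2 at hive-server:10000;
    the path convention, group and quota below are illustrative.
    """
    import subprocess

    JDBC_URL = "jdbc:hive2://hive-server:10000/default"  # assumed endpoint

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    def provision(db_name: str, owner_group: str, quota: str = "5t") -> None:
        path = f"/data/{db_name}"                      # assumed layout convention
        run(["hdfs", "dfs", "-mkdir", "-p", path])
        run(["hdfs", "dfs", "-chown", "-R", f"hive:{owner_group}", path])
        run(["hdfs", "dfsadmin", "-setSpaceQuota", quota, path])
        ddl = f"CREATE DATABASE IF NOT EXISTS {db_name} LOCATION '{path}'"
        run(["beeline", "-u", JDBC_URL, "-e", ddl])
        # Impala sees the new database after its catalog is refreshed
        # (e.g. INVALIDATE METADATA).

    if __name__ == "__main__":
        provision("marketing_sandbox", "marketing_analysts")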
170

Enterprise Technology Consultant Hadoop & Enterprise Cloud Resume Examples & Samples

  • Provide thought leadership and technical consultation to the sales force on innovative Big Data solutions, including Hadoop and other relational and non-relational data platforms. This position will also work on the AWS, Azure and Teradata proprietary cloud
  • Assume a leadership role in selecting and configuring Teradata technology for field personnel
  • Provide technical support and answer complex technical questions, including input to RFPs, RFQs and RFIs
  • Participation in account planning sessions
  • Train field technical personnel on Teradata Hadoop and cloud technology
  • Liaise with development personnel to facilitate development and release activities
  • Validate proposed configurations for manufacturability, conformance to configuration guidelines, and acceptable performance
  • Coordinate with IT and other stakeholders to support process and system improvement
  • Adept at operating in the large enterprise Linux/Unix space
  • Able to assist sales team in translating customer requirements into database, Hadoop or Cloud solutions
  • Understand and analyze performance characteristics of various hardware and software configurations, and assist the sales teams in determining the best option for the customer
  • Apply knowledge and judgment in providing sales teams with validated customer configurations for submission to the Teradata Order System, and evaluate configurations for accuracy, completeness, and optimal performance
  • Identify and engage the correct technical resource within Teradata Labs to provide answers to questions asked by field personnel
  • Learn and independently use on-line tools to perform tasks, i.e., configuration tools, performance tools, data repositories, and the configuration and ordering tool
  • Work well with others and communicate effectively with team members, Teradata Labs personnel, field sales personnel, and customers
  • Understand when it is appropriate to inform and involve management regarding roadblocks to success or issue escalations
  • B.S. in Computer Science or related fields
  • Field technical experience in the large enterprise segment in the Linux/Unix space. Experience with sales process, sales teams, and customer facing positions is a plus
  • Knowledge of relational databases and non-relational databases: Hadoop experience is a premium
  • Broader technical knowledge of servers, storage and cloud
171

Enterprise DBA With Hadoop Resume Examples & Samples

  • Work with database vendors to resolve database software bugs
  • Architect and develop ETL solutions across platforms
  • Understand application and system architecture
  • Troubleshoot OS Level event logs as well as database platform logging in resolving complex issues
  • Utilize multiple scripting languages such as PowerShell/Python/BASH shell scripts to administer and perform daily tasks
  • Handle intermediate level script development for automation
  • Multiple Hadoop components
  • Multiple NoSQL engines
  • Non-database open source technologies
172

Hadoop Data Architect Resume Examples & Samples

  • Good working experience of Microsoft Office suite
  • High degree of Motivation and adaptability
  • Influence skills to work with development teams whose short-term goals may differ
  • Exposure to Big data is a must
173

Hadoop Cluster Admin Resume Examples & Samples

  • Responsible for implementation and support of the Enterprise Hadoop environment
  • Involves designing, capacity planning, cluster setup, monitoring, structure planning, scaling and administration of Hadoop components (YARN, MapReduce, HDFS, HBase, ZooKeeper, Storm, Kafka, Spark, Pig and Hive)
  • Work closely with infrastructure, network, database, business intelligence and application teams to ensure business applications are highly available and performing within agreed on service levels
  • Strong Experience with Configuring Security in Hadoop using Kerberos or PAM
  • Evaluate technical aspects of any change requests pertaining to the Cluster
  • Research, identify and recommend technical and operational improvements resulting in improved reliability efficiencies in developing the Cluster
174

Hadoop / Bigdata Admin Resume Examples & Samples

  • Strong understanding of the Hadoop ecosystem including HDFS, MapReduce, Hadoop Streaming, Flume, Sqoop, Oozie, Hive, HBase, Solr and Kerberos
  • Deep understanding and experience with Cloudera CDH 5.7 version and above Hadoop stack
  • Apache Kafka & Impala Experience
  • Responsible for cluster availability and being available for 24x7 support
  • Knowledge of Ansible & how to write the Ansible scripts
  • Unix/Linux commands and shell scripting
  • CAWA – CA Workload Automation
  • Familiarity with open source configuration management and deployment tools such as Ambari and Linux scripting
  • Knowledge of troubleshooting Core Java applications is an added advantage
175

Java Hadoop Tech Lead Resume Examples & Samples

  • 8+ years' hands-on experience designing, building and supporting high-performing J2EE applications
  • 5+ years’ experience using Spring and Hibernate, TOMCAT, Windows Active Directory
  • Strong experience developing the Web Services and Messaging Layer using SOAP, REST, JAXB, JMS, WSDL
  • 3+ years' experience using Hadoop, especially Hortonworks Hadoop (HDP)
  • Expertise with HDFS, HCatalog, Hive
  • Good understanding of Knox, Ranger, Ambari and Kerberos
  • Expertise with Java Hadoop REST APIs (a minimal WebHDFS sketch follows this list)
  • Experience with database technologies such as MS SQL Server, MySQL, and Oracle
  • Experience with unit testing and source control tools like GIT, TFS, SVN
  • Expertise with web and UI design and development using Angular JS, Backbone JS
  • Expertise with HTML5 and CSS
  • Strong Linux shell scripting and Linux knowledge
  • Knowledge on TIBCO Spotfire is a plus
  • Self-starter and self-organizer
  • Code reviews/ensure best practices are followed
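  As an illustration of the Hadoop REST API item above, the following minimal sketch lists a directory through WebHDFS. It assumes the NameNode web UI at namenode:50070 (9870 on Hadoop 3.x) and simple auth via user.name; on a secured HDP cluster the call would normally go through Knox with Kerberos/SPNEGO instead.

    #!/usr/bin/env python3
    """Minimal WebHDFS sketch: list a directory over the REST API.

    Assumes the NameNode web UI at namenode:50070 and simple auth via
    user.name; a Kerberised cluster would use Knox/SPNEGO instead.
    """
    import requests

    WEBHDFS = "http://namenode:50070/webhdfs/v1"  # assumed endpoint

    def list_status(path: str, user: str = "hdfs"):
        resp = requests.get(f"{WEBHDFS}{path}",
                            params={"op": "LISTSTATUS", "user.name": user},
                            timeout=10)
        resp.raise_for_status()
        return resp.json()["FileStatuses"]["FileStatus"]

    if __name__ == "__main__":
        for entry in list_status("/user"):
            print(entry["type"], entry["pathSuffix"], entry["length"])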
176

Hadoop / Grid Project Manager Resume Examples & Samples

  • This person will need to have had exposure and worked on projects involving Hadoop or GRID computing
  • 10+ years project management experience in Large Enterprise environment
  • PowerPoint presentation skills - will build presentations around the people/process improvements they have recommended and present them to senior leadership
  • Manages projects end to end; a team-minded and determined individual
  • *This position cannot be remote. Must be able to work on a W2 basis only, without sponsorship
177

Hadoop Support Engineer Resume Examples & Samples

  • Troubleshoot problems encountered by customers
  • File bug reports and enhancement requests as appropriate
  • Work with our issue tracking and sales management software
  • Participate in new product introduction
178

Hadoop System Analyst Lead Big Data Resume Examples & Samples

  • Partners with product owner(s) to review business requirements and translates them into user stories and manages healthy backlog for the scrum teams
  • Works with various stakeholders and contributes to producing technical documentation such as data architecture, data modeling, data dictionary, source-to-target mapping with transformation rules, ETL data flow design, and test cases
  • Discovers, explores, performs analysis and documents data from various sources with different formats and frequencies into Hadoop to better understand the total scope of Data Availability at Workforce technology
  • Participates in the Agile development methodology actively to improve the overall maturity of the team
  • Helps identify roadblocks and resolve dependencies on other systems, teams, etc
  • Collaborate with big data engineers, data scientists and others to provide development coverage, support, and knowledge sharing and mentoring of junior team members
  • Escalate issues and concerns as needed on time
  • Must have a passion for the Big Data ecosystem and understand structured, semi-structured and unstructured data well
  • The individual must have 10+ years of diversified experience overall in analyzing and developing applications using Java, ETL, RDBMS or any big data stack of technologies
  • 5+ years of experience working in such technical environments as system analyst
  • 3+ Years of experience into Agile (Scrum) methodology
  • 3+ years of hands-on experience with data architecture, data modeling, database design and data warehousing
  • 3+ years of hands-on experience with SQL development and query performance optimization
  • 3+ years of hands-on experience with traditional RDBMS such as Oracle, DB2, MS SQL and/or PostgreSQL
  • 2+ years of experience working with teams on Hadoop stack of technologies such as Map Reduce, Pig, Hive, Sqoop, Flume, HBase, Oozie, Spark, Kafka etc
  • 2+ years of experience in data security paradigm
  • Excellent thinking, verbal and written communications skills
  • Strong estimating, planning and time management skills
  • Strong understanding of noSQL, Big Data and open source technologies
  • Ability and desire to thrive in a proactive, highly engaging, high-pressure environment
  • Experience with developing distributed systems, performance optimization and scaling
  • Experience with agile and test driven development, Behavior Driven Development methodologies
  • Familiarity with Kafka, Hadoop and Spark desirable
  • Basic exposure to Linux, experience developing scripts
  • Strong analytical and problem-solving skills are a must
179

DNA Fs-technology Lead-hadoop Resume Examples & Samples

  • At least 2 years of experience in Project life cycle activities on DW/BI development and maintenance projects
  • At least 3 years of experience in Design and Architecture review
  • At least 2 years of hands-on experience in design, development and build activities in the Hadoop framework and Hive, Sqoop, Spark projects
180

Technology Lead Hadoop Resume Examples & Samples

  • At least 4 years of experience with Big Data / Hadoop
  • 4+ years of strong coding skills in JAVA
  • 2+ years of experience in an ETL tool, with hands-on HDFS experience working on a big data Hadoop platform
  • 2+ years of experience implementing ETL/ELT processes with big data tools such as Hadoop, YARN, HDFS, PIG, Hive
  • 1+ years of hands on experience with NoSQL (e.g. key value store, graph db, document db)
  • 2+ years of solid experience in performance tuning, Shell/perl/python scripting
  • Experience with integration of data from multiple data sources
  • Knowledge of various ETL techniques and frameworks
  • 3+ years of experience in Project life cycle activities on development and maintenance projects
  • Ability to work in team in diverse/ multiple stakeholder environments
  • Experience in Finance and banking domain
  • Experience and desire to work in a Global delivery environment
181

Hadoop Services Administrator Resume Examples & Samples

  • Cloudera Manager - deploy, upgrade, operate
  • Cloudera Kerberisation
  • Hive / Impala / Hbase administration
  • HDFS Data Management - raw, Parquet, Avro, Sequence file types and administration
  • BDR - data replication, snapshot administration
  • Sqoop / Sqoop2 / Flume / Kafka / Teradata connector for Hadoop (Sqoop) administration (a minimal import sketch follows this list)
  • YARN resource management tuning to support workloads
  • Operational Service Management reporting on Cloudera platform operations
  • Hadoop troubleshooting / incident & problem management
  • Data warehouse experience (e.g. Teradata, Greenplum), visualisation tools (e.g. Cognos, SAS VA)
  • Testing skills
  • Experience in managing cloud/datacentre infrastructure (eg. Virtual Private Cloud)
  • Configuration version control (e.g. version control tools such as Git/TFS)
  • Configuration automation / compliance (eg Chef)
  • Hadoop development/automation skills (eg. Java, shell scripting, scala, python, pig)
  • DBA skills (eg. database schemas as relates to Hive/Impala/Avro/Parquet & performance implications)
  • Experience in developing in a range of Hadoop environments - Java, Spark/Scala, Python
  • Candidates with certifications such as Cloudera Certified Administrator for Apache Hadoop (CCAH) will have an advantage
  • This position represents an opportunity to use your experience and skills in a dynamic, complex, organisation where there is ongoing scope for business improvements, span of control and future career progression. Please apply without delay
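  As an illustration of the Sqoop administration item above, here is a minimal import sketch. The JDBC URL, credential handling and target paths are illustrative; it assumes the sqoop client is on the PATH.

    #!/usr/bin/env python3
    """Minimal Sqoop sketch: parallel import of an RDBMS table into HDFS.

    The JDBC URL, service account and paths are illustrative; assumes the
    sqoop client is on PATH.
    """
    import subprocess

    def sqoop_import(table: str, jdbc_url: str, username: str, password_file: str) -> None:
        cmd = [
            "sqoop", "import",
            "--connect", jdbc_url,
            "--username", username,
            "--password-file", password_file,   # keeps secrets off the command line
            "--table", table,
            "--target-dir", f"/data/landing/{table.lower()}",
            "--num-mappers", "4",               # degree of parallelism
            "--as-parquetfile",
        ]
        subprocess.run(cmd, check=True)

    if __name__ == "__main__":
        sqoop_import(
            table="CUSTOMER",
            jdbc_url="jdbc:teradata://tdprod/DATABASE=edw",   # assumed source
            username="etl_svc",
            password_file="hdfs:///user/etl_svc/.pw",
        )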
182

DNA Rcl-technology Lead-hadoop Resume Examples & Samples

  • Responsible for data Ingestion, data quality, development, production code management and testing/deployment strategy in BigData development (Hadoop)
  • Acts as a lead in identifying and troubleshooting processing issues impacting timely availability of data in the BigData platform or delivery of critical reporting within established SLAs. Provides mentoring to the Level 2 production support team
  • Identifies and recommends technical improvements in production application solutions or operational processes in support of BigData platform and information delivery assets (ie, data quality, performance, supporting Data scientists etc.)
  • Focuses on the overall stability and availability of the BigData platform and the associated interfaces and transport protocols
  • Researches, manages and coordinates resolution of complex issues through root cause analysis as appropriate
  • Establishes and maintains productive relationships with technical leads of key operational source systems providing data to the BigData platform
  • Establishes and maintains productive relationships with technical leads of key infrastructure support areas, such as system/Infra engineers
  • Participate in daily standup meetings
  • Ensure adherence to established problem / incident management, change management and other internal IT processes
  • Responsible for communication related to any day-to-day issues and problem resolutions. Communication may include direct department management as well as downstream/upstream application contacts and business partners
  • Ensures comprehensive knowledge transition from development teams on new or modified applications moving to ongoing production support
  • Seeks improvement opportunities in design, solution implementation approaches in partnership with development team for ensuring the performance and health of the BigData platform
  • Ensures timely and accurate escalation of issues to management
183

Hadoop Cluster Administrator Resume Examples & Samples

  • Learn about all the technologies involved in the project like java, shell scripting, Linux administration, Hadoop, databases, security and monitoring
  • Ensure that all clusters and services are always available without performance issues
  • Attend all scheduled meetings and participate actively
  • Ensure that procedures and infrastructure details are properly documented and shared amongst the team
  • Automate daily tasks
  • Prioritize and give proper solutions to users
  • Completed college degree in IT/Computer Science
  • 1 year of Hadoop administrator experience or Linux Server Administrator experience minimum
  • Advanced Linux system administration, regardless of the distribution
  • Have knowledge of the following technologies: Java, shell scripting, Linux administration, networking, distributed systems, Hadoop, automation
  • Effective and responsible team player
184

Hadoop Testing QA Engineer Resume Examples & Samples

  • Requires Bachelor’s degree in Computer Science or Information Systems
  • 5+ years of testing and product quality experience; or any combination of education and experience, which would provide an equivalent background
  • Hadoop Data Validation (1-2 years experience)
  • 2 years experience in each of the following: ETL test execution and validation; back-end test writing and executing SQL queries; test execution/validations using Informatica; GUI test execution and validation
  • Agile testers with primary skills in automating test cases, test-driven development and manual testing are required
  • 1 year of experience as a tester on an Agile team, working in JIRA and Confluence
  • Quality Certification, such as CSTE, CSQA, CMST, CSQE preferred
185

Hadoop Platform Engineering Lead Resume Examples & Samples

  • Demonstrable experience developing and contributing to distributed compute frameworks: Hadoop, Spark
  • Experience and insight into designing, implementing and supporting highly scalable infrastructure services
  • Experience in administering highly available Hadoop clusters using the Cloudera Hadoop distribution
  • Experience in stream data processing and real-time analytics
  • Extensive knowledge of Linux Administration and Engineering
  • Hands on experience with Hadoop, Spark, Storm, Cassandra, Kafka, ZooKeeper and Big Data technologies
  • In-depth knowledge of capacity planning, management, and troubleshooting for HDFS, YARN/MapReduce, and HBase
  • Must have experience with monitoring tools used in the Hadoop ecosystem: Nagios, Cloudera Manager, Ambari
  • Sound knowledge of UNIX and shell scripting
  • Strong attention to detail and excellent analytical capabilities
  • Strong background in Systems-level Java essential - garbage collection internals, Concurrency models, Native & Async IO, Off-heap memory management, etc
  • Cloudera Certified Administrator for Apache Hadoop (CCAH) a plus
  • Understanding of core CS including data structures, algorithms and concurrent programming
  • CentOS/Red Hat Enterprise Linux (RHEL)
  • Active member or contributor to open source Apache Hadoop projects a plus
  • An advanced background with a higher level scripting language, such as Perl, Python, or Ruby
186

Hadoop / Linux Support Specialist Resume Examples & Samples

  • Manage several Hadoop clusters in development and production environments
  • Maintain technology infrastructure, providing operational stability by following and using the tools, policies, processes and procedures available
  • Execute operational and functional changes on the plant
  • Work together with L1/L2 teams and L3 team members in other regions
  • Work on improvements on incident and problem management procedures tools
  • Experience troubleshooting applications and application code
  • Strong UNIX system administration experience
  • Knowledge and experience in Linux/Unix computer networking,
  • Knowledge and experience in storage systems (SAN, NAS)
  • Knowledge and experience with either Perl or Python programming
  • Experience with large-scale Linux environments
  • Experience with Hive, MapReduce, YARN, Spark, Sentry, Oozie, Sqoop, Flume, HBase, Impala, etc
  • Kerberos experience a plus
187

Hadoop Data Engineer Resume Examples & Samples

  • Minimum of 3-5 years of experience working in Hadoop/ Big Data related field
  • Working experience on tools like Hive, Spark, HBase, Sqoop, Impala, Kafka, Flume, Oozie, MapReduce etc
  • Minimum 2-3 years of hands on programming experience in Java, Scala, Python, Shell Scripting
  • Experience in end-to-end design and build of near-real-time and batch data pipelines (a minimal batch-pipeline sketch follows this list)
  • Strong experience with SQL and Data modelling
  • Experience working in Agile development process and has good understanding of various phases of Software Development Life Cycle
  • Experience using Source Code and Version Control systems like SVN, Git etc
  • Deep understanding of the Hadoop ecosystem and strong conceptual knowledge in Hadoop architecture components
  • Self-starter who works with minimal supervision. Ability to work in a team of diverse skill sets
  • Ability to comprehend customer requests & provide the correct solution
  • Strong analytical mind to help solve complicated problems
  • Desire to resolve issues and dive into potential issues
  • Good team player, interested in sharing knowledge with other team members and shows interest in learning new technologies and products
  • Ability to think out of box and provide innovative solutions
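  As a sketch of the batch pipeline work referenced above, the PySpark job below reads a raw landing data set, rolls it up by day, and writes a partitioned curated table. Source and target paths and the events schema are illustrative only.

    #!/usr/bin/env python3
    """Minimal PySpark batch-pipeline sketch: daily rollup of raw events.

    Paths and the events schema (event_ts, event_type, user_id) are
    illustrative only.
    """
    from pyspark.sql import SparkSession, functions as F

    spark = (SparkSession.builder
             .appName("daily-events-rollup")
             .enableHiveSupport()
             .getOrCreate())

    # Raw landing data, assumed to be Parquet written by an ingest job.
    events = spark.read.parquet("/data/landing/events")

    daily = (events
             .withColumn("event_date", F.to_date("event_ts"))
             .groupBy("event_date", "event_type")
             .agg(F.count("*").alias("event_count"),
                  F.countDistinct("user_id").alias("unique_users")))

    # Partitioned rollup that downstream Hive/Impala users can query.
    (daily.write
          .mode("overwrite")
          .partitionBy("event_date")
          .parquet("/data/curated/daily_event_rollup"))

    spark.stop()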
188

Hadoop Data Engineer Resume Examples & Samples

  • Desire to want to resolve issues and dive into potential issues
  • Great communication skills to discuss requests with customers
  • A broad knowledge of Hadoop & how to leverage the data with multiple applications
  • Hadoop experience for 3 - 5 yrs
  • SPARK
  • Kafka, Hive, Sqoop, Impala
  • Java, Scala, Python
  • Hbase
189

Hadoop System Administrator Resume Examples & Samples

  • Bachelor’s Degree and 4+ years of experience; OR, High School equivalent and 8+ years of experience
  • A current Security + CE certification is required
  • Experience managing Hadoop Clusters, including providing monitoring and administration support
  • Minimum 2 years’ experience with Linux System Administration
  • Prior DoD experience is preferred
  • Must possess strong interpersonal, communication, and writing skills to carry out and understand assignments, or convey and/or exchange routine information with other contractor and government agencies
  • Must be able to work with minimal supervision in high intensity environment and accommodate a flexible work schedule
190

Hadoop Devops Engineer Resume Examples & Samples

  • Utilize open source technologies to create fault-tolerant, elastic and secure high performance data pipelines
  • Work directly with software developers to simplify processes, enhance services and automate application delivery
  • BS Degree in Computer Science/Engineering required
  • Experience with configuration management tools, deployment pipelines, and orchestration services (Jenkins)
  • Familiar with Hadoop security knowledge and permissions schemes
191

Module / Tech Lead-hadoop, Big Data Resume Examples & Samples

  • Reporting to the Program manager on project/task progress as needed. Identify risks and issues
  • Conceptual understanding of data structures
  • Passion for technology and self- starter
  • Orientation towards Disciplined development processes Strong Software Development Lifecycle management experience
  • Other preferred branches are EE, ECE. Candidates with passion for coding and systems development from other disciplines also can apply
  • Work experience in a product company is an added advantage
192

Hadoop Data Engineer Resume Examples & Samples

  • Build and support scalable and durable data solutions that can enable self-service advanced analytics at HomeAway using both traditional (SQL Server) and modern DW technologies (Hadoop, Spark, Cloud, NoSQL etc.) in an agile manner
  • Support HomeAway’s product and business team’s specific data and reporting needs on a global scale
  • Close partnership with internal partners from Engineering, product, and business (Sales, Customer Experience, Marketing etc.)
  • Bachelor's or Master's Degree in Computer Science or Engineering, or related experience required
  • 6+ years of EDW development experience including 2+ years in Big data & Cloud space (i.e. building data pipelines in Hadoop, AWS), working with Petabytes of data
  • Development experience using Spark, Scala, Hive, S3, Kafka streams and APIs
  • Experience with Clickstream data is highly desirable
  • Experience with all aspects of data systems (both Big Data and traditional) including database design, ETL/ELT, aggregation strategy, performance optimization
  • Product and business acumen
  • Experience in eCommerce or Internet industry preferred
193

Analyst, IT Application Mgmt-hadoop Resume Examples & Samples

  • HDFS support and maintenance. Responsible for implementation and ongoing administration of Hadoop infrastructure. Have an excellent understanding of how the Hadoop components work and be able to quickly troubleshoot issues
  • Should be well versed in Hadoop environment administration including installation/configuration, upgrades/migration, performance tuning and troubleshooting
  • Fault tolerant configuration set-up, Backup and Recovery, Manage & review of Hadoop log files, File system management and monitoring
  • Monitor Hadoop cluster connectivity and security. Working with data delivery teams to setup new Hadoop users. This job includes setting up Linux users, setting up Kerberos principals and testing HDFS, Hive, Pig and MapReduce access for the new users
  • Contribute ideas to application stability, performance improvements and process improvements. Exposure to automation scripts, performance improvement & fine-tuning services / flows
  • Resolve all incidents, service requests, and WOWs within SLA targets
  • During crisis conditions, willing to act in the role of problem facilitator to drive cross functional efforts of various teams toward issue resolution
  • 100% compliance with all procedures and processes
  • Ensure proper handoffs of issues during shift change
  • BE/BTech/MS/MCA
194

Hadoop Admin Resume Examples & Samples

  • Hadoop skills like Ambari, Ranger, Kafka, Knox, Spark, HBase, Hive, Pig etc
  • The most essential requirements are
  • Familiarity with open source configuration management and Linux scripting
  • Knowledge of Troubleshooting Python, Core Java and Map Reduce Applications
  • Working knowledge of setting up and running Hadoop clusters
  • Knowledge of how to create and debug Hadoop jobs (a minimal streaming-job sketch follows this list)
  • Comfortable doing feasibility analysis for any Hadoop use case
  • Able to design and validate solution architecture as per the Enterprise Architecture of the Organization
  • Excellent Development knowledge of Hadoop Tools - Ambari, Ranger, Kafka, HBase, Hive, Pig, Spark, Map Reduce
  • Excellent knowledge on security concepts in Hadoop e.g. Kerberos
  • Able to do Code reviews as per organization's Best Practices
  • Good knowledge of Java, SQL and NoSQL databases
  • Deployment and debug of Applications in multiple environments
  • Good knowledge of Linux and shell scripting as Hadoop runs on Linux
  • Performance tuning of Hadoop clusters and Hadoop MapReduce routines
  • Screen Hadoop cluster job performances and capacity planning
  • Knowledge on Data-warehousing Concepts and Domain knowledge of Automobile industry is a plus
  • Ability/Knowledge to implement Data Governance in Hadoop clusters
  • Interpret analytics results to create and provide actionable client insights and recommendation
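  As an illustration of creating and debugging a Hadoop job, here is a minimal Hadoop Streaming word count in Python; the launch command and paths in the header are illustrative.

    #!/usr/bin/env python3
    """Minimal Hadoop Streaming sketch: word-count mapper/reducer in one file.

    Illustrative launch command (paths are assumptions):
      hadoop jar /usr/lib/hadoop-mapreduce/hadoop-streaming.jar \
          -files wordcount.py \
          -mapper "python wordcount.py map" -reducer "python wordcount.py reduce" \
          -input /data/raw/text -output /data/out/wordcount
    """
    import sys

    def map_phase():
        for line in sys.stdin:
            for word in line.split():
                print(f"{word.lower()}\t1")

    def reduce_phase():
        # Streaming sorts mapper output by key, so identical words arrive together.
        current, total = None, 0
        for line in sys.stdin:
            word, count = line.rstrip("\n").split("\t")
            if word != current:
                if current is not None:
                    print(f"{current}\t{total}")
                current, total = word, 0
            total += int(count)
        if current is not None:
            print(f"{current}\t{total}")

    if __name__ == "__main__":
        map_phase() if sys.argv[1] == "map" else reduce_phase()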
195

Hadoop Admin Resume Examples & Samples

  • Experience working on Hadoop ecosystem components like HDFS, Hive, MapReduce, YARN, Impala, Spark, Sqoop, HBase, Sentry, Hue and Oozie
  • Installation, configuration and upgrading of the Cloudera distribution of Hadoop
  • Exposure to Kafka and Apache NiFi
  • Responsible for implementation and on-going administration of Hadoop Infrastructure
  • Experience working on Hadoop security aspects including Kerberos setup, RBAC authorization using Apache Sentry
  • Create and document best practices for Hadoop and Big data environment
  • File system management and cluster monitoring using Cloudera Manager
  • Performance tuning of Hadoop clusters, Impala, Hbase, Solr, and Hadoop MapReduce routines
  • Cloudera Manager configuration for Static and Dynamic Resource Management Allocations for MapReduce and Impala Workloads
  • Strong troubleshooting skills involving map reduce, yarn, sqoop job failure and its resolution
  • Analyze multi-tenancy job execution issues and resolve
  • Backup and disaster recovery solution for Hadoop cluster
  • Experience working on Unix operating system who can efficiently handle system administration tasks related to Hadoop cluster
  • Knowledge of or experience working on NoSQL databases like HBase, Cassandra, MongoDB
  • Troubleshooting connectivity issues between BI tools like Datameer, SAS and Tableau and Hadoop cluster
  • Working with data delivery teams to set up new Hadoop users (this includes setting up Linux users, setting up Kerberos principals and testing HDFS, Hive, Pig and MapReduce access for the new users; see the onboarding sketch after this list)
  • Point of contact for vendor escalation; be available for 24x7 on-call support
  • Participate in new data product or new technology evaluation; manage certification process
  • Evaluate and implement new initiatives on technology and process improvements
  • Interact with Security Engineering to design solutions, tools, testing and validation for controls
  • Candidate will have 3 to 4 years' experience with Hadoop data stores/cluster administration and 5 to 8 years of relational database experience, preferably with Oracle/SQL Server DBA experience
  • Strong Hadoop cluster administration expertise; Understanding of internals
  • Excellent performance and tuning skills of large workload inside Hadoop cluster
  • Strong partitioning knowledge (data sharding concepts)
  • Scripting Skills – Shell and Python
  • Experience in upgrading Cloudera Hadoop distributions
  • Experience in performance tuning and troubleshooting - drill down approach with O/S, database and application - End to End Application connectivity
  • Familiarity with NoSQL data stores (MongoDB / Cassandra/HBase)
  • Familiarity with cloud architecture (public and private clouds), including AWS and Azure
  • Prior experience of administration of Oracle or any other relational database
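
A minimal sketch of the user-onboarding task described above (Linux account, Kerberos principal, HDFS home directory). It assumes root access, a local KDC reachable via kadmin.local, the hdfs CLI on PATH, and a placeholder realm; a real rollout would add error handling and site-specific policies.

    #!/usr/bin/env python3
    """Onboard a new Hadoop user: OS account, Kerberos principal, HDFS home dir.

    EXAMPLE.COM is a placeholder realm; adapt commands to local standards.
    """
    import subprocess
    import sys

    REALM = "EXAMPLE.COM"  # placeholder Kerberos realm


    def run(cmd):
        # Echo and execute a command, aborting on failure.
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)


    def onboard(user):
        run(["useradd", "-m", user])                                      # Linux account
        run(["kadmin.local", "-q", f"addprinc -randkey {user}@{REALM}"])  # Kerberos principal
        run(["hdfs", "dfs", "-mkdir", "-p", f"/user/{user}"])             # HDFS home directory
        run(["hdfs", "dfs", "-chown", f"{user}:{user}", f"/user/{user}"])


    if __name__ == "__main__":
        onboard(sys.argv[1])
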
196

Hadoop Configuration & Development Engineer Resume Examples & Samples

  • Performance tuning of Hadoop clusters
  • Help Application and Operations team to troubleshoot the performance issues
  • Assist in data modeling, design & implementation based on recognized standards
  • Software installation and configuration
  • Query and execution engine performance monitoring and tuning
  • Responsible for code (ETL/ingestion, SQL/data engineering, and data science model) migrations to production using Bitbucket, Git, Jenkins and Artifactory
  • Provide operational instructions for deployments, for example, Java, Spark, Sqoop, Storm
197

Technology Lead-hadoop Admin Resume Examples & Samples

  • Setting up Linux users, groups, Kerberos principals and keys
  • Aligning with the Systems engineering team in maintaining hardware and software environments required for Hadoop
  • Software installation, configuration, patches and upgrades
  • Working with data delivery teams to setup Hadoop application development environments
  • Monitor Hadoop cluster job performance
  • Data modelling, Database backup and recovery
  • File system management, Disk space management and monitoring (Nagios, Splunk etc)
  • Planning of Back-up, High Availability and Disaster Recovery Infrastructure
  • Diligently teaming with Infrastructure, Network, Database, Application and Business Intelligence teams to guarantee high data quality and availability
  • Collaborating with application teams to install operating system and Hadoop updates, patches and version upgrades
  • Experience in the XX domain
198

Healthcare Data Scientist With Hadoop Experience Resume Examples & Samples

  • Qualification: Postgraduate degree in one of the following fields with strong academic credentials
  • Computer Science/IT
  • Operations Research/Applied Math
  • Document all modeling steps in a systematic way, including the modeling process, insights generated, presentations, model validation results and checklists built in the project
  • Prepare a one-page document that outlines and quantifies the business impact of the data science project
199

Hadoop / CDH Operations Engineer Resume Examples & Samples

  • Interacts with users and evaluates vendor products
  • Makes recommendations to purchase hardware and software, coordinates installation and provides backup recovery
  • May program in an administrative language
  • BASH shell fluency
  • Experience operating Hadoop clusters, specifically Hive
  • Linux system administration best practices
  • CDH 4.x and 5.x familiarity
  • Solid understanding of network services (DNS, LDAP)
  • Experience with AWS services (VPC, EC2, CloudFormation)
  • X.509 certificate management a plus
200

Principal Hadoop Data Analyst Resume Examples & Samples

  • Provides senior level consulting in identifying and solving complex and highly critical issues. Functions as a senior level engineer on operational issues, engineers and manages policies and standards
  • Helps evangelize the big data ecosystem within the development community for the appropriate use of the technology stack in line with the overall data platform strategy. This includes developer best practices and the movement/management of data across the ecosystem
  • Helps lead database strategic planning and consulting on the big data ecosystem platform selection, version implementation, software product recommendation, and usage of enhanced functionality
  • Keeps abreast of latest products and technology and their successful deployment in similar corporate settings
  • Assists the larger data platform team in providing problem identification and complex resolution support through analysis and research
  • Participates in the design, implementation, and ongoing support for data platform environments, associated network and system configurations as requested by related support groups
201

Hadoop / Messaging Engineer Resume Examples & Samples

  • Contribute to evolving design, architecture, and standards for building and delivering unique services and solutions
  • Implement best-in-industry, innovative technologies that will expand Inovalon’s infrastructure through robust, scalable, adrenaline-fueled solutions
  • Leverage metrics to manage the server fleet and complex computing systems to drive automation, improvement, and performance
  • Support and implement DevOps methodology; and
  • Develop and automate robust unit and functional test cases
  • Experience engineering Hadoop solutions
  • Required technology experience: Kafka, RabbitMQ or another enterprise messaging solution (see the sketch after this list)
  • Other technologies good to have: Storm, Spark, Flink, Flume, Sqoop, NiFi or Hortonworks DataFlow
  • In-depth experience with Hortonworks or one of the major Hadoop distributions
  • Java scripting and application support experience strongly desired
  • Working experience with at least one automation/configuration management tool such as Ansible, Puppet or Chef
  • Working experience with a test case automation tool
  • Experience with virtualization technologies required
  • You are driven to solve difficult problems with scalable, elegant, and maintainable solutions
  • Expert troubleshooter – unwilling to let a problem defeat you; unrelenting, persistent, confident
  • A penchant for thinking outside the box to solve complex and interesting problems
  • Extensive knowledge of existing industry standards, technologies, and infrastructure operations; and
  • Manage stressful situations with calm, courtesy, and confidence
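
As a rough illustration of the messaging stack named above, the sketch below publishes and reads JSON events with the kafka-python client. The broker address, topic name and event fields are placeholders, not part of any particular deployment.

    """Publish and consume JSON events with kafka-python (illustrative sketch).

    pip install kafka-python; localhost:9092 and the topic name are placeholders.
    """
    import json
    from kafka import KafkaConsumer, KafkaProducer

    BROKERS = "localhost:9092"   # placeholder broker list
    TOPIC = "device-events"      # placeholder topic

    # Producer: serialize dicts to JSON bytes and send them.
    producer = KafkaProducer(
        bootstrap_servers=BROKERS,
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )
    producer.send(TOPIC, {"device_id": "abc123", "metric": "steps", "value": 9500})
    producer.flush()

    # Consumer: read from the beginning of the topic and print each event.
    consumer = KafkaConsumer(
        TOPIC,
        bootstrap_servers=BROKERS,
        auto_offset_reset="earliest",
        consumer_timeout_ms=5000,  # stop iterating after 5s of inactivity
        value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    )
    for message in consumer:
        print(message.value)
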
202

Hadoop Platform Architect Resume Examples & Samples

  • Responsible for defining Big Data Architecture and developing Roadmap for Hortonworks HDP 2.5 platform
  • Proven architecture and infrastructure design experience in big data batch and real-time technologies such as Hadoop, Hive, Pig, HBase, MapReduce, Spark, Storm and Kafka
  • Proven architecture and infrastructure design experience in NoSQL, including Cassandra, HBase and MongoDB
  • Prior experience leading software selection
  • Ability to create both logical architecture documentation and physical network and server documentation
  • Coordination of initial Big Data platforms implementation
  • Overall release management /upgrade coordination of Big Data platform
  • Good to have hands-on Hadoop administration skills
  • Good to have an understanding of security aspects related to a Kerberized cluster
203

Big Data Engineer Hadoop Front Office Athena Resume Examples & Samples

  • Appreciation of challenges involved in serialization, data modelling and schema migration
  • A passion for Big Data technologies and Data in general
  • Understanding of derivatives (Swaps, Options, Forwards). Concepts around pricing, risk management and modelling of derivatives
  • Experience in stream processing (Kafka), serialization (Avro) and BigData (Hadoop) platforms
  • Experience with object oriented programming using Python
  • Experience with Java and/or Scala
204

Platform Engineer Hadoop Resume Examples & Samples

  • Platform provisioning strategies and tools
  • Cloud technologies
  • Infrastructure as Code (Puppet / Ansible / Chef / Salt)
  • Continuous delivery, tests, code scans
  • Data security and privacy (privacy-preserving data mining, data security, data encryption)
205

Senior Tech Lead-hadoop Technologies Resume Examples & Samples

  • Act as focal point in determining and making the case for applications to move into the Big data platform
  • Hands on experience leading large-scale global data warehousing and analytics projects
  • Ability to communicate objectives, plans, status and results clearly, focusing on critical few key points
206

Hadoop Systems Engineer Resume Examples & Samples

  • Participate in installation, configuration, and troubleshooting of Hadoop platform including hardware, and software
  • Plan, test and execute upgrades involving Hadoop components; Assure Hadoop platform stability and security
  • Help design, document, and implement administrative procedures, security model, backup, and failover/recovery operations for Hadoop platform
  • Act as a point of contact with our vendors; oversee vendor activities related to support agreements
  • Research, analyze, and evaluate software products for use in the Hadoop platform
  • Provide IT and business partners consultation on using the Hadoop platform effectively
  • Build, leverage, and maintain effective relationships across technical and business community
  • Participates in and evaluates systems specifications regarding customer requirements, transforming business specifications into cost-effective, technically correct database solutions
  • Prioritizes work and assists in managing projects within established budget objectives and customer priorities
  • Supports a distinct business unit or several smaller functions
  • Responsibilities are assigned with some latitude for setting priorities and decision-making using established policies and procedures
  • Results are reviewed with next level manager for clarification and direction before proceeding
207

Hadoop Cloudera Administrator Resume Examples & Samples

  • 3 to 5 years of Hadoop administration experience, preferably using Cloudera
  • 3+ years of experience on Linux, preferably RedHat/SUSE
  • 1+ years of experience creating MapReduce jobs and ETL jobs in Hadoop, preferably using Cloudera
  • Healthcare industry experience preferred
  • Experience sizing and scaling clusters, adding and removing nodes, provisioning resources for jobs, job maintenance and scheduling
  • Familiarity with Tableau, SAP HANA or SAP BusinessObjects
208

Hadoop Big Data Specialist Resume Examples & Samples

  • Proven experience as a Hadoop Developer/Analyst in Business Intelligence and Data management production support space is needed
  • Strong communication, technology awareness and the ability to interact and work with senior technology leaders are a must
  • Critical Thinking/Analytic abilities
  • Strong knowledge of and working experience in Linux, Java and Hive
  • Working knowledge of enterprise data warehousing
  • Should have dealt with various data sources
  • Experience with DataStage is an asset
209

Hadoop Tester Resume Examples & Samples

  • Cloud enablement – Implementing Amazon Web Services (AWS)
  • BI & Data Analytics – Implementing BI and analytics and utilizing cloud services
  • 5+ years of experience testing applications on Hadoop products
  • 5+ years of experience setting up Hadoop test environments
  • Expertise in developing automated tests for Web, SOA/WS, DW/ETL and Java backend applications
  • Expertise in automation tools: Selenium (primary), HP UFT
  • Expertise in test frameworks: Cucumber, JUnit, Mockito
  • Proficiency in applying TDD, ATDD/BDD
  • Expertise in programming languages: Java (primary), JavaScript
  • Proficiency with build tools: SVN, Crucible, Maven, Jenkins
  • Strong SQL skills in Oracle
  • Strong Unix and Windows skills
  • Experience with project management tools: Rally, JIRA, HP ALM
  • Experience leading projects
210

Hadoop Ecosystem Engineer / Developer Resume Examples & Samples

  • Experience in developing and maintaining Hadoop clusters (Hortonworks, Apache, or Cloudera)
  • Experience with Linux patching and support (Red Hat / CentOS preferred)
  • Experience supporting Java environments
  • Experience upgrading and supporting Apache Open source tools
  • Experience with LDAP, Kerberos and other authentication mechanisms
  • Experience with ZooKeeper, Oozie, Ambari
  • Experience with HDFS, YARN, HBase, Solr and MapReduce code
  • Experience deploying software across the Hadoop cluster using Chef, Puppet or similar tools
  • Familiarity with NIST 800-53 controls a plus
  • Substantial hands-on experience and expertise in setup, population, troubleshooting, maintenance, documentation and user training
  • Requires broad knowledge of the Government's IT environments, including office automation networks, and PC- and server-based databases and applications
  • Experience using Open Source projects in Production preferred
  • Experience in a litigation support environment extremely helpful
  • The ability to lead a technical team and give it direction will be very important, as will the demonstrated ability to analyze the attorneys' needs and to design and implement a whole-system solution responsive to those needs
  • Undergraduate degree strongly preferred; preferably in the computer science or information management/technology disciplines
211

Big Data Platform Engineer Hadoop Resume Examples & Samples

  • 3+ years of software development and design
  • 1+ years developing applications in a Hadoop environment
  • Experience with Spark, Hbase, Kafka, Hive, Scala, Pig, Oozie, Sqoop and Flume
  • Understanding of managed distributions of Hadoop, like Cloudera, Hortonworks, etc
  • Strong diagramming skills – flowcharts, data flows, etc
212

Lead Big Data Platform Engineer Hadoop Resume Examples & Samples

  • Bachelor's degree in Computer Science or equivalent work experience
  • 5+ years of software development and design
  • 3+ years developing applications in a Hadoop environment
  • 3+ years of diverse programming in languages like Java, Python, C++ and C#
  • Well versed in managed distributions of Hadoop, like Cloudera, Hortonworks, etc
  • Understanding of cloud platforms like AWS and Azure
213

Hadoop Java Developer Resume Examples & Samples

  • 5+ years of experience in server-side Java programming in a WebSphere/Tomcat environment
  • Strong experience with Spring Framework
  • Strong understanding of Java concurrency, concurrency patterns, experience building thread safe code
  • Experience with SQL/Stored Procedures on one of the following databases (DB2, MySQL, Oracle)
  • Experience with high volume, mission critical applications
214

Hadoop Systems Architect Resume Examples & Samples

  • Sound understanding and experience with Hadoop ecosystem (Cloudera). Able to understand and explore the constantly evolving tools within Hadoop ecosystem and apply them appropriately to the relevant problems at hand
  • Experience in working with a Big Data implementation in production environment
  • Lead and assist with the technical design/architecture and implementation of the Big data cluster in various environments
  • Able to guide/mentor development team for example to create custom common utilities/libraries that can be reused in multiple big data development efforts
  • Provide infrastructure system expertise, requirements and assistance to Systems and Database Administrators, other System Architects and application development teams
  • Work with line of business (LOB) personnel, external vendors, and internal Data Services team to develop system specifications in compliance with corporate standards for architecture adherence and performance guidelines
  • Provide technical resources to assist in the design, testing and implementation of software code and infrastructure to support data infrastructure and governance activities
  • Assist in both external and internal audit questionnaires and application assessments
  • Assess current technical architecture and estimate system capacity to meet near- and long-term processing requirements
  • Evaluate, select, test, and optimize hardware and software products
  • Discuss with end-users, clients and senior management to define infrastructure requirements for complex systems and infrastructure development
  • Design and oversee development and implementation of end-to-end integration of infrastructure solutions
  • Document the Bank's existing solution architecture and technology portfolio and make recommendations for improvements and/or alternatives
  • Develop, document and communicate needs for investing in infrastructure evolution, including analysis of cost reduction opportunities
  • Liaise with Enterprise Architects to conduct research on emerging technologies, and recommend technologies that will increase operational efficiency, infrastructure flexibility and operational stability
  • Develop, document, communicate and enforce a policy for standardizing systems and software, as necessary
  • Instruct, direct and mentor other members of the team
  • Bachelor's degree in a technical or business-related field, or equivalent education and related training
  • Twelve years of experience in data warehousing architectural approaches
  • Exposure to and strong working knowledge of distributed systems involving internet protocols
  • Excellent understanding of client-service models and customer orientation in service delivery
  • Ability to derive technical specifications from business requirements and express complex technical concepts in terms that are understandable to multi-disciplinary teams, in verbal as well as in written form
  • Ability to grasp the 'big picture' for a solution by considering all potential options in impacted area
215

Hadoop Admin Resume Examples & Samples

  • Involves designing, capacity planning, cluster setup, monitoring, structure planning, scaling and administration of Hadoop components (YARN, MapReduce, HDFS, HBase, ZooKeeper, Storm, Kafka, Spark, Pig and Hive)
  • Accountable for performance tuning and resource management of Hadoop clusters and MapReduce routines
  • Strong Experience with LINUX based systems & scripting (either of Shell, Perl or Python)
  • Experience with configuration management tools such as Puppet, Chef or Salt
  • Good knowledge of directory services such as LDAP and ADS, and monitoring tools such as Nagios or Icinga2
  • Strong troubleshooting skills for Hive, Pig, HBase and Java MapReduce code/jobs
216

Hadoop Data Engineer Resume Examples & Samples

  • Creates/comprehends intermediate-to-complex logical (conceptual, relational, subject-level and dimensional) data models and other metadata deliverables at varying levels of detail and functionality across multiple business areas of responsibility
  • Performs/comprehends analysis of the volumetric parameters that dictate data profiling activity (preferred)
  • Maintains understanding of software industry, particularly with respect to data warehousing and analytics
217

Hadoop Administrator / Java Developer Resume Examples & Samples

  • This is not a Hadoop Developer role. Hadoop Administration and Java Development only
  • Provide operational support for the Garmin Hadoop clusters
  • Develop reports and provide data mining support for Garmin business units
  • Participate in the full lifecycle of development from conception, analysis, design, implementation, testing and deployment, and use Garmin and Third Party Developer APIs to support innovative features across Garmin devices, web, and mobile platforms
  • Use your skills to design, develop and perform continuous improvements to the current build and development process
  • Strong background in Linux OS skills
  • Experience with automation of administrative tasks using scripting language (Shell/Bash/Python)
  • Familiarity with Apache Hadoop ecosystem
  • Experience with general systems administration
  • Knowledge of the full stack (storage, networking, compute)
  • Experience using config management systems (Puppet/Chef/Salt)
  • Experience with management of complex data systems (RDBMS/ or other NoSQL data platforms)
  • Current experience with Java server-side development
  • We use Hadoop technologies, including HBase, Storm, Kafka, Spark and MapReduce, to deliver personalized insights about our customers' fitness and wellness. We work in a fast-paced, agile environment where there are always new features to innovate and implement
218

Hadoop Platform Engineer, Infrastructure Resume Examples & Samples

  • Helping design and/or implement shared service(s) for Big Data technologies like Hadoop and/or other NoSQL technologies
  • Performing technical analysis to present pros and cons of various solutions to problems
  • Engineering enterprise solutions
  • Because Big Data involves many disciplines, the candidate will need to be able to work with the full platform stack while providing expertise in at least a subset of areas. These areas include
  • DevOps – proficiency with RHEL systems and scripting languages such as Python and shell
  • Security - Core principles, Kerberos and Active Directory/LDAP, Public Key Infrastructure
  • Experience with administration of Hadoop Platform (HDFS, YARN, Sqoop, Flume, Hive, HBase, Spark, Oozie), Cloudera Distribution is preferred
  • Minimum 2+ years of applicable platform engineering or related Hadoop experience
  • Automation with tools such as Ansible, SSH, and Shell
  • Experience working in a flexible Agile development environment
  • Problem analysis and solving in multiple layers such as hardware, Linux, networking and Hadoop
  • Java - Some experience with design, development, and use of classes and objects, etc
  • Able to adapt to a changing environment
  • An excellent team worker
  • Self-motivated, delivery focused with the ability to work independently where required
  • Able to own and drive solution understanding the real issues behind Business Requirements
  • Able to use tools of an agile ecosystem like Jira and GIT
  • Able to successfully interface with various stakeholders
  • Committed and have a strong sense of ownership
219

Hadoop Database Administrator Resume Examples & Samples

  • Supports the Database Administration functions of the system. Designs and implements modifications or enhancements to forms, menus, and reports
  • Implements processes for data management including data standardization and cleansing initiatives as part of the overall design, development, fielding, and sustainment of a system
  • Executes advanced database concepts, practices and procedures
220

Hadoop ELT Developer Resume Examples & Samples

  • Analyze, define and document requirements for data, workflow, logical processes, hardware and operating system environment, interface with other systems, internal and external checks, controls, and outputs
  • Design, develop and maintain ELT specific code
  • Design reusable components and user-defined functions (see the sketch after this list)
  • Perform complex applications programming activities. Code, test, debug, document, maintain, and modify complex applications program
  • Provide support in the implementation of processes within the QA and production environments
  • Possesses solid knowledge of design and analysis methodology and application development processes from both an industry and client perspective
  • Skills in Python/Java/Scala
  • Experience with MapReduce coding
  • Ability to work in an Agile Development environment
  • Excellent writing skills, oral communication skills, strong process skills, and leadership ability
  • Examine and solve the performance bottlenecks in the ETL process
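
One way to illustrate the reusable user-defined functions mentioned above is a small PySpark UDF. The column names and sample data below are hypothetical and the sketch is not a prescribed implementation.

    """Reusable user-defined function in PySpark (illustrative sketch)."""
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F
    from pyspark.sql.types import StringType

    spark = SparkSession.builder.appName("elt-udf-sketch").getOrCreate()

    # Hypothetical input: raw customer records with inconsistent phone formatting.
    df = spark.createDataFrame(
        [("001", "(555) 374-3342"), ("002", "555.374.3342")],
        ["customer_id", "phone_raw"],
    )


    @F.udf(returnType=StringType())
    def normalize_phone(raw):
        # Keep digits only so downstream joins and dedup can rely on one format.
        if not raw:
            return None
        digits = "".join(ch for ch in raw if ch.isdigit())
        return digits[-10:] if len(digits) >= 10 else digits


    df.withColumn("phone", normalize_phone("phone_raw")).show(truncate=False)
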
221

Hadoop Product Support Engineer Resume Examples & Samples

  • 1) Conduct problem triage, troubleshooting and problem analysis on complex incidents that are escalated to GSC
  • 2) Knowledge management and related activities. Validate knowledge content for accuracy, relevancy and currency. Create and maintain technical alerts and other related technical artifacts. Create defect-tracking tickets. Lead and/or participate on teams focused on creating technical bulletins, procedures and support processes (change templates, FRO bulletins, service bulletins, support tools). Development and enhancement of problem scenario reporting rules and associated knowledge
  • 3) Interface into other organizations (internal and external). Ability to maneuver cross-organizationally and demonstrate a high-level of professionalism. Ability to deliver succinct and concise presentations
  • 4) Product / problem analysis. Gather data and prepare templates as required for data analysis. Assist senior analyst in completing analysis
  • 5) Mentoring and development of resources as needed
  • BS in Computer Science or other technical discipline
  • 2-5 years of Teradata support experience
  • Understanding of Teradata products
  • Experience in Hadoop open source framework and architecture (HDFS, YARN, MapReduce, etc)
  • Experience with common Hadoop components (i.e., MR, Ambari, Zookeeper, HBase, Pig, Hive)
  • Cultural awareness
  • Working experience of SUSE LINUX operating systems
  • Experience analyzing data
  • Closed Loop/Corrective Action experience
  • Experience with application programming
  • Association with a development environment
222

Hadoop Software Engineer Resume Examples & Samples

  • Create and modify User stories
  • Understand and participate in the development of product backlog and road map
  • Develop and communicate design patterns to support large scale data processing and transformation using modern ETL techniques across large number of diverse and concurrent data feeds
  • Code, test, modify, debug, document, and implement code in support of data Ingestion / ETL techniques in Hadoop environment leveraging native Java and rich set of Cloudera Hadoop APIs
  • Deep Perl and bash scripting skills to orchestrate data processing components and automate the execution of code in the Hadoop cluster environment
  • Help develop automated test cases/script and leverage test harness for unit testing
  • Participate in unit testing and integration testing
  • Help develop code migration strategies from one environment to the other
  • Participate in design reviews / code reviews and ensure that all solutions are aligned to pre-defined architectural specifications; identify/troubleshoot application code-related issues; and review and provide feedback to the final user documentation
  • Assume ownership and accountability for deliverables during sprints
  • Provide coaching and mentoring to less experienced member of the team
  • Bachelor degree in Computer Science or Equivalent
  • Knowledge and hands on expertise with Hadoop, SPARK, etc
  • Full awareness of full development lifecycle implementation
  • Must have 6-10 years of experience in the design and delivery of Big Data solutions and the build-out of their components
  • Requires hands-on working knowledge of MapReduce and Spark
  • Clear understanding of how HDFS works at the filesystem level and of Avro as a data serialization format (see the sketch after this list)
  • Proficient with at least one code repository and its associated practices: Git, SVN, Maven, etc
  • Must have 4 years of experience executing with Agile and Scrum
  • Ability to communicate at all levels of management and with customers. Demonstrated skills both written and oral are required
  • Experience in the development of financial applications is a strong plus
  • Exhibits leadership
  • Ability to multi-task and handle multiple priorities
  • Embraces team-based approach
  • Strives for Continuous Improvement
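
To make the Avro serialization point concrete, here is a small sketch using the fastavro library; the schema, record fields and output path are hypothetical, and an actual ingest job would land the file in HDFS rather than locally.

    """Write and read Avro records with fastavro (illustrative sketch).

    pip install fastavro; the schema and output path are hypothetical.
    """
    from fastavro import parse_schema, reader, writer

    schema = parse_schema({
        "namespace": "example.ingest",
        "type": "record",
        "name": "Trade",
        "fields": [
            {"name": "trade_id", "type": "string"},
            {"name": "symbol", "type": "string"},
            {"name": "quantity", "type": "int"},
            {"name": "price", "type": "double"},
        ],
    })

    records = [
        {"trade_id": "T-1", "symbol": "ABC", "quantity": 100, "price": 12.5},
        {"trade_id": "T-2", "symbol": "XYZ", "quantity": 50, "price": 7.25},
    ]

    # Serialize to a local Avro container file.
    with open("trades.avro", "wb") as out:
        writer(out, schema, records)

    # Read the records back to confirm the round trip.
    with open("trades.avro", "rb") as inp:
        for record in reader(inp):
            print(record)
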
223

Hadoop Data Analyst Lead Resume Examples & Samples

  • Familiar with a flavor of Hadoop (Cloudera a nice to have) and a common relational database (Oracle, SQL Server, DB2, MySQL)
  • Exposure to SAP
  • Provide leadership in establishing analytic environments required for structured, semi-structured and unstructured data
  • Qualifications: 3-9 years of experience, Bachelor’s degree
224

Hadoop Solutions Architect Resume Examples & Samples

  • Provides design recommendations based on long-term IT organization strategy
  • Develops application and custom integration solutions, generally for one business segment; solutions include enhancements and interfaces, functions and features
  • Uses a variety of platforms to provide automated systems applications to customers
  • Provides solid knowledge and skill regarding the integration of applications across the business segment
  • Determines specifications, then plans, designs and develops moderately complex software solutions, utilizing appropriate software engineering processes – either individually or in concert with project team
  • Will assist in resolving support problems
  • Recommends programming and development standards and procedures and programming architectures for code reuse
  • Has solid knowledge of state-of-the art programming languages and object-oriented approaches in designing, coding, testing and debugging programs
  • Understands and consistently applies the attributes and processes of current application development methodologies
  • Researches and maintains knowledge in emerging technologies and possible application to the business
  • Knowledge of BKFS’ products and services
  • In-depth knowledge of end-to-end systems development life cycles (including waterfall, iterative and other modern approaches to software development)
  • Ability to estimate work effort for project sub-plans or small projects and ensure the project is successfully completed
  • Positive outlook, strong work ethic, and responsive to internal and external customers and contacts
  • May require a thorough understanding of design patterns and their application
  • May require a thorough understanding of Model-View-Controller design patterns for web applications
  • May require a fluency in developing and understanding sequence diagrams, class models, etc
  • MQ / Topics
  • Oozie
  • Storm/Kafka
  • Working with XML/JSON/ORC/CSV/TXT formats
225

Hadoop / Hortonworks Admin Resume Examples & Samples

  • Strong Knowledge in Hadoop Architecture and its implementation
  • Manage Hadoop environments and perform Installation, administration and monitoring tasks
  • Strong understanding of best practices in maintaining medium to large scale Hadoop Clusters
  • Design and Maintain access and security administration
  • Design, Implement and Maintain backup and recovery strategies on Hadoop Clusters
  • Design, install, configure and maintain high availability
  • Perform Capacity Planning of Hadoop Cluster and provide recommendations to management to sustain business growth
  • Create Standard Operational Procedures and templates
  • Experience in mentoring other admins
  • Experience across the Hadoop ecosystem, including HDFS, Hive, YARN, Flume, Oozie, Cloudera Impala, ZooKeeper, Hue, Sqoop, Kafka, Storm, Spark and Spark Streaming, plus NoSQL database knowledge
  • Proactively identify opportunities to implement automation and monitoring solutions (see the sketch after this list)
  • Proficient in setting up and using Cloudera Manager as a monitoring and diagnostics tool and in identifying/resolving performance issues
  • Good knowledge of Windows/Linux/Solaris Operating systems and shell scripting
  • Strong desire to learn a variety of technologies and processes with a 'can do' attitude
  • Ability to read, write, speak and understand English
  • Ability to show judgment and initiative and to accomplish job duties
  • Ability to work with others to resolve problems, handle requests or situations
  • Ability to effectively consult with department managers and leaders
  • 8-10 years of hands-on experience in handling large-scale software development and integration projects
  • 6+ years of experience with Linux / Windows, with basic knowledge of Unix administration
  • 3+ years of experience administering Hadoop cluster environments and the tools ecosystem: Cloudera/Hortonworks, Sqoop, Pig, HDFS
  • Experience in Spark, Kerberos authorization / authentication and clear understanding of cluster security
  • Exposure to high availability configurations, Hadoop cluster connectivity and tuning, and Hadoop security configurations
  • Expertise in collaborating with application teams to install the operating system and Hadoop updates, patches, version upgrades when required
  • Experience working with Load balancers, firewalls, DMZ and TCP/IP protocols
  • Understanding of practices for security, support, backup and recovery
  • Experience with hardware selection and capacity planning
  • Experience in working with RDBMS and Java
  • Exposure to NoSQL databases like MongoDB, Cassandra etc
  • Certification in Hadoop Operations or Cassandra is desired
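
A minimal sketch of the kind of monitoring automation mentioned above: it parses the output of `hdfs dfsadmin -report` and flags the cluster when utilization crosses a threshold. It assumes the hdfs CLI is on PATH with sufficient privileges, and the 80% threshold is an arbitrary example, not a recommendation.

    #!/usr/bin/env python3
    """Flag high HDFS utilization by parsing `hdfs dfsadmin -report` (sketch)."""
    import re
    import subprocess

    THRESHOLD_PCT = 80.0  # example alerting threshold


    def cluster_used_pct():
        report = subprocess.run(
            ["hdfs", "dfsadmin", "-report"], capture_output=True, text=True, check=True
        ).stdout
        # The first "DFS Used%" line in the report is the cluster-wide summary.
        match = re.search(r"DFS Used%:\s*([\d.]+)%", report)
        if not match:
            raise RuntimeError("could not find 'DFS Used%' in dfsadmin report")
        return float(match.group(1))


    if __name__ == "__main__":
        used = cluster_used_pct()
        status = "WARN" if used >= THRESHOLD_PCT else "OK"
        print(f"{status}: cluster DFS used {used:.1f}% (threshold {THRESHOLD_PCT}%)")
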
226

Hadoop Data Scientist Resume Examples & Samples

  • Bachelor degree in Computer Science, Information Systems or related discipline; Or 6 years of prior equivalent work related experience in lieu of a degree
  • Working knowledge of statistics, programming and predictive modeling
  • Code writing abilities
  • Experience working in data mining or natural language processing
  • Mastery of statistics, machine learning, algorithms and advanced mathematics
  • Code writing in R, Python, Scala, SQL and Spark (1.6 and 2.0) for machine learning (see the sketch after this list)
  • Shows strong knowledge of basic and advanced prediction models
  • Data mining knowledge that spans a range of disciplines
  • 2+ years of hands-on development, installation & integration experience with Hadoop technologies
  • Experience securing Hadoop clusters with Kerberos & Active Directory
  • Data ingestion experience leveraging Sqoop and Oozie
  • Hands-on development experience with Apache technologies such as Hadoop, Spark, HBase, Hive, Pig, Solr, Sqoop, Kafka, Oozie, NiFi, etc
  • Hands-on development experience with one or more of Java, Python, Scala
  • Experience designing data queries against data in the HDFS environment using tools such as Apache Hive and Apache HBase
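
As a concrete example of the Spark machine-learning work listed above, the sketch below trains a simple logistic regression with spark.ml. The feature names and toy data are hypothetical; a real model would read from HDFS or Hive and include proper evaluation.

    """Toy spark.ml pipeline: assemble features and fit logistic regression (sketch)."""
    from pyspark.ml import Pipeline
    from pyspark.ml.classification import LogisticRegression
    from pyspark.ml.feature import VectorAssembler
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("ds-sketch").getOrCreate()

    # Hypothetical claims data: two numeric features and a binary label.
    train = spark.createDataFrame(
        [(52.0, 3.0, 1.0), (11.0, 1.0, 0.0), (78.0, 5.0, 1.0), (23.0, 0.0, 0.0)],
        ["claim_amount", "prior_claims", "label"],
    )

    assembler = VectorAssembler(
        inputCols=["claim_amount", "prior_claims"], outputCol="features"
    )
    lr = LogisticRegression(featuresCol="features", labelCol="label", maxIter=20)

    model = Pipeline(stages=[assembler, lr]).fit(train)
    model.transform(train).select("features", "label", "prediction").show()
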
227

Hadoop Technologies Resume Examples & Samples

  • Design and develop big data solutions using industry standard technologies
  • Develop services that make big data available in real-time for in-product applications
  • Serve as technical “go to” person for our core Hadoop technologies
  • Lead fast moving development teams using standard project methodologies
  • Lead by example, demonstrating best practices for unit testing, performance testing, capacity planning, documentation, monitoring, alerting, and incident response
  • Experience in Unix shell scripting, batch scheduling and version control tools
  • Ability to design custom solutions for customers who desire Big Data options to enhance their current technical limitations
  • Demonstrated industry leadership in the fields of database, data warehousing
  • Data Analysis- Input, understand, analyze and act on data
  • Business Owner Mindset- Operate with keen business knowledge, expense, risk & controls driven mindset
228

Hadoop Big Data Operations Engineer Resume Examples & Samples

  • Help create an innovative environment in which experimentation is welcomed and new solutions can be quickly implemented and iterated, while still maintaining a high level of quality
  • Set up highly available Hadoop clusters; understand how and when to scale, add and remove nodes, keep track of jobs, monitor critical parts of the cluster, and schedule and configure backups
  • Collaborate with peers on work estimation/planning and implementation of new versions/features of tools
229

Hadoop Infrastructure Engineering Lead Resume Examples & Samples

  • Responsibilities include administration of the Hadoop cluster, including but not limited to the following
  • Monitor Hadoop cluster for performance and capacity planning
  • Automate repetitive operational tasks
  • Manage and administer NoSQL Database systems
230

Think Big Principal Hadoop Admin Resume Examples & Samples

  • Windows Administration
  • Experience of Deploying and Maintaining Server Hardware at scale
  • Degree or equivalent professional experience
  • Excellent communications skills and experience in a Customer-facing role
  • Ability to quickly pick up and use new tools and technologies
  • Experience of leading technical teams and mentoring more junior staff members
  • Capable of working independently, managing time effectively and supporting multiple production environments simultaneously
231

Tech Lead-hadoop Resume Examples & Samples

  • Perform application, component and infrastructure design, database design and modeling and performance tuning activities in the project
  • Lead the tasks related to installation, maintenance, deployments and/or upgrade of the bank’s Hadoop components, related third-party software and applications across all platforms
  • Assist with the installation, maintenance, deployments and/or upgrade of the bank’s Hadoop components, related third-party software and applications across all platforms
  • Produce well-structured, high-quality design and maintainable code; produce unit tests, detailed specifications and documentation, and QA code that peers have written
  • Work closely with Architects and SMEs to understand project requirements, do estimations prepare high level and low level design
  • Expertise in creating technical design documents and giving walk-throughs to stakeholders; expertise in performance tuning and optimization techniques for ETL processes on the Big Data/Hadoop platform
  • Expertise in incident, problem and change management processes, with experience in at least one tool; Cloudera certification (or any other Hadoop technology)
  • Thorough understanding of the Hadoop architecture and good awareness of the different Hadoop toolsets; experience with Hive, Spark, Oozie, Cloudera Hue and Java–Hadoop integration
  • Strong hands-on experience with Unix/Linux shell scripting; around five years of experience on support projects; skills with configuration management tools (ClearCase preferred)
  • Expertise in performance tuning and optimization techniques for ETL processes on the Big Data/Hadoop platform; experience working with databases and SQL queries (see the sketch after this list)
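
A small sketch of the kind of Spark-side tuning referenced above: setting shuffle partitions and executor resources explicitly when building the session. The configuration values, column names and HDFS paths are illustrative assumptions only, not recommendations.

    """Illustrative Spark session tuning for an ETL job on YARN (sketch)."""
    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder.appName("etl-tuning-sketch")
        .config("spark.sql.shuffle.partitions", "400")   # match shuffle width to data volume
        .config("spark.executor.memory", "8g")           # per-executor heap
        .config("spark.executor.cores", "4")             # tasks per executor
        .config("spark.dynamicAllocation.enabled", "true")
        .getOrCreate()
    )

    # Typical ETL step: read staged data, prune early, write partitioned output.
    df = spark.read.parquet("/data/staging/transactions")   # placeholder HDFS path
    (
        df.filter("txn_date >= '2017-01-01'")                # placeholder column/filter
          .repartition("txn_date")
          .write.mode("overwrite")
          .partitionBy("txn_date")
          .parquet("/data/curated/transactions")             # placeholder HDFS path
    )
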
232

Infrastructure Developer Lead-hadoop Technologies Resume Examples & Samples

  • Implement and develop Cloudera Hadoop data driven platform with advanced capabilities to meet business and infrastructure needs
  • Leads the discovery phase, design and development of medium to large scale complex projects with agile approach and security standards
  • Leads and participates in proof-of-concept for prototypes & validate ideas, automating platform installation, configuration and operations processes and tasks (Site reliability engineering) of global events data platform
  • Contributes to continuous improvement by providing optimized practices, efficiency practices in current core services (platform, and infrastructure) areas
  • Work with offshore team and provides development opportunities for associates
  • Supporting change management and operations support for security events platform with ITSM/ITIL standards
  • 10+ years of work experience within one or more IT organizations. Prior work experience in technology engineering and development is a plus
  • 5+ years of advanced Java/Python Development experience (spring boot/python, server-side components preferred)
  • 2+ years of Hadoop ecosystem (HDFS, Hbase, Spark, Zookeeper, Impala, Flume, Parquet, Avro) experience for high volume based platforms and scalable distributed systems
  • Experience working with data model, frameworks and open source software, Restful API design and development, and software design patterns
  • Experience with Agile/Scrum methodologies, FDD (Feature data driven), TDD (Test Driven Development), Elastic search (ELK), Automation of SRE for Hadoop technologies, Cloudera, Kerberos, Encryption, Performance tuning, and CI/CD (Continuous integration & deployment)
  • Capable of full lifecycle development: user requirements, user stories, development with a team and individually, testing and implementation
  • Knowledgeable in technology infrastructure stacks a plus; including: Windows and Linux Operating systems, Network (TCP/IP), Storage, Virtualization, DNS/DHCP, Active Directory/LDAP, cloud, Source control/Git, ALM tools (Confluence, Jira), API (Swagger, Gateway), Automation (Ansible/Puppet)
  • Production Implementation Experience in projects with considerable data size (in Petabytes PB) and complexity
  • Strong verbal and written communication skills with the ability to be highly effective with both technical and business partners. Ability to operate effectively and independently in a dynamic, fluid environment
233

Hadoop Migration Project Manager Resume Examples & Samples

  • 10+ years of experience in IT project management, specifically in managing medium-to-large-scale software application development projects
  • Effectively coordinate resources and assignments among project assignees
  • Manage and monitor project progress within the constraints of the project management plan, schedule, budget and resources. Proactively identify risks and issues affecting project schedules and objectives and appropriately escalate these issues, with recommendations, to senior managers. Effectively identify change and use appropriate protocols to manage and communicate this change effectively
  • Collect, maintain and distribute project status meeting minutes to stakeholders
  • Provide routine status reports and briefings to project team, customers and senior managers. Maintain ongoing and effective communications among all project stakeholders, both business and technical
  • Experience in developing and maintaining detailed project schedules using Microsoft Project Required
  • Must have Solid understanding of software development lifecycle methodologies
  • Excellent written and oral communication skills & Excellent listening and interpersonal skills
  • Develop strong and collaborative relationships with customers to achieve positive project outcomes
  • Certification: Manages engagements with complexity at EM level 1 or higher. Is, if possible, certified at this level and in Project Financials, KPI & Reporting
  • Should be experienced in Negotiation, Vendor Management, Risk Management, Continuous (Service) Improvement
  • Should have progression skills in Quality Management
234

Hadoop Delivery Architect Resume Examples & Samples

  • 5-8 years of experience in the following
  • Must have experience with RDBMS, Big Data -Hadoop Ecosystem HDP Platform -- Sqoop, Flume, Pig, Hive, Hbase and Spark
  • Must have experience with data modeling tools (ERWIN or ER Studio)
  • Must have experience with Unix and Windows operating systems
  • Must have experience with ETL Tools (Informatica, SSIS, etc.)
  • Must have experience with Reporting, Analytic and OLAP tools (Business Objects)
  • You define the structure of the system, its interfaces, and the principles that guide its organization, software design and implementation
  • You are responsible for the management and mitigation of technical risks, ensuring that the Delivery services can be realistically delivered by the underlying technology components
  • Should be experienced in technology awareness & leveraging and innovation & growth capability
235

Hadoop DBA Resume Examples & Samples

  • At least four, typically six or more, years of experience in systems analysis and application program development, or an equivalent combination of education and work experience
  • Requires a broad knowledge of the client area's functions and systems, and application program development technological alternatives
  • Requires experience with state of the art application development support software packages, proficiency in at least two higher level programming languages, some management capabilities, strong judgment and communication skills, and the ability to work effectively with client and IT management and staff
236

Hadoop Analyst Resume Examples & Samples

  • Lead analytics projects working as the BI liaison to other business units in Bell
  • Work in an iterative environment to solve extremely challenging business problems
  • Drive BI self-serve with other business units in Bell using tools like Microstrategy and Tableau
  • Documentation of all analytical processes created
  • Opportunity to be cross-trained in other complementary areas in Hadoop
  • Along with the rest of the team, actively research and share learning/advancements in the Hadoop space, especially related to analytics
  • Technical subject matter expertise in any of the following areas: Statistics, Graph Theory
  • Knowledge of various advanced analytical techniques with the ability to apply these to solve real business problems
237

Hadoop Database Analyst Resume Examples & Samples

  • Analyzes data requirements, application and processing architectures, data dictionaries, and database schema(s)
  • Designs, develops, amends, optimizes, and certifies database schema design to meet system(s) requirements
  • Gathers, analyzes, and normalizes relevant information related to, and from business processes, functions, and operations to evaluate data credibility and determine relevance and meaning
  • Develops database and warehousing designs across multiple platforms and computing environments
  • Develops an overall data architecture that supports the information needs of the business in a flexible but secure environment
  • Experience in database architecture, data modeling and schema design
  • Experience in orchestrating the coordination of data related activities to ensure on-time delivery of data solutions to support business capability requirements including data activity planning, risk mitigation, issue resolution and design negotiation
  • Ability to design effective management of reference data
  • Familiar with data standards/procedures and data governance and how the governance and data quality policies can be implemented in data integration projects
  • Experience in Oracle Data Administration and/or Oracle Application Development
  • Experience in SQL or PL/SQL (or comparable language)
  • Experience in large scale OLTP and DSS database deployments
  • Experience in utilizing the performance tuning tools
  • Experience in the design and modeling of database solutions using one or more of the following: Oracle, SQL Server, DB2, any other relational database management system
  • Hadoop experience preferred
  • Experience in normalization/denormalization techniques to optimize computational efficiency
  • Expert level knowledge of SQL
  • Experience with NoSQL modeling (HBASE, MongoDB, etc..) preferred
  • Have a Java background, experience working within a Data Warehousing/Business Intelligence/Data Analytics group, and hands-on experience with Hadoop
  • Design data transformation and file processing functions
  • Help design map reduce programs and UDFs for Pig and Hive in Java
  • Define efficient tables/views in Hive or other relevant scripting language
  • Help design optimized queries against RDBMS’s or Hadoop file systems
  • Help with QA on ETL/ELT
  • Have experience with Agile development methodologies
  • Work with support teams in resolving operational & performance issues
238

Hadoop Data Business Analyst Resume Examples & Samples

  • Provides expertise for multiple areas of the business through analysis and understanding of business needs; applies a broad knowledge of programs, policies, and procedures in a business area or technical field
  • Provides business knowledge and support for resolving technology issues across multiple areas of the business
  • Uses appropriate tools and techniques to elicit and define requirements that address more complex business processes or projects of moderate to high complexity. Analyzes and translates business, data, functional, and user requirements and/or business rules into analytic and reporting requirements, data models and metadata deliverables by leading and guiding discussions internally with the project team and externally with business and technology partners
  • Analyzes requirements and create or contribute to functional designs, leveraging more advanced technical skills; Assist with design application prototypes
  • Constructs and executes complex SQL queries and review data profiling results to discover semantics, patterns and value domains of data under analysis
  • Performs complex data analysis utilizing multiple techniques to determine, document and communicate existing and potential future data relationships, profiles and quality levels; uncover anomalies and variances in data content from expected results
  • Assists with project initiation or planning for projects of moderate complexity. Plans, develops and leads information gathering sessions
  • Consults on the functional test plans and conditions for assigned applications or projects of moderate complexity; ensure that the test plans cover the testing of all defined functional requirements; supports data analysis and validation from functional tests
  • Participates in design reviews, application design impacts between concurrent releases, and technical walk-throughs
  • Updates and maintains all logical (conceptual, relational, subject-level and dimensional) data models and other metadata deliverables at multiple levels of detail and functionality for a specific business subject area
  • Plans, develops and facilitates inspection sessions to validate that modeled data and related metadata deliverables match and satisfy business data requirements
  • Conducts reviews of physical data model and related technical detail with database administrators and technical developers as part of a single project's development life cycle
  • Provides informal training to Band B and Band C1 members on the team. Understands, applies, teaches others and drive improvements in the use of corporate metadata development tools and processes
  • Executes the change control process to manage changes to baselined deliverables and scope for projects of moderate to high complexity
  • Develops and keeps current an approach to data management across multiple business AoRs
  • Applies knowledge of tools and processes to drive data management for a business AoR
  • Creates complex technical requirements through analyzing business and functional requirements
  • Education: College degree or equivalent experience; Post secondary degree in management / technology or related field or a combination of related experience and education a plus; 5+ years working in insurance, project management, and/or technology preferred
  • Experienced in writing technical requirements
  • Hands-on SQL and DB querying exposure preferred
  • Data Profiling a plus
  • Exposure to Hadoop Eco-system a big plus
  • Extensive experience working with project team following an agile scrum a must; exposure / experience to Waterfall software development lifecycle a plus
  • Advanced insurance industry / business knowledge
  • Proven ability to be flexible and work hard, both independently and in a team environment with changing priorities
  • Willingness to work outside of normal business hours
  • Excellent English oral / written communication skills
239

Hadoop Software Engineer for the SAP Big Data Vora Team Resume Examples & Samples

  • Data storage technologies (HDFS, S3, Swift)
  • Cloud infrastructures and virtualization technology
  • Fault tolerance and security
  • A solid foundation in computer science with strong competencies in data structure, algorithms, and software design
  • Expert skills in one or more of the following languages: C++, Java, Scala, Python, R, Lua, Golang
  • A deep understanding of one or more of the following areas: Hadoop, Spark, HDFS, Hive, Yarn, Flume, Storm, Kafka, ActiveMQ, Sqoop, MapReduce
  • Experience with state-of-the-art development tools (Git, Gerrit, Bugzilla, CMake, …) and the Apache Hadoop ecosystem
  • Experience with NoSQL Databases and Technologies like Cassandra, MongoDB, Graph Databases
  • Knowledge in design patterns and object oriented programming
  • Contributions to open source projects in the Hadoop ecosystem (especially Spark) are a big plus
240

Junior Hadoop Admin Resume Examples & Samples

  • Bachelor’s degree or higher in Computer Science or a related field
  • Good understanding of distributed computing and big data architectures
  • Experience (1-2 years) in Unix/Linux (RHEL) administration and shell scripting
  • Proficient in at least one programming language like Python, Go, Java etc
  • Experience working with public clouds like Azure, AWS etc
  • DevOps: Appreciates the CI and CD model and always builds to ease consumption and monitoring of the system. Experience with Maven (or Gradle or SBT) and Git preferred
  • Personal qualities such as creativity, tenacity, curiosity, and passion for deep technical excellence
  • Familiarity with configuration management tools (Ansible, Chef)
  • Familiarity with server virtualization technologies (e.g. VMware), networking concepts (VLANs, routing, firewalls, subnetting), logical volume management, NFS, SNMP, sftp, and SMTP
241

Hadoop Data Movement Expert Resume Examples & Samples

  • Provide automation on diverse data feeds into our lake
  • Review feeds design
  • Check for PII or PCI data items and take the actions needed to ensure compliance with standards (see the sketch after this list)
  • Industrialization and orchestration of the loads
  • Interface with third parties to setup secure feed connections
  • ETL flows between DBs and the Lake
  • External interfaces via API connections
  • Ensure performance of the data sets stored in the lake
  • Perform performance analysis on a regular basis
  • Assist the Hadoop specialist with tuning activities (shredding & sharding)
  • Take proactive actions to meet customer’s performance expectations
  • Effectively communicate with the rest of the Scrum team (Product Owner, Developer, QA, and Business Analysts)
  • Provide update on activities as well as known impediment/issues
  • Able to moderate conference calls
  • Written technical documentation authoring skills (user manuals, how to)
  • MS Office, PowerPoint, Outlook
  • Hadoop data movement toolset
  • Knowledge of the Cazena product or Cloudera distribution
  • Experienced with Sqoop, Hive and Spark
  • Experienced with Oozie and Hue
  • ETL tools (Informatica or equivalent)
  • RESTful APIs for provisioning data into a big data system
  • Experience working with sensitive data (PCI, PII) and how to secure them for data analytics usage
  • 3+ years of experience industrializing data feeds into a Hadoop cluster in a managed way
  • Multi-cultural experience on geographically distributed projects
  • Travel Industry related experience is a plus
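
A minimal sketch of the feed-industrialization idea above: wrap a Sqoop import in Python and drop known PII columns before they land in the lake. The JDBC URL, credentials, table and column names are placeholders, and a real pipeline would pull the PII list from a governance catalog rather than hard-coding it.

    #!/usr/bin/env python3
    """Wrap a Sqoop import and exclude known PII columns (illustrative sketch).

    Assumes the sqoop CLI is installed and can reach the source database.
    """
    import subprocess

    PII_COLUMNS = {"ssn", "date_of_birth", "email"}   # would come from a governance catalog
    ALL_COLUMNS = ["customer_id", "ssn", "email", "country", "signup_date"]


    def import_table(table, target_dir):
        safe_columns = [c for c in ALL_COLUMNS if c not in PII_COLUMNS]
        cmd = [
            "sqoop", "import",
            "--connect", "jdbc:oracle:thin:@//dbhost:1521/ORCL",  # placeholder JDBC URL
            "--username", "etl_user",
            "--password-file", "/user/etl/.sqoop.pwd",            # placeholder HDFS path
            "--table", table,
            "--columns", ",".join(safe_columns),
            "--target-dir", target_dir,
            "--as-parquetfile",
            "--num-mappers", "4",
        ]
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)


    if __name__ == "__main__":
        import_table("CUSTOMERS", "/data/lake/raw/customers")
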
242

Hadoop Database Administrator Resume Examples & Samples

  • Install and configure multiple RDBMS’s and components
  • Install and deploy databases
  • Create backup and disaster recovery plans
  • Test disaster recovery scenarios
  • Manage security
  • Predictive Analytics for resource management
  • Works on projects with development and engineering teams
  • Provide on-call support
  • Perform patching and upgrades to all database platforms
  • Experience/Familiarity with multiple Hadoop components
  • Experience /Familiarity with multiple NoSQL engines
  • Experience with multiple RDBMS’s (MySQL, PostgreSQL, SQL Server)
  • Experience with non-database open source technologies a plus
243

Hadoop Platform Engineer Resume Examples & Samples

  • Responsible for Hadoop cluster availability, security and performance
  • Responsible for implementation and ongoing administration of Hadoop clusters
  • Collaborate with systems engineering teams to propose and deploy new hardware and software required for Hadoop and related environments
  • Troubleshoot capacity and performance bottlenecks and related details of cluster memory, CPU, OS, storage and networks (see the sketch after this list)
  • Maintain and develop open source configuration management and deployment tools such as Puppet or Chef and Linux scripting
  • Work with data delivery teams to administer Hadoop users
  • Work with security, networks and other teams to monitor and implement secure practices and assure secure access to data
  • Detailed knowledge of Hadoop ecosystem from an administration perspective, ideally including Spark
  • Good knowledge of Linux (RHEL)
  • Familiarity with open source configuration management and deployment tools such as Puppet or Chef and Linux scripting
  • Kerberos implementation and troubleshooting
  • A good knowledge of Python and Java is a plus
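
As a small illustration of the capacity-troubleshooting bullet above, this sketch pulls cluster metrics from the standard YARN ResourceManager REST endpoint and prints memory headroom. The ResourceManager host is a placeholder, and a Kerberized cluster would additionally need SPNEGO authentication.

    #!/usr/bin/env python3
    """Check cluster memory headroom via the YARN ResourceManager REST API (sketch)."""
    import json
    from urllib.request import urlopen

    RM_URL = "http://resourcemanager.example.com:8088"  # placeholder RM address


    def cluster_metrics():
        # /ws/v1/cluster/metrics is the standard ResourceManager metrics endpoint.
        with urlopen(f"{RM_URL}/ws/v1/cluster/metrics") as resp:
            return json.load(resp)["clusterMetrics"]


    if __name__ == "__main__":
        m = cluster_metrics()
        total_mb = m["totalMB"]
        available_mb = m["availableMB"]
        print(f"apps running: {m['appsRunning']}, pending: {m['appsPending']}")
        print(f"memory: {available_mb} MB free of {total_mb} MB "
              f"({100.0 * available_mb / total_mb:.1f}% headroom)")
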