Big Data-hadoop Resume Samples

The Guide To Resume Tailoring

Guide the recruiter to the conclusion that you are the best candidate for the big data-hadoop job. It’s actually very simple. Tailor your resume by picking relevant responsibilities from the examples below and then add your accomplishments. This way, you can position yourself in the best way to get hired.

Craft your perfect resume by picking job responsibilities written by professional recruiters

Pick from the thousands of curated job responsibilities used by the leading companies

Tailor your resume & cover letter with wording that best fits each job you apply for

Resume Builder

Create a Resume in Minutes with Professional Resume Templates

CHOOSE THE BEST TEMPLATE - Choose from 15 Leading Templates. No need to think about design details.
USE PRE-WRITTEN BULLET POINTS - Select from thousands of pre-written bullet points.
SAVE YOUR DOCUMENTS IN PDF FILES - Instantly download in PDF format or share a custom link.

Gwen Collier
338 Eldora Glen
Detroit, MI
Phone: +1 (555) 403 1646
Experience

Big Data / Hadoop Specialist
Runolfsdottir-Roberts
Phoenix, AZ
  • Responding to automated alerts on the health of systems
  • Performing routine audits of systems for the purpose of preventative maintenance of applications and reporting on their status
  • Executing scheduled or unscheduled tasks relating to operational maintenance and monitoring of applications
  • Analysis, troubleshooting, and diagnosis of incidents relating to off-the-shelf and proprietary applications and/or platforms
  • Effective call management including logging, monitoring / updating, prioritizing and resolving calls in a timely fashion
  • Ensure Big Data practices integrate into overall data architectures and data management principles (e.g. data governance, data security, metadata, data quality)
  • Assist in the development of comprehensive and strategic business cases used at management and executive levels for funding and scoping decisions on Big Data solutions
Big Data / Hadoop Development Lead
Botsford-Gerhold
Chicago, IL
  • Perform ongoing capacity management forecasts including timing and budget considerations
  • Assist with developing and maintaining the system runbooks
  • Work with Linux Server Administration team on administering the server hardware and operating system
  • Provide direction to junior programmers
  • Handle technical documentation, architecture diagram, data flow diagram, server configuration diagram creation
  • Complete Risk questionnaires for the application
  • Lead the build of the Hadoop platform and of Java applications
Senior Big Data Hadoop Operations Administrator
McKenzie, Gusikowski and Kihn
Houston, TX (present)
  • Performance tuning applications and systems for high volume throughput
  • Ensures solutions developed adhere to security and data privacy policies
  • Familiarity with version control, job scheduling and configuration management tools such as Github, Puppet, UC4
  • Intermediate Java programming with frameworks, Scrum Agile, SOA, and software architecture
  • Lead investigations and proof of concepts as Big Data technology evolves
  • Lead troubleshooting on Hadoop technologies including HDFS, MapReduce2, YARN, Hive, Pig, Flume, HBase, Cassandra, Accumulo, Tez, Sqoop, Zookeeper, Spark, Kafka, and Storm
  • Install and maintain platform level Hadoop infrastructure including the additional tools like AtScale, Actian and R
Education

Bachelor’s Degree in Computer Science
University of California, Berkeley
Skills
  • Microsoft Windows Server
  • Linux RedHat/CentOS Server
  • SQL Server
  • Hadoop HortonWorks Technology Stack
  • Ambari, Atlas, HDFS, HiveServer2, Tez, WebHDFS, Knox, Pig, MapReduce, Ranger, YARN, ZooKeeper, Spark, HBase, Kafka, Storm
  • Microsoft R Server
  • Python
  • CRAN
  • Packages
  • Ansible
25 Big Data-hadoop resume templates

1

Software Engineer, Big Data Hadoop Resume Examples & Samples

  • 2+ years’ experience in database development, reporting, and analytics
  • 2+ years’ experience in web service or middle tier development of data driven apps
  • Knowledge and experience in “big-data” technologies such as Hadoop, Hive, Impala
  • Ability to work in a fast paced, evolving, growing and dynamic environment
  • Ability to work with and influence others
  • Willingness to explore new ideas and have a passion to make them happen
2

Big Data / Hadoop Development Lead Resume Examples & Samples

  • Lead the build of the Hadoop platform and of Java applications
  • Support applications until they are handed over to Production Operations
  • Provide direction to junior programmers
  • Handle technical documentation, architecture diagram, data flow diagram, server configuration diagram creation
  • Complete Risk questionnaires for the application
  • Drive project team to meet planned dates and deliverables
  • Work with other big data developers, designing scalable, supportable infrastructure
  • Work with Linux Server Administration team on administering the server hardware and operating system
  • Assist with developing and maintaining the system runbooks
  • BS Degree in Computer Science/Engineering preferred
  • 2+ years experience in Hadoop required
  • 4+ years of Java development experience preferred
  • 4+ years experience in ETL with tools like Informatica, Data Stage or Ab-initio preferred
  • Exposure to Greenplum an advantage
  • Exposure to SQL/Shell and procedural languages like PL/SQL, C, C++
  • Understanding of storage, filesystems, disks, mounts, and NFS
  • Experience in financial services industry preferred
  • Experience leading technology initiatives
3

Big Data / Hadoop System Administrator Resume Examples & Samples

  • To succeed in this role, the candidate should have a broad set of technology skills to be able to build and support robust Hadoop solutions for big data problems and learn quickly as the industry grows
  • The candidate should have 2+ years cluster management and support experience in the Big Data/Hadoop space
  • A record of working effectively with application and infrastructure teams; and executing on multiple projects in varying stages of the development life cycle is important
  • The candidate must also demonstrate the ability to learn new technologies quickly and be able to transfer knowledge to others
  • Possess strong problem solving skills
  • Ability to work in a fast paced, team atmosphere
  • Should work effectively with a minimum of direction
  • Be able to grasp the problem at hand and recognize appropriate approach, tools and technologies to solve it
  • Work within Northern Trust’s Change Management process
4

Software Engineer, Big Data Hadoop Engineer Resume Examples & Samples

  • Work with stakeholders to understand their information needs and translate these into business functional and technical requirements and solutions
  • Design and develop service oriented applications that encompass both ETL processes in the backend and the web service presentation layer that are performant and scalable
  • Analyze complex data systems and document data elements, data flow, relationships and dependencies to contribute to conceptual, logical and physical data models
  • Perform thorough testing and validation to ensure the accuracy of reports and transformations
  • Ensure data quality and data governance, create/maintain data dictionary and related metadata
  • Collaborate in the planning, design, development, test, and deployment of new BI solutions
  • Provide production support for new and existing BI solutions. Resolve issues with reports and data as identified by users
  • Ability to understand and effectively communicate at both a business and technical level
  • 4+ years experience in database development, reporting, and analytics
  • 2+ years experience in web service or middle tier development of data driven apps
  • Experience in “big-data” technologies particularly Hadoop, Hive, and other open source breeds
  • Experience with a BI reporting framework (such as SSRS, Tableau, Business Objects, etc.) is a plus
  • Good understanding of the challenge and various solutions in handling large data volume, whether in terms of complex SQL queries and stored procedures, or brute-force caching, or paradigms of parallel processing
  • Able to find creative solutions and move quickly to deliver them
  • Strong ability to collaborate with team members throughout the process and consult with other project teams on the design and use of enterprise data
  • Good written and oral communication skills and a desire to educate co-workers and business users about our data and how to access it
  • Bachelor's Degree and 5+ years experience OR Master's Degree and 2+ years experience
5

Senior Software Engineer, Big Data Hadoop Resume Examples & Samples

  • 4+ years’ experience in database development, reporting, and analytics
  • 4+ years’ experience in web service or middle tier development of data driven apps
  • Experience and ability to demonstrate advanced proficiency, in writing complex SQL queries and stored procedures
6

VP, Digital Analytics, Big Data & Hadoop Resume Examples & Samples

  • Some statistical analytic experience, including controlled experimental design and simple modeling; experience with more advanced modeling and machine learning helpful
  • Modern web and digital analytics including reporting with SiteCatalyst, tagging processes including marketing/campaign coding, digital marketing, and site performance metrics
  • Online marketing from branded display to direct response channels
  • Some experience with DMPs and modern online data desirable
  • Experience with attribution approaches desirable
  • Online experiences across social, mobile, and other current advanced marketing and usage trends
  • Some of the technology behind web pages and digital capabilities, including cookies, HTML, and JavaScript
  • Databases including some SQL experience
  • Understanding of data challenges: digital data differences, QA, database match, record identifiers/keys, import/export big datasets, file formats, usage rights and compliance
  • Understanding of measuring and analyzing online marketing campaigns including: natural and paid search, email, affiliate, display and social
  • Understanding of website optimization tools for A/B and multivariate testing, e.g. Google Website Optimizer, Visual Website Optimizer, Optimize, Maximizer, etc
  • Some analytic systems including SAS, R, SPSS, etc
  • Must have 5+ years of experience working on digital analytics either on site-side or on digital marketing measurement
  • Can be client side or agency, but preferably for consumer experiences more so than B2B
  • Must be able to work and build long term relationships across cross-functional teams involving technical, design, marketing, and business leaders
  • Must be able to handle Excel, PowerPoint, and Word
  • Should be comfortable presenting work to senior leaders
7

Senior Big Data Hadoop Engineer Resume Examples & Samples

  • Document solutions for long term support
  • Travel to client site as needed
  • Assist client support team with Application issue resolution
  • Gathers technical requirements and translates requirements into hardware and software capabilities (Technical)
  • Communicates technical architecture direction and coordinates technical proof of concept analyses (Technical)
  • Assists in solving technical and performance problems within the technical infrastructure (Technical)
  • Develops required work products for technology, application, and data domains of change (Methodology)
  • Leads a team of 5-8 resources (company, client and third-party resources), in area of expertise, to conclusion of a project phase (Management)
  • Participates in providing gap analyses, from a technical perspective, highlighting current state, future state, client needs, best practices and competition (Business)
  • Interacts with confidence and ease when interacting with middle- and senior-level managers (client and CSC); uses complex strategies like indirect influence to build consensus and support (Relationship Management)
  • Assists sales staff in qualifying leads and generating proposals (Leverage)
  • 2-3 years’ experience with Hadoop suite of applications
  • Solid understanding of Hadoop architectures
  • Should have 1+ years’ experience in Hadoop ecosystem setup/administration
  • Should have at least one certification from either Hortonworks or Cloudera
8

Senior Big Data Hadoop Engineer Resume Examples & Samples

  • 5+ years’ experience in database development, reporting, and analytics
  • 5+ years’ experience in web service or middle tier development of data driven apps
  • Strong analytical skills with a data driven approach and ability to measure impact of project
9

Big Data Hadoop Engineer Resume Examples & Samples

  • Strong knowledge and experience in “big-data” technologies such as Hadoop, Hive, Impala
  • Strong experience and ability to demonstrate advanced proficiency, in writing complex SQL queries and stored procedures
  • Data science experience with predictive modeling or machine learning a plus
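The "complex SQL queries" requirement above can be sketched locally. Below is a minimal, self-contained example using Python's built-in sqlite3 as a stand-in for Hive or Impala; the table, columns, and data are hypothetical and exist only to illustrate the aggregate-plus-filter query shape a reporting layer might run:

```python
import sqlite3

# Hypothetical click-stream table; in production this would be a Hive/Impala
# table over HDFS, but sqlite3 lets us sketch the query logic locally.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE events (user_id TEXT, page TEXT, dur_sec INTEGER);
INSERT INTO events VALUES
  ('u1', 'home', 5), ('u1', 'search', 30),
  ('u2', 'home', 8), ('u2', 'checkout', 120),
  ('u3', 'home', 3);
""")

# Total time on site per user, keeping only "engaged" users (> 10 seconds).
rows = conn.execute("""
    SELECT user_id, SUM(dur_sec) AS total_sec, COUNT(*) AS n_events
    FROM events
    GROUP BY user_id
    HAVING SUM(dur_sec) > 10
    ORDER BY total_sec DESC
""").fetchall()
print(rows)  # [('u2', 128, 2), ('u1', 35, 2)]
```

The same GROUP BY / HAVING pattern carries over to HiveQL and Impala SQL largely unchanged.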
10

Big Data-hadoop Solution Architect Resume Examples & Samples

  • Use your strong natural leader capabilities to build and grow customer relationships at all levels
  • Engage clients, capture business requirements and translate them into technical requirements
  • Design, and document HLD (High Level Design) and decompose them into LLD (Low Level Design)
  • Lead the development of technical solutions and take accountability for technical validity of the solutions including, testing, installation and integration
  • Provide leadership and direction during the full software development lifecycle
  • Manage solutions with multi-vendor Big Data and Analytics products for all types of service providers including wireless, wire-line, cable, or utilities
  • BS or MS in Computer Science, or Computer Engineering or acceptable equivalent
  • Over 10 years of IT experience designing and integrating EDW, Business Intelligence, and/or Big Data Analytics platforms
  • 3+ years’ working hands-on with Production-level Big Data and Analytics solutions
  • 3+ years’ experience with major Hadoop distributions
  • 1+ years’ experience with MapR Hadoop distributions
11

Big Data Hadoop Architect Resume Examples & Samples

  • Ensures programs and platforms are envisioned, designed, developed, and implemented
  • Tracks and documents requirements for enterprise development projects and enhancements
  • Monitors current and future trends, technology and information that will positively affect organizational projects; applies and integrates emerging technological trends to new and existing systems architecture. Mentors team members in relevant technologies and implementation architecture
  • Contributes to the overall system implementation strategy for the enterprise and participates in appropriate forums, meetings, presentations required to shape the enterprise Hadoop architecture
  • Gathers and understands client needs, identifying key areas where technology leverage can improve business processes; defines architectural approaches and develops technology proofs. Communicates technology direction
  • Monitors the project lifecycle from intake through delivery. Ensures the entire solution design is complete and consistent
  • Develops and communicates system/subsystem architecture. Develops clear system requirements for component subsystems
  • Contributes to and supports effort to further build intellectual property via patents
12

Big Data / Hadoop Specialist Resume Examples & Samples

  • Analysis, troubleshooting, and diagnosis of incidents relating to off-the-shelf and proprietary applications and/or platforms
  • Determination of root cause of incidents (configuration vs. defect)
  • Liaise with appropriate teams for the development of corrective actions or viable workarounds to resolve incidents
  • Installation of applications in supported environments
  • Deployment of application upgrades and fixes
  • Executing scheduled or unscheduled tasks relating to operational maintenance and monitoring of applications
  • Adhere to E&Y and ITIL guidelines for Incident, Problem and Change Management
  • Effective call management including logging, monitoring / updating, prioritizing and resolving calls in a timely fashion
13

Senior Big Data Hadoop Operations Administrator Resume Examples & Samples

  • Install and maintain platform level Hadoop infrastructure including the additional tools like AtScale, Actian and R
  • Lead troubleshooting on Hadoop technologies including HDFS, MapReduce2, YARN, Hive, Pig, Flume, HBase, Cassandra, Accumulo, Tez, Sqoop, Zookeeper, Spark, Kafka, and Storm
  • Maintain clear documentation to help increase overall team productivity
  • 4+ years of administrator experience on Big Data Hadoop Ecosystem and components (Sqoop, Hive, Pig, Flume, etc.)
  • Intermediate Java programming with frameworks, Scrum Agile, SOA, and software architecture
  • In depth understanding of system level resource consumption (memory, CPU, OS, storage, and networking data), and the Linux commands such as sar and netstat
  • Experience in administering high performance and large Hadoop clusters
  • Familiarity with version control, job scheduling and configuration management tools such as Github, Puppet, UC4
  • Ability to lead and take ownership of projects
  • Knowledge of NoSQL platforms
  • Cloudera Hadoop Certified
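The "system level resource consumption" bullet above is the kind of task an operations administrator often scripts. Here is a hedged sketch that parses /proc/meminfo-style output into a memory-utilisation figure; the sample text is made up, and a real script would read the live file on each cluster node:

```python
def parse_meminfo(text):
    """Parse /proc/meminfo-style "Key:  value kB" lines into a dict of kB values."""
    info = {}
    for line in text.splitlines():
        if ":" in line:
            key, _, rest = line.partition(":")
            fields = rest.split()
            if fields and fields[0].isdigit():
                info[key.strip()] = int(fields[0])
    return info

def mem_used_pct(info):
    """Percent of physical memory in use, ignoring reclaimable cache."""
    total, avail = info["MemTotal"], info["MemAvailable"]
    return round(100 * (total - avail) / total, 1)

# Made-up sample; on a node you would read open("/proc/meminfo").read() instead.
sample = """MemTotal:       16384000 kB
MemFree:         2048000 kB
MemAvailable:    8192000 kB
"""
info = parse_meminfo(sample)
print(mem_used_pct(info))  # 50.0
```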
14

Cate Big Data Hadoop Engineer Resume Examples & Samples

  • 5+ years overall IT experience
  • 2+ years of experience with Big Data solutions and techniques
  • 2+ years Hadoop application infrastructure engineering and development methodology background
  • Experience with Cloudera distribution (CDH) and Cloudera Manager is preferred
  • Advanced experience with HDFS, MapReduce, Hive, HBase, ZooKeeper, Impala, Pig, and Flume
  • Experience installing, troubleshooting, and tuning the Hadoop ecosystem
  • Experience with Clojure, Cascading, AVRO, and JSON
  • Experience with Apache Mahout, SOLR
  • Experience with full Hadoop SDLC deployments with associated administration and maintenance functions
  • Experience with designing application solutions that make use of enterprise infrastructure components such as storage, load-balancers, 3-DNS, LAN/WAN, and DNS
  • Experience with concepts such as high-availability, redundant system design, disaster recovery and seamless failover
  • Overall knowledge of Big Data technology trends, Big Data vendors and products
  • Good interpersonal skills and excellent communication skills - written and spoken English
  • Able to interact with client projects in cross-functional teams
  • Ability to create documents of high quality. Ability to work in a structured environment and follow procedures, processes and policies
  • Self-starter who works with minimal supervision. Ability to work in a team of diverse skill sets and geographies
  • Exposure to Citigroup internal standards, policies and procedures is a plus (does not apply to external candidates)
15

Lead Big Data / Hadoop Application Developer Resume Examples & Samples

  • Develop or review development of test protocols for testing application before user acceptance. Review test results and direct further development
  • 8+ years of hands on experience and strong and deep knowledge of Java application development
  • 6+ years of hands on experience in LINUX, Java/J2EE, SOA and Oracle platforms
  • Experience processing large amounts of structured and unstructured data. MapReduce experience is a plus
  • 1–4 years’ experience building and coding applications using Hadoop components – HDFS, HBase, Hive, Sqoop, Kafka, Storm, etc
  • 1–4 years’ experience coding Java MapReduce, Python, Pig programming, Hadoop Streaming and HiveQL
  • 1–4 years’ experience implementing relational data models
  • Minimum 2 years’ work experience with, or exposure to, traditional ETL tools & RDBMS databases
  • Minimum 1 year experience developing REST web services
  • Experience leading and managing large scale, complex applications with high performance needs
  • Vendor management experience leveraging staff augmentation and/or outcome based project delivery models and statement of work planning and incremental demand forecasting
  • Experience managing on-site and off-site staff and demonstrated ability to collaborate and influence others to ensure timely and effective completion of project tasks
16

Solution Engineer Big Data / Hadoop Resume Examples & Samples

  • Supporting and independently completing project tasks
  • Identifying key drivers of a defined, straightforward problem and proposing solutions
  • Manages work plans for components on engagements
  • Hadoop (Cloudera distribution)
  • Apache Accumulo
  • SQL and NoSQL data stores
  • UNIX, Java
  • Sentry
  • Experience working with big data ecosystems including tools such as MapReduce, Yarn, Hive, Pig, Impala, Spark, Kafka, and Storm
  • Understands the tradeoffs between different approaches to Hadoop file design
  • Experience with techniques of performance optimization for both data loading and data retrieval
  • Experience with NoSQL databases such as HBase, Apache Cassandra, Vertica, or MongoDB
  • Ability to build and test MapReduce code in a rapid, iterative manner
  • Ability to articulate reasons behind the design choices being made
  • Bachelor of Science in Computer Science, Engineering, or MIS, or equivalent experience
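The "build and test MapReduce code in a rapid, iterative manner" bullet can be illustrated with a pure-Python map/shuffle/reduce harness. This is only a local sketch of the programming model; real jobs would run on YARN through the Java API or Hadoop Streaming:

```python
from collections import defaultdict

def map_phase(records, mapper):
    """Apply the mapper to every input record, yielding (key, value) pairs."""
    for rec in records:
        yield from mapper(rec)

def shuffle(pairs):
    """Group mapper output by key, as the framework does between phases."""
    groups = defaultdict(list)
    for k, v in pairs:
        groups[k].append(v)
    return groups

def reduce_phase(groups, reducer):
    return {k: reducer(k, vs) for k, vs in groups.items()}

# Classic word count expressed against the harness.
def wc_mapper(line):
    for word in line.split():
        yield word.lower(), 1

def wc_reducer(word, counts):
    return sum(counts)

lines = ["Hadoop is fast", "hadoop scales"]
result = reduce_phase(shuffle(map_phase(lines, wc_mapper)), wc_reducer)
print(result)  # {'hadoop': 2, 'is': 1, 'fast': 1, 'scales': 1}
```

Testing mapper and reducer functions in isolation like this, before deploying to a cluster, is what makes the iteration "rapid".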
17

Big Data Hadoop & Teradata Administrator Resume Examples & Samples

  • Performance tuning, statistics collection, health analysis, analyzing explain plans and table statistics
  • Capacity planning and space allocation for database and tables
  • Point of contact for vendor for coordinating access and schedule for cluster and/or node maintenance and upgrades
  • Familiarity with Viewpoint and its capabilities for monitoring Teradata systems, including administration of Data Labs
  • Knowledge of SLES 11 OS
  • Candidates with these skills will be given preferential consideration
  • Experience in Big Data Storage and File System Design
  • Experience in data ingestion technologies with reference to relational databases (i.e. DB2, Oracle, SQL)
18

Big Data / Hadoop Scientist Resume Examples & Samples

  • The Data Ops Analyst role resides within the client GDIA organization which was formed to support the client’s Blueprint for Mobility strategy and the client plan for profitable growth. In this role, you will support the data requirements of the Data Scientists who are using analytics to achieve business outcomes including insights to accelerate our Connected Vehicle / Autonomous Vehicle initiatives
  • You will be responsible for providing the data support for enterprise data integration tasks, including ingestion, standardization, enrichment, mastering and assembly of data products for downstream applications. In addition, you will
  • Drive the Data Strategy and align it with continuously changing business landscape
  • Continuously increase Data Coverage by working closely with the Data Scientists, understanding and evaluating their data requirements and delivering the data they need
  • Support the data requirements of the different functional teams like MS&S, PD, Quality, etc. and all the regional KPI / Metrics initiatives
  • Evaluate, explore and select the right data platform technologies including Big Data, RDBMS & NoSQL to meet the analytics requirements
  • Provide visibility to Data Quality issues and work with the business owners to fix the issues
  • Implement an Enterprise Data Governance model and actively promote the concept of data sharing, data reuse, data quality and data standards
  • Perform any necessary data mapping, data lineage activities and document information flows
  • Build the Metadata model & Business Glossary by gathering information from multiple sources: business users, existing data sources, databases and other relevant documents and systems
  • Serve as data subject matter expert and demonstrate an understanding of key data management principles and data use
  • Drive Data profiling and data exploration for basic understanding and profiling of the data
  • Ability to work in different database technologies including Big Data / Hadoop (HDFS, MapReduce, Hive, Shark, Spark, etc.), RDBMS and NoSQL
  • Experience in creation of scripts to manipulate files within HDFS is preferred
  • Knowledge of data transfer and ingestion technologies including Sqoop and Attunity
  • Demonstrated experience building visualizations using Tableau/Qlikview
  • Ability to write complex SQL queries needed to query & analyze data
  • Knowledge of data management standards, data governance practices and data quality
  • Ability to communicate complex solution concepts in simple terms
  • Ability to apply multiple solutions to business problems
  • Ability to quickly comprehend the functions and capabilities of new technologies
  • Strong Oral and written communication skills
  • Strong team player, with the ability to collaborate well with others, to solve problems and actively incorporate input from various sources
  • Demonstrated customer focus, with the ability to evaluate decisions through the eyes of the customer, build strong customer relationships, and create processes with customer viewpoint
  • Strong analytical and problem solving skills, with the ability to communicate in a clear and succinct manner and effectively evaluates information / data to make decisions
  • Strong interpersonal, and leadership skills, with proven abilities to communicate complex topics to leaders and peers in a simple, clear, plan oriented manner
  • Ability to anticipate obstacles and develop plans to resolve those obstacles
  • Change oriented, with the ability to actively generate process improvements, support and drive change, and confront difficult circumstances in creative ways
  • Resourceful and quick learner, with the ability to efficiently seek out, learn, and apply new areas of expertise, as needed
  • Highly self-motivated, with the ability to work independently
  • Superior organization, coaching and interpersonal skills, combined with effective leadership, decision-making, and communication
  • Strong oral and written communication skills (English)
  • Strategic and clear thinking to translate discreet and complex ideas to business-driven results
  • Demonstrated acceptance and adherence to high ethical, moral, and personal values
  • Six Sigma Green Belt Certified (client Internal Only)
  • Minimum of 5 years of experience in a Data Management role including running queries and compiling data for analytics
  • Minimum of 3 years of experience in data design, data architecture and data modeling (both transactional and analytic)
  • Minimum of 2 years of experience in Big Data, structuring and performing analysis on time-series data, NoSQL technologies including Greenplum & Hadoop (HDFS, MapReduce, Hive, Shark, Spark, etc.), especially command line experience with loading and manipulating files within HDFS
  • Bachelor’s Degree in Computer Science or related field from an accredited college or university
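The "creation of scripts to manipulate files within HDFS" bullet above might look like the sketch below, which builds `hdfs dfs` command lines for a simple ingestion plan. The paths are hypothetical, and the commands are only printed by default since executing them requires a Hadoop client and a live cluster:

```python
import subprocess

def hdfs_cmd(*args):
    """Build an `hdfs dfs` argument list; running it needs a real cluster."""
    return ["hdfs", "dfs", *args]

def ingest_plan(local_path, hdfs_dir):
    """Commands to stage a local file into HDFS: mkdir, put, then verify."""
    return [
        hdfs_cmd("-mkdir", "-p", hdfs_dir),
        hdfs_cmd("-put", "-f", local_path, hdfs_dir),
        hdfs_cmd("-ls", hdfs_dir),
    ]

def run_plan(plan, dry_run=True):
    for cmd in plan:
        if dry_run:
            print(" ".join(cmd))  # dry run: show what would be executed
        else:
            subprocess.run(cmd, check=True)  # requires Hadoop client on PATH

plan = ingest_plan("events.csv", "/data/raw/events")
run_plan(plan)  # prints the three hdfs dfs commands
```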
19

Big Data Hadoop Consultant Resume Examples & Samples

  • Design, implement and deploy custom applications on Hadoop
  • Implementation of complete Big Data solutions, including data acquisition, storage, transformation, and analysis
  • Design, implement and deploy ETL to load data into Hadoop
  • Minimum 1 year of building and deploying Java applications
  • Minimum 1 year of building and coding applications using at least two Hadoop components, such as MapReduce, HDFS, Hbase, Pig, Hive, Spark, Sqoop, Flume, etc
  • Minimum 1 year of coding, including one of the following: Python, Pig programming, Hadoop Streaming, HiveQL
  • Minimum 1 year understanding of traditional ETL tools & RDBMS
  • Minimum of a Bachelor’s Degree or 3 years IT/Programming experience
  • Full life cycle Development
  • Minimum 1 year of experience developing REST web services
  • Data Science and Analytics (machine learning, analytical models, MAHOUT, etc.)
  • Data Visualization
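The ETL bullets in this role ("data acquisition, storage, transformation") can be sketched as a minimal extract-transform step in pure Python. The field names, sample rows, and cleansing rules are hypothetical; in practice the load step would write the cleansed records into HDFS or a Hive table:

```python
import csv
import io

# Made-up raw extract with the usual problems: stray whitespace,
# a missing value, inconsistent casing.
RAW = """id,amount,country
1, 19.99 ,us
2,,de
3,7.50,US
"""

def extract(text):
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Cleanse: trim whitespace, drop rows with no amount, normalise country."""
    out = []
    for r in rows:
        amount = (r["amount"] or "").strip()
        if not amount:
            continue  # reject incomplete records; real ETL would quarantine them
        out.append({"id": int(r["id"]),
                    "amount": float(amount),
                    "country": r["country"].strip().upper()})
    return out

clean = transform(extract(RAW))
print(clean)
# [{'id': 1, 'amount': 19.99, 'country': 'US'}, {'id': 3, 'amount': 7.5, 'country': 'US'}]
```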
20

Big Data Hadoop Manager Resume Examples & Samples

  • Assess and define tactical and strategic opportunities to enable our clients to achieve new business capabilities through the use of Big Data and traditional tools and technologies
  • Deliver large-scale programs that integrate processes with technology to help clients achieve high performance
  • Design, implement and deploy Big Data solutions and applications that can perform at scale on one or more of the following components data streaming, ingestion, munging, machine learning, publication
  • Minimum 2 years of recent experience building Java apps
  • Minimum 2 years of building and coding applications using Hadoop components - HDFS, Hbase, Hive, Sqoop, Flume
  • Minimum 2 years of recent, hands on, Data Flow Programming, including - MapReduce coding using tools such as Java, Python, Pig, Hadoop Streaming, and HiveQL
  • Minimum 3 years implementing relational and dimensional data models
  • Minimum 3 years understanding of traditional and Big Data ETL/data movement tools & RDBMS such as Ab Initio, DataStage, Informatica, Trifacta, and Kafka
  • Minimum of a Bachelor’s Degree or 3 years of IT/Programming experience
  • 6+ months experience leading technical teams
  • Minimum 2 years’ experience leading and delivering an operational Big Data solution using one or more of the following technologies: Hadoop, HortonWorks, Cloudera, Cassandra, GreenPlum, Vertica, Aster
  • 3+ years of experience developing REST web services
  • Industry experience (financial services, resources, healthcare, government, products, communications, high tech)
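The REST web-service requirement above can be sketched with Python's standard library alone. The "job status" resource below is hypothetical (standing in for something like a Big Data job-tracking service), and a production service would use a proper framework rather than `http.server`:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical in-memory resource keyed by job id.
JOBS = {"etl-nightly": {"state": "SUCCEEDED", "records": 10432}}

class JobHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # e.g. GET /jobs/etl-nightly -> JSON status, or 404 if unknown.
        job_id = self.path.strip("/").split("/")[-1]
        if job_id in JOBS:
            body, status = JOBS[job_id], 200
        else:
            body, status = {"error": "not found"}, 404
        payload = json.dumps(body).encode()
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):  # silence request logging in this sketch
        pass

server = HTTPServer(("127.0.0.1", 0), JobHandler)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/jobs/etl-nightly"
with urllib.request.urlopen(url) as resp:
    result = json.loads(resp.read())
print(result)  # {'state': 'SUCCEEDED', 'records': 10432}
server.shutdown()
```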
21

Big Data Hadoop Senior Data Modeler Resume Examples & Samples

  • 3-5 years of Hadoop and NoSQL data modeling/canonical modeling experience with Hive, HBase or other
  • 2 years experience with In memory databases or caching tools and frameworks
  • Familiarity with Lambda Architecture and Serving/Consolidation Views, Persistence layers
  • Hands-on experience with open-source software platforms (Linux)
  • 2+ years of Oracle relational modelling development experience
  • Demonstrable experience in developing, validating, publishing, maintaining LOGICAL data models with exposure to or experience in developing, validating, publishing, maintaining PHYSICAL data models
  • Demonstrable experience using data modeling tools - e.g., ErWin
  • Evaluate existing data models and physical databases for variances and discrepancies
  • Experience with managing meta data for data models
22

Big Data Hadoop Project Manager Resume Examples & Samples

  • Bachelor’s degree in Business or Computer Science, or equivalent experience, required
  • A minimum of 10 years of managing information technology projects in a matrix environment. Experience with executing global projects is required
  • Skills and abilities: knowledge of the Big Data technology stack
  • Technical development background in a major development language or Database technology
  • Proficiency with Microsoft Project and Office applications required
  • Project management certification (e.g., PMP) preferred
  • Knowledge of TTS (Cash Management) products would be preferred
23

Big Data / Hadoop Architect Resume Examples & Samples

  • Perform architecture design, data modeling, and implementation of Big Data platform and analytic applications for Hitachi Consulting’s clients
  • Analyze latest Big Data Analytic technologies and their innovative applications in both business intelligence analysis and new service offerings; bring these insights and best practices to Hitachi Consulting’s Insights and Analytics practice
  • Stand up and expand data-as-a-service collaborations with partners in the US and other international markets
  • Apply deep learning capability to improve understanding of user behavior and data
  • Develop highly scalable and extensible Big Data platforms which enable collection, storage, modeling, and analysis of massive data sets
24

Big Data / Hadoop Architect Resume Examples & Samples

  • Over 8 years of engineering and/or software development experience
  • Hands-on experience in Big Data Components/Frameworks such as Hadoop, Spark, Storm, HBase, HDFS, Pig, Hive, etc
  • Experience in architecture and implementation of large and highly complex projects
  • Deep understanding of cloud computing infrastructure and platforms
  • History of working successfully with cross-functional engineering teams
  • Demonstrated ability to communicate highly technical concepts in business terms and articulate business value of adopting Big Data technologies
  • Proficiency in Java/C++, SQL/RDBMS, and data warehousing
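The frameworks named above (Hadoop, Spark, MapReduce-style processing) all build on the same map/shuffle/reduce pattern. A framework-free Python sketch of that pattern, using the classic word-count example (Hadoop's Java API follows the same three phases at cluster scale):

```python
# Minimal MapReduce sketch: map emits (key, 1) pairs, shuffle groups
# values by key, reduce aggregates each group.
from collections import defaultdict

def map_phase(lines):
    for line in lines:
        for word in line.split():
            yield word.lower(), 1

def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    return {key: sum(values) for key, values in groups.items()}

counts = reduce_phase(shuffle(map_phase(["Big Data", "big data hadoop"])))
print(counts)  # {'big': 2, 'data': 2, 'hadoop': 1}
```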
25

Senior Data Engineer Big Data-> Hadoop Resume Examples & Samples

  • Experience in SQL and NoSQL data modeling (object buckets, document DBs, column stores, KV stores, graph databases)
  • HA on replicated storage backends in the cloud (S3, HDFS, DynamoDB, Redshift, Cassandra, HBASE, RDS, etc)
  • Data infrastructure monitoring and scalability (CloudWatch, Grafana, Prometheus, DataDog, etc.)
  • Ability to solve any ongoing issues with operating the infrastructure
  • Experience in architecting and building connectivity data systems and integrations
  • Proven track record of delivering strategic solutions for end-users
  • Passion for agile development of very well tested code (unit and integration) in a containerised/cloud environment
  • Exposure to CI/CD development scenarios (Jenkins, Jira, nexus, git, canary deployment)
  • Flexible and dynamic approach to development
  • Proven ability to work in an interdisciplinary team
  • Experience of M2M data environments
  • Experience of AWS Data Analytics toolsets, including EMR, ElasticSearch, Kinesis and Redshift
26

Data Engineer Big Data-> Hadoop Resume Examples & Samples

  • Strong background in data and analytics systems, primarily the Apache Big-Data toolsets
  • Experience of integrating very large datasets with line of business data
  • Proven track record of developing robust requirements specifications
  • The ability to quickly adopt new concepts, languages, and techniques and convey the benefits to others
  • Good understanding and experience of agile application development practices
  • Strong planning & time management skills
  • Ability to communicate complex ideas simply
27

Big Data Hadoop & Teradata Architect Resume Examples & Samples

  • Working with data delivery, development teams, and Information Assurance to setup new Hadoop users and policies. This includes setting up new LDAP users, validating and troubleshooting Ranger policies, testing access for the users, and troubleshooting a secured Hadoop environment using Kerberos, Knox, and Ranger
  • Familiarity with Hadoop cluster maintenance using Ambari
  • Performance tuning of Hadoop
  • Monitor Hadoop cluster job performance, capacity, connectivity and security
  • Responsible for maintaining documentation of Hadoop system architecture and processes
  • Hive performance monitoring and tuning
  • Responsibility for 24x7 normal DBA functionality
  • Partition management and health
  • Monitoring of the data loading and data mart updating
  • Familiarity with BTEQ, Fastexport, Multiload, and Fastload
  • Responsible for maintaining documentation of TDW system architecture and processes
  • Must be a US Citizen
  • Bachelor's or Master's degree in a technical or business discipline or related experience required. BS in Computer Science, Math, or Engineering desired and 7 to 9 years of related work experience
  • 4+ years of proven experience in a range of big data architectures and frameworks including Hadoop ecosystem, Java MapReduce, Pig, Hive, Spark, Impala etc
  • 5 years of proven experience working with, processing and managing large data sets (multi TB scale)
  • Proven experience in ETL (Syncsort DMX-h, Ab Initio, IBM - InfoSphere Data Replication, etc.), mainframe skills, JCL
  • DOD Secret clearance and 8570 certification required
  • Experience with Hadoop and Teradata administration
  • Experience with Linux Administration
  • Experience in shell scripting and coding in Python
  • Experience in analytic programming, data discovery, querying databases/data warehouses and data analysis
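The "partition management and health" duty above typically means watching for skewed or oversized partitions. A hedged Python sketch of one way to flag skew, given partition sizes already collected (for example, parsed from `hdfs dfs -du` output); the threshold and partition names are illustrative, not from any specific Hadoop tooling:

```python
# Flag partitions whose size deviates sharply from the median, a common
# symptom of data skew that degrades Hive/MapReduce job performance.
import statistics

def skewed_partitions(sizes_bytes, factor=3.0):
    """Return names of partitions larger than factor * median size."""
    median = statistics.median(sizes_bytes.values())
    return sorted(name for name, size in sizes_bytes.items()
                  if size > factor * median)

sizes = {"dt=2017-01-01": 100, "dt=2017-01-02": 110, "dt=2017-01-03": 900}
print(skewed_partitions(sizes))  # ['dt=2017-01-03']
```

A report like this can feed the routine health checks and 24x7 DBA monitoring listed above.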
28

Principal Business Analysis Consultant With Big Data & Hadoop Resume Examples & Samples

  • Hands-on data sourcing: analyze the data for quality, harmonize the data, work with sources to finalize requirements, develop feeds, document business rule transformations, and work with the development team to implement. Perform detailed reconciliations between data sets
  • Lead implementations for enhancements/projects and clearly communicate issues, risks, and status updates against developed plans and dashboards
  • Assist to design data architecture, data flows and contribute to the overall application ecosystem
  • Iterate with the business and technology to define and develop coherent, visually appealing, graphics-based business intelligence dashboards
  • Provide production support on data issues and processes in a challenging and time crunch environment working closely with business, vendor, and data teams to resolve the issues
  • Author various documents on projects such as use cases, process flows, and other related project documents, and adhere to life cycle documentation: BRDs, FRDs, and test documentation
  • Experience in hands-on data mapping and data warehousing concepts and reporting platforms; working with Big Data and Hadoop a plus
  • Strong experience with SQL and working with large data sets; knowledge of Microsoft SQL Server, SSIS, SSRS, SSAS, and Tableau is a plus
  • Experience working in the financial and banking industries and knowledge of financial products on The Bank of New York Mellon Corporation Balance Sheet
  • Capable of resolving production issues in a stressful and time-critical environment
  • Previous development knowledge is a plus
  • Advanced knowledge and extensive experience working with SQL
  • Experience working with offshore development teams
  • Strong testing and quality assurance mindset. Must be able to craft detailed test plans and test cases to ensure final deliverables meet specifications
29

Big Data Hadoop Team Lead Resume Examples & Samples

  • 10+ years of total IT programming experience in Java and SQL, with 5 years of Hadoop MapReduce, Hive, Sqoop, and Oozie experience and Hadoop Java programming experience
  • 2+ Years of Data management or data modeling experience within Big Data (Hbase/Hive)
  • 5 years of Core Java or Python programming experience
  • 2+ years of Oracle and/or MYSQL database application development experience
  • Strong leadership, interpersonal, influence, negotiation, and written and verbal communication skills required
30

Big Data / Hadoop Specialist Resume Examples & Samples

  • Participate in all aspects of Big Data solution delivery life cycle including analysis, design, development, testing, production deployment, and support
  • Develop standardized practices for delivering new products and capabilities using Big Data technologies, including data acquisition, transformation, and analysis
  • Ensure Big Data practices integrate into overall data architectures and data management principles (e.g. data governance, data security, metadata, data quality)
  • Create formal written deliverables and other documentation, and ensure designs, code, and documentation are aligned with enterprise direction, principles, and standards
  • Train and mentor teams in the use of the fundamental components in the Hadoop stack
  • Assist in the development of comprehensive and strategic business cases used at management and executive levels for funding and scoping decisions on Big Data solutions
  • Troubleshoot production issues within the Hadoop environment
  • Good knowledge on Agile Methodology and the Scrum process
  • Delivery of high-quality work, on time and with little supervision
  • Bachelor's in Computer Science, Management Information Systems, or Computer Information Systems is required
  • Minimum of 4 years of building Java apps
  • Minimum of 2 years of building and coding applications using Hadoop components - HDFS, Hbase, Hive, Sqoop, Flume etc
  • Minimum of 2 years of coding in Java, Scala/Spark, Python, Pig programming, Hadoop Streaming, and HiveQL
  • Minimum 4 years of understanding of traditional ETL tools & data warehousing architecture
  • Strong personal leadership and collaborative skills, combined with comprehensive, practical experience and knowledge in end-to-end delivery of Big Data solutions
  • Experience in Exadata and other RDBMS is a plus
  • Must be proficient in SQL/HiveQL
  • Strong in-memory database and Apache Hadoop distribution knowledge (e.g. HDFS, MapReduce, Hive, Pig, Flume, Oozie, Spark)
  • Experience and proficiency in coding skills relevant for Big Data (e.g. Java, Scala, Python, Perl, SQL, Pig, Hive-QL)
  • Proficiency with SQL, NoSQL, relational database design and methods
  • Deep understanding of techniques used in creating and serving schemas at the time of consumption
  • Identify requirements to apply design patterns like self-documenting data vs. schema-on-read
  • Played a leading role in the delivery of multiple end-to-end projects using Hadoop as the data platform
  • Successful track record in solution development and growing technology partnerships
  • Ability to clearly communicate complex technical ideas, regardless of the technical capacity of the audience
  • Experience working with multiple clients and projects at a time
  • Knowledge of predictive analytics techniques (e.g. predictive modeling, statistical programming, machine learning, data mining, data visualization)
  • Familiarity with different development methodologies (e.g. waterfall, agile, XP, scrum)
  • Demonstrated capability with business development in big data infrastructure business
  • Preferred Skills
  • Bilingual (French/English) with the capacity to adapt communication to most situations and audiences. Proficient in planning communications and facilitating a meeting or workshop
  • Able to plan and execute complex tasks without supervision, identify potential roadblocks, and mobilise resources to remove them and achieve goals
  • Gets easily acquainted with new technologies, e.g. a new programming language within 2-3 days
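The specialist section above distinguishes schema-on-read (serving schemas "at the time of consumption") from schema-on-write. A small, purely illustrative Python sketch of schema-on-read: raw records land untyped, and a schema of field names and casts is applied only when the data is consumed. Field names here are hypothetical:

```python
# Schema-on-read sketch: store raw JSON lines as-is, apply typing at
# query time rather than at load time.
import json

raw_lines = ['{"id": "1", "amount": "9.50"}', '{"id": "2", "amount": "3"}']

schema = {"id": int, "amount": float}  # applied at read time

def read_with_schema(lines, schema):
    for line in lines:
        record = json.loads(line)
        yield {field: cast(record[field]) for field, cast in schema.items()}

print(list(read_with_schema(raw_lines, schema)))
# [{'id': 1, 'amount': 9.5}, {'id': 2, 'amount': 3.0}]
```

This is the same trade-off Hive makes with external tables: the data files are immutable, and different consumers can project different schemas over them.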
31

Big Data Hadoop AWS Architect Resume Examples & Samples

  • Hands on experience with Hadoop applications including administration, configuration management and production troubleshooting and tuning
  • Hands-on experience working with AWS Cloud technology including S3, EMR etc
  • Knowledge and understanding of Java, Python, and Linux is useful
  • Knowledge of traditional data analytics warehouses like Teradata
  • Experience in benchmarking systems, analyzing system bottlenecks, and proposing solutions to eliminate them
  • Be able to clearly articulate pros and cons of various technologies and platforms
  • Hands-on Experience in technologies like: Spark, Hive, Pig, Kafka, R, Storm is useful
  • Be able to document use cases, solutions and recommendations
  • Liaison with Project Managers and other solution architects in planning and governance activities related to the project
  • Bachelor's in computer science, data science, business intelligence, or related technical field. 5+ years of experience in the Data Analytics field
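The benchmarking bullet above can be sketched in a few lines: time a workload over several runs and keep the best result, so run-to-run noise doesn't hide a real bottleneck. The workload below is a stand-in for a real query or job; names are illustrative:

```python
# Minimal benchmarking harness: best-of-N wall-clock timing.
import time

def benchmark(workload, runs=5):
    """Return the fastest wall-clock time in seconds over several runs."""
    best = float("inf")
    for _ in range(runs):
        start = time.perf_counter()
        workload()
        best = min(best, time.perf_counter() - start)
    return best

elapsed = benchmark(lambda: sum(range(100_000)))
print(f"best of 5 runs: {elapsed:.6f}s")
```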
32

Q-dna-technology Lead-big Data / Hadoop Resume Examples & Samples

  • At least 3 years of hands-on design and development experience on Big data related technologies – Pig, Hive, MapReduce, HDFS, HBase, YARN, Spark, Oozie, Java, and shell scripting
  • Should be a strong communicator and be able to work independently with minimum involvement from client SMEs
  • Should be able to work in team in diverse/ multiple stakeholder environment
  • Must have strong programming knowledge of Core Java or Scala - Objects & Classes, Data Types, Arrays and String Operations, Operators, Control Flow Statements, Inheritance and Interfaces, Exception Handling, Serialization, Collections, Reading and Writing Files
  • Must have experience working on Big Data Processing Frameworks and Tools – MapReduce, YARN, Hive, Pig
  • Should have worked on large data sets and experience with performance tuning and troubleshooting
33

Technology Lead-big Data / Hadoop Resume Examples & Samples

  • At least 5 years of design and development experience in Big data, Java, or data warehousing related technologies
  • At least 3 years of hands-on design and development experience on Big data related technologies – Pig, Hive, MapReduce, HDFS, HBase, YARN, Spark, Oozie, Java, and shell scripting
  • Background in all aspects of software engineering with strong skills in parallel data processing, data flows, REST APIs, JSON, XML, and micro service architecture
34

Big Data / Hadoop Consultant Resume Examples & Samples

  • Minimum 1 year of building Java apps
  • Minimum 1 year of building and coding applications using Hadoop components, HDFS, HBase, Hive, Sqoop, Flume etc
  • Minimum 1 year of coding with Python, Pig programming, Hadoop Streaming, and HiveQL
  • Minimum 1 year of experience with designing and implementing relational data models
  • 2 plus years of overall experience with Hadoop and other Big Data technologies
  • 2 plus years of hands-on experience designing, implementing and operationalizing production data solutions using emerging technologies such as Hadoop Ecosystem (MapReduce, Hive, HBase, Spark, Sqoop, Flume, Pig, Kafka etc.), NoSQL (e.g. Cassandra, MongoDB)
  • 2 plus years of experience with Big Data Infrastructure Architecture
  • 2 plus years of experience working with Spark SQL
35

Senior Big Data / Hadoop Architect Resume Examples & Samples

  • 3+ years of experience with Big Data architecture
  • 3+ years of experience with Hadoop
  • 3+ years of experience with AWS
36

Dna-big Data / Hadoop Resume Examples & Samples

  • Minimum 7 years of experience in IT and 4 years of experience in Java technologies – J2EE framework and web technologies (JavaScript, HTML, XML, Web 2.0, J2EE development toolset)
  • Minimum 3 years of experience working with Master Data Management tools such as Informatica, IBM, etc.
  • Minimum 1 year of experience in technical development, configuration, and custom development of Stibo STEP modules
  • Should have good understanding of Product and Customer MDM
  • Working knowledge of Oracle, SQL server, UNIX
  • Strong knowledge on PIM architecture, design, and development skills
  • At least 5 years of experience in software development life cycle
  • At least 5 years of experience in Project life cycle activities on development and maintenance projects
  • STIBO certification nice to have
37

Q-dna-tl-big Data-hadoop / Java Engineer Resume Examples & Samples

  • Must have hands on experience in design, implementation, and build of applications or solutions using Core Java/Scala
  • Strong understanding of Hadoop fundamentals
  • Strong understanding of RDBMS concepts and must have good knowledge of writing SQL and interacting with RDBMS and NoSQL database - HBase programmatically
  • Strong understanding of file formats – Parquet and other Hadoop file formats
  • Proficient with application build and continuous integration tools – Maven, SBT, Jenkins, SVN, Git
  • Experience working in Agile and with the Rally tool is a plus
  • Strong understanding and hands-on programming/scripting experience skills – UNIX shell, Python, Perl, and JavaScript
  • Knowledge of Java Beans, Annotations, Logging (log4j), and Generics is a plus
  • Knowledge of Design Patterns - Java and/or GOF is a plus
  • Knowledge of Spark, Spark Streaming, Spark SQL, and Kafka is a plus
  • Experience in the financial domain is preferred
38

Big Data / Hadoop Specialist Resume Examples & Samples

  • Full lifecycle support for the software listed below
  • Install and configure software for new engagement team environments
  • Patch and upgrade software
  • Project demand and maintain standards
  • Vulnerability remediation (less than 2 days)
  • Provide L3 and L4 incident response for defined software
  • Troubleshoot issues, including up/down, performance, and misconfiguration
  • Interface with vendors to escalate issues
  • Execute Service Requests for defined software
  • Resolve Incidents for defined software
  • Triage
  • Troubleshooting
  • Coordination of multiple technology owners
  • Microsoft Windows Server
  • Linux RedHat/CentOS Server
  • Hadoop HortonWorks Technology Stack
  • Ambari, Atlas, HDFS, HiveServer2, TEZ, WebHDFS, Knox, Pig, MapReduce, Ranger, YARN, ZooKeeper, Spark, HBase, Kafka, Storm
  • Microsoft R Server
  • Python
  • CRAN
  • Packages
  • Ansible
  • Anaconda
  • Mongo
  • Neo4J
  • Jupyter