Big Data Engineer Resume Samples
The Guide To Resume Tailoring
Guide the recruiter to the conclusion that you are the best candidate for the big data engineer job. It’s actually very simple. Tailor your resume by picking relevant responsibilities from the examples below and then add your accomplishments. This way, you can position yourself in the best way to get hired.
Craft your perfect resume by picking job responsibilities written by professional recruiters
Pick from the thousands of curated job responsibilities used by the leading companies
Tailor your resume & cover letter with wording that best fits each job you apply for
Resume Builder
Create a Resume in Minutes with Professional Resume Templates
- Choose the best template – choose from 15 leading templates; no need to think about design details
- Use pre-written bullet points – select from thousands of pre-written bullet points
- Save your documents as PDF files – instantly download in PDF format or share a custom link
Viva Bins
73360 Reinger Meadow, Houston, TX
Phone: +1 (555) 223 4247
Experience

Junior Big Data Engineer
Bradtke Group
Boston, MA
- Executes moderately complex functional work tracks for the team
- Work in an agile environment and continuously improve the agile processes
- Maintain existing ETL workflows, data management and data query components
- Develop automation and data collection frameworks
- Develops innovative solutions to Big Data issues and challenges within the team
- Known for being a smart, analytical thinker who approaches their work with logic and enthusiasm
- Drive the optimization, testing and tooling to improve data quality
Big Data Engineer
Nitzsche, O'Kon and Sauer
Phoenix, AZ
- Work in a fast-paced agile development environment to quickly analyze, develop, and test potential use cases for the business
- Develops and builds frameworks/prototypes that integrate Big Data and advanced analytics to make business decisions
- Assist application development teams during application design and development for highly complex and critical data projects
- Work closely with development, test, documentation and product management teams to deliver high quality products and services in a fast paced environment
- Algorithm development on high-performance systems
- Create data management policies, procedures, and standards
- Working with the end-user to make sure the analytics transform data to knowledge in very focused and meaningful ways
Lead Big Data Engineer (present)
Tillman LLC
San Francisco, CA
- Design and build data processing pipelines using tools and frameworks in the Hadoop ecosystem
- Design and build ETL pipelines to automate ingestion of structured and unstructured data
- Design and Build pipelines to facilitate data analysis
- Implement and configure big data technologies as well as tune processes for performance at scale
- Manage, mentor, and grow a team of big data engineers
- Proficiency in a programming language, ideally Python, Java, or Scala
- Proficiency and knowledge of best practices with the Hadoop ecosystem (YARN, HDFS, MapReduce)
Education

Bachelor’s Degree in Computer Science
Harvard University
Skills
- Strong technical skills in Python and good working knowledge of Java
- Expert knowledge of Java and Spring; knowledge of CQL and XQuery a strong plus
- Strong attention to detail
- Good business management knowledge, including business / organisational and operational design principles, customer and stakeholder management
- Build distributed, scalable, and reliable data pipelines that ingest and process data at scale and in real-time
- Ability to quickly learn new technology, business domains and processes
- High attention to detail and quality of work
- Proven ability to deliver high profile activities to tight timescales
- Strong technical capability in the area of Java, open source and big data
- Strong presentation and good communication abilities at senior level
15 Big Data Engineer resume templates
Read our complete resume writing guides
1
Principal Big Data Engineer Resume Examples & Samples
- Responsible for the building, deployment, and maintenance of mission critical analytic solutions that process data quickly at big data scales
- Contributes design, code, configurations, and documentation for components that manage data ingestion, real time streaming, batch processing, data extraction, transformation, and loading across a broad portion of the existing Hadoop and MPP ecosystems
- Evaluates new and upcoming big data solutions and makes recommendations for adoption to assist with building our next generation platform
- Ensures that solutions meet customer requirements in terms of functionality, performance, availability, scalability, and reliability
- Mentors junior members, provides code reviews, feedback, and enables professional growth
2
Big Data Engineer Summer Internship Resume Examples & Samples
- Support the UX Hardware/Packaging/Retail team, focusing on strategy and conception to bring the next generation of Comcast products to life by executing on the future vision
- Experience in this group may lead to an entry level UX Designer Role
- Currently pursuing a Bachelor's or Master's degree at a United States-based college or university
- Major in Computer Science, Information Systems, Software or Computer Engineering, Digital Media, Technology or a related field
- Demonstrates solid decision-making skills
- Proven analytical, organizational, and problem-solving ability
- Ability to work independently and in group settings
- Strong interest in the technology, telecommunications, cable and media industries
- Scripting (Linux/Unix Shell)
- GUI design, human computer interaction, information architecture
- Metadata analysis, parallel computing, distributed programming, video analytics
3
AM, Big Data Engineer Resume Examples & Samples
- 5+ years work experience in the use of advanced data analysis/machine learning methods in infrastructure/systems
- Knowledge of NoSQL data stores and Cassandra
- Experience working with Cassandra database in a clustered production environment
- Strong understanding of Design Patterns
- Strong Java, scripting, and Cassandra background
- Hands-on data mining / analytics experience with large datasets
- Experience in Infrastructure or Technology Operations
- Must be able to address multiple priorities in an extremely fast-paced environment
- Work long hours when required
- Experience with mapping business needs to engineering systems
- Substantial experience with the use of relational databases using SQL for data extraction, management, and queries (MSSQL, Oracle, Sybase)
- Minimum 5 years’ experience and a successful track record of technical leadership across a wide range of technologies including but not limited to
4
Big Data Engineer Resume Examples & Samples
- Desire to learn principles of big data and the HPCC programming language ECL
- Defines site objectives by analysing user requirements; envisioning system features and functionality
- Designs and develops web user interfaces to internet applications by setting expectations and features priorities throughout development life cycle; determining design methodologies and tool sets; completing programming using languages and software products; designing and conducting tests
- Recommends system solutions by comparing advantages and disadvantages of custom development and purchased alternatives
- Completes application development by coordinating requirements, schedules, and activities; contributing to team meetings; troubleshooting development and production problems across multiple environments and operating platforms
- Supports users by developing documentation and assistance tools
- Updates job knowledge by researching new internet technologies and software products
- Ensure delivery of projects by liaising with other departments and ensuring necessary actions are undertaken
- C++ or Java experience
- SQL knowledge
- Concepts of working with big data in hugely scalable systems
- Test driven development
- Knowledge of Continuous Integration and Version Control
5
Big Data Engineer Resume Examples & Samples
- BS, MS, or PhD in Computer Science or related technical discipline (or equivalent)
- A solid foundation in computer science, with strong competencies in computer system internals, data structures, algorithms, and software design
- Strong problem-solving skills and a quick learner of emerging technologies
- System software design and development experience with extensive knowledge of Unix/Linux is a plus
- Experience with large-scale web, cloud, and/or Big Data frameworks or applications (such as Hadoop or Spark) is a big plus
- Active open source community contributor is a plus
- Fluency in English (reading and writing)
6
Log Management / Big Data Engineer Resume Examples & Samples
- Application development, configuration and 3rd level support of our products
- Contribution on conceptual / architectural design and review for Log Management solution
- Collaboration with product management (based in Switzerland) to elaborate roadmaps
- Close and successful collaboration with third party vendor
7
Big Data Engineer Resume Examples & Samples
- Identify system of record and define sourcing approach
- Design technical architecture and data load design
- Work with Information owners and Systems owners to gain consensus and approval of designs
- Gather metadata of systems and identify data quality issues
- Transform data, develop derived attributes, merge data, and ensure consistency of data sets (see the sketch after this list)
- Model data for predictive modeling and analytics use
- Design integration points with consuming applications
- Deploy and integrate predictive models with consuming applications
- 7+ years hands-on with Java, Perl/Python and Unix shell scripting
- 7+ years in Informatica ETL or similar technology
- 3+ years in data load design and architecture
- 3+ years hands-on experience with Greenplum, Big Insights, Hadoop or other MPP environment, Greenplum experience a plus
- Previous experience in complete lifecycle of high-scale high-volume data loads
- Experience in the process of building predictive models using structured and unstructured data
- Team player exhibiting professional maturity, personal integrity, and excellent interpersonal skills
- Undergraduate or advanced degree in Computer Science, Operations Research or other engineering discipline or relevant work experience
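
For illustration, here is a minimal PySpark sketch of the transform, derived-attribute, and merge responsibilities listed above; the file paths, column names, and consistency rule are hypothetical placeholders rather than anything specified in the posting.

```python
# Illustrative PySpark ETL: ingest two sources, derive an attribute, merge, and write.
# All paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("etl_example").getOrCreate()

# Ingest raw extracts (CSV here; could equally be JDBC or HDFS files)
customers = spark.read.csv("/data/raw/customers.csv", header=True, inferSchema=True)
orders = spark.read.csv("/data/raw/orders.csv", header=True, inferSchema=True)

# Derive an attribute: total spend per customer
spend = orders.groupBy("customer_id").agg(F.sum("amount").alias("total_spend"))

# Merge the data sets and enforce a simple consistency rule (no missing keys)
merged = (customers.join(spend, on="customer_id", how="left")
                   .na.fill({"total_spend": 0.0})
                   .dropna(subset=["customer_id"]))

# Persist the curated result as Parquet for downstream modeling and analytics
merged.write.mode("overwrite").parquet("/data/curated/customer_spend")
```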
8
Big Data Engineer Resume Examples & Samples
- Contribute to the architecting and engineering of a new program called Archiving as a Service on Citi’s Big Data Hadoop Platform
- Integrate Citi supported Hadoop based solutions with the platform for data ingestion, data management, data access, and analytics
- Engineer commercial archive product and integrate the archive product with Citi’s Big Data Hadoop Platform
- Actively engage with engineering, businesses, and operational teams to provide engineering support and architecture design for application POC and use case on-boarding
- Develop user interfaces, templates or utilities to enable self service and automate application on-boarding
- Replatform data from traditional RDBMS sources to Hadoop using ETL and native Hadoop tools such as Sqoop (see the sketch after this list)
- Provide design recommendations to project teams when analysing an RDBMS data model for migration to Hadoop, which may require flattening
- Provide L3 engineering support for archive projects
- Work with cross domain teams to develop Citi standards
- 10+ years’ industry experience in an information technology role
- 2+ years’ experience with Hadoop HDFS, MapReduce, Sqoop, Hive, HBase, Flume, Impala, Solr and Talend
- 2+ years’ experience with Cloudera Hadoop distribution (CDH) and Cloudera Manager
- 2+ years’ experience with data modelling and data management techniques
- 5+ years’ experience with application development on Red Hat Linux, UNIX Shell Scripting, Java, RDBMS, NoSQL, and ETL solutions
- 5+ years’ creating technical documents of high quality. Ability to work in a structured environment and follow procedures, processes and policies
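
As a hedged illustration of the replatforming bullet above: the posting names Sqoop, but the sketch below uses Spark's JDBC reader instead so it stays self-contained in one language. The connection URL, table name, credentials, and target Hive table are placeholders.

```python
# Sketch of replatforming one RDBMS table onto Hadoop, using Spark's JDBC reader
# as a stand-in for Sqoop. URL, table, credentials, and Hive table are placeholders.
from pyspark.sql import SparkSession

spark = (SparkSession.builder.appName("rdbms_to_hadoop")
         .enableHiveSupport().getOrCreate())

source = (spark.read.format("jdbc")
          .option("url", "jdbc:oracle:thin:@//dbhost:1521/ARCHIVE")  # placeholder
          .option("dbtable", "ACCOUNTS")                             # placeholder
          .option("user", "etl_user")
          .option("password", "***")
          .option("fetchsize", "10000")
          .load())

# Rename/flatten as needed for the Hadoop-side model, then persist to Hive
(source.withColumnRenamed("ACCT_ID", "account_id")
       .write.mode("overwrite")
       .saveAsTable("archive.accounts"))
```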
9
Big Data Engineer Resume Examples & Samples
- Designing systems and fault tolerance algorithms for real-time redistribution of sensor data
- Developing low latency algorithms for processing data analytics
- Development, analysis, and optimization of compute and data intensive applications and workflow
- M.S. or Ph.D. in a relevant technical field
- 4+ years of related experience in HPC software design / architecture
- Ability to work both independently and as part of a distributed team
- A strong passion for empirical research
- Professional and effective verbal and written communications skills
- Experience with distributed, fault-tolerant systems
- Knowledge designing algorithms to support broadcast of rapidly changing, latency-sensitive data
- Experience with designing systems and algorithms for efficient use of bandwidth, CPU, and memory resources
- High-performance concurrency control in distributed and shared memory environments
- Ability to develop and/or debug software tools using systems-building languages like C++, Java, Scala
- Data mining and analysis languages and tools such as R, PyData, and SQL
- Familiarity with batch processing / real-time systems such as Flume, Kafka, and Storm (see the sketch below)
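
A minimal sketch of moving latency-sensitive sensor readings through Kafka, assuming the third-party kafka-python package; the broker address, topic name, and message fields are hypothetical.

```python
# Produce and consume JSON sensor events via Kafka (kafka-python assumed installed).
import json
from kafka import KafkaProducer, KafkaConsumer

BROKERS = ["localhost:9092"]          # placeholder broker list
TOPIC = "sensor-readings"             # placeholder topic

producer = KafkaProducer(
    bootstrap_servers=BROKERS,
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send(TOPIC, {"sensor_id": "s-42", "temp_c": 21.7})
producer.flush()

consumer = KafkaConsumer(
    TOPIC,
    bootstrap_servers=BROKERS,
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",
    consumer_timeout_ms=10000,        # stop iterating if no new messages arrive
)
for record in consumer:
    print(record.value)               # downstream low-latency analytics would go here
    break
```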
10
Big Data Engineer Internship Resume Examples & Samples
- Experience with big data and related data analytics and experience with R, SPSS, Python, MATLAB or similar statistics tools
- Ability to present and engage with various business partners and stakeholders
- Experience working with analytics and click-stream data
- Sports Fan
11
Big Data Engineer Internship Espn Spring Resume Examples & Samples
- Manage, own and deliver multiple advanced BI work streams, both on an ongoing and ad-hoc basis
- Use knowledge of predictive analytics, statistics and modeling techniques to develop and improve sophistication of Business Intelligence solutions
- Work with Engineering partners to help shape and drive the development of ESPN's BI infrastructure including Data Warehousing, reporting and analytics platforms
- NoSQL database experience (e.g., Cassandra, HBase, Redis)
- Experience with commercial and emerging reporting tools and technologies (e.g. Tableau)
- Experience with Java, Amazon Web Services (e.g., S3, Redshift)
12
Principal Big Data Engineer Resume Examples & Samples
- Work closely with product owners and engineers to design, implement, test and continually improve low latency and highly scalable web services running on open source stacks (Linux, Java)
- Develop products using agile methods and tools
- Develop commercial grade software that is user friendly and suitable for global audience
- Extensively use test automation or Test Driven Development during development process
- Continually create and update domain expertise documentation for internal and external customers
- Support production issues both directly and indirectly with customers
- Investigate, evaluate, and present new technologies for use with web services
- Drive design reviews, code reviews of your work and the work of your peer engineers
- Drive architecture and design efforts across multiple teams, across multiple HERE divisions
- Be a technical lead for key complex systems or services, working closely with other engineers and testers to deliver high quality software on time
- Mentor and assist other engineers in your areas of ownership and expertise
- B.S. in Computer Science or equivalent
- 9+ years software development experience building low latency and highly scalable commercial web services
- 7+ years programming with Java EE stack and related open source technologies (Spring, Hibernate, JAX-RS, JDBC, PostgreSQL, MySQL, Oracle, etc.)
- 7+ years experience with Linux shell, Maven, version control and continuous integration
- 5+ years developing software with test automation or Test Driven Development
- Exceptional OO design and programming principles
- Strong understanding of data modeling techniques using relational and non-relational techniques
- Exceptional ability to troubleshoot issues in a production environment
- Highly entrepreneurial, flexible and hard working – willing to go the extra mile or two to “get things done with high quality”
- In depth experience developing highly scalable production systems in AWS
- In depth experience with noSQL systems (Cassandra, HBase, MongoDB, Redis, DynamoDB, SimpleDB, etc)
13
Big Data Engineer Resume Examples & Samples
- Design and develop new source system integrations from a variety of formats including files, database extracts and APIs
- Design and develop highly scalable Data Pipelines that incorporate complex transformations and efficient code. Data will need to flow to and from unstructured and relational systems for analytic processing
- Design and develop solutions for delivering high-quality data that meets SLAs to various WB divisions for marketing and reporting, as well as to external vendors
- Investigate problems and resolve as required, including working with various internal teams and vendors. Proactively monitor the data flows with a focus on continued performance improvements
14
Big Data Engineer Resume Examples & Samples
- Fluent in shell scripting
- Fluent in scripting languages like Python, Ruby, Perl
- Experience with Apache Hadoop and Pig
- Experience with Vertica or other column-oriented database
- Experience in Amazon EC2
15
Big Data Engineer Resume Examples & Samples
- Kafka, Flume, Spark
- Pivotal Big Data Suite, specifically Hawq and Greenplum
- ETL tools like Pentaho, Informatica BDE, Talend
- NoSQL and other databases like MongoDB, Cassandra, Neo4J
16
Lead Big Data Engineer Resume Examples & Samples
- Design and build data processing pipelines using tools and frameworks in the Hadoop ecosystem
- Design and build ETL pipelines to automate ingestion of structured and unstructured data
- Design and Build pipelines to facilitate data analysis
- Manage, mentor, and grow a team of big data engineers
- 5+ years of relevant work experience
- 3+ years of experience working with big data technologies
- Proficiency in a programming language, ideally Python, Java, or Scala
- Proficiency and knowledge of best practices with the Hadoop ecosystem (YARN, HDFS, MapReduce)
- Experience and knowledge of best practices with big data interactive query technologies like Spark, Impala, or Hive
- Leadership experience with small and/or mid-size software development teams
- Experience sprint planning
- Experience leading code reviews
- Proficiency and knowledge of best practices in Spark
- Experience with a workflow management framework (Luigi, Oozie, Azkaban, etc.); a minimal Luigi example follows this list
- Experience leading in an agile environment
- Experience with version control (Git preferred)
- Experience with Jenkins
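
A minimal workflow sketch using Luigi, one of the frameworks named above; the task breakdown, parameter, and file paths are hypothetical.

```python
# Two-step Luigi pipeline: extract a raw file, then load a curated copy.
# Paths and the trivial "transformation" are placeholders for real logic.
import datetime
import luigi

class ExtractRawData(luigi.Task):
    run_date = luigi.DateParameter()

    def output(self):
        return luigi.LocalTarget(f"/data/raw/events_{self.run_date}.csv")

    def run(self):
        with self.output().open("w") as f:
            f.write("event_id,amount\n1,10.0\n")   # stand-in for a real extract

class LoadCurated(luigi.Task):
    run_date = luigi.DateParameter()

    def requires(self):
        return ExtractRawData(self.run_date)

    def output(self):
        return luigi.LocalTarget(f"/data/curated/events_{self.run_date}.csv")

    def run(self):
        with self.input().open() as src, self.output().open("w") as dst:
            dst.write(src.read())                  # real transformation would go here

if __name__ == "__main__":
    luigi.build([LoadCurated(run_date=datetime.date.today())],
                local_scheduler=True)
```

The same dependency structure maps naturally onto Oozie or Azkaban if those schedulers are already in use.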
17
Big Data Engineer Resume Examples & Samples
- Transition traditional ETL/Data Warehousing solutions to solutions that leverage Big Data technologies, building the next-generation Big Data analytics framework leveraging transformational technologies
- Works on multiple projects as a technical team member driving user story analysis, design and development of software applications, testing and implementation
- Leads in creating, refining, managing and enforcing data management policies, procedures, conventions and standards
- Apply best practices for software development and documentation, assure designs meet requirements, and deliver high-quality work on tight schedules
- Implement complex analytical projects with a focus on collecting, managing, analyzing, and visualizing data
- DevOps experience
- Experience with ETL tools and processes
18
Big Data Engineer Resume Examples & Samples
- Build Capabilities – Develop a sense of the business problems that we are trying to solve for and build capabilities that scale and drive real impact
- Grow with us – Help us stay ahead of the curve by working closely with data management team, data engineers, data architects, our DevOps team, and analysts to design systems which can scale overnight in ways which make other groups jealous
- A minimum of 6 years of progressively complex related experience
- First-hand experience – Have 3+ years of professional experience focusing on building data movement and integration pipelines (especially ETL/ELT) in a large-scale data environment
- Query ninja – Know ANSI SQL like the back of your hand; you are also very handy with No-SQL systems such as HBase, MongoDB, or Cassandra
- Passion and creativity – Are passionate about data, technology, & creative innovation
- Team player – Enjoy working collaboratively with a talented group of people to tackle challenging business problems so we all succeed (or fail fast) as a team
- Experience with Chef
- Proficiency with agile development methodologies
- Proficiency with Linux/Unix-based systems
- Bachelor’s degree or higher in Computer Science, Computer Engineering, or its equivalent
19
Big Data Engineer Resume Examples & Samples
- Three years of collective experience in data engineering, data analysis, data warehousing or data transformation, in a similarly sized organization
- Understanding of the Hadoop ecosystem (e.g. HDFS, MapReduce, HBase, Pig, Sqoop, Spark, Hive)
- The ability to work within a team environment
- Cloudera Administrator certification
20
Lead Big Data Engineer Resume Examples & Samples
- Select and integrate big data tools/frameworks required to provide requested capabilities
- Drive selection and deployment of key enabling technologies
- May supervise a team of data engineers
- Eight years of collective experience in data engineering, data analysis, data warehousing or data transformation, in a similarly sized organization
- Three years of ETL experience with Hive/Impala and Hue, and advanced SQL programming
- Leadership skills and the ability to work within a team environment
- Problem-solving and communication skills
21
Big Data Engineer Resume Examples & Samples
- Architect and develop applications that scale to a billion events per day (see the streaming sketch after this list)
- Solid software engineer with excellent analytical and troubleshooting skills
- 1+ years of experience building production large scale big data applications
- Proficiency in a programming language, ideally Scala, Python, or Java
- Experience with big data interactive query technologies like Spark, Hive, or Impala
- Experience with Spark and/or Kafka
- Experience with Scala
- Experience tuning Hadoop jobs for better performance
- Experience working in an Agile Environment
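
A sketch of the kind of high-volume streaming job described above, using Spark Structured Streaming over Kafka; the brokers, topic, event schema, and console sink are placeholders, and the spark-sql-kafka connector is assumed to be on the classpath.

```python
# Read JSON events from Kafka and roll up counts per minute with Spark Structured Streaming.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("event_stream").getOrCreate()

schema = StructType([
    StructField("event_type", StringType()),
    StructField("value", DoubleType()),
])

events = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker1:9092")   # placeholder brokers
          .option("subscribe", "events")                        # placeholder topic
          .load()
          .select(F.from_json(F.col("value").cast("string"), schema).alias("e"),
                  F.col("timestamp"))
          .select("e.*", "timestamp"))

# Windowed aggregation with a watermark to bound state for late-arriving events
counts = (events.withWatermark("timestamp", "10 minutes")
                .groupBy(F.window("timestamp", "1 minute"), "event_type")
                .count())

query = (counts.writeStream.outputMode("update")
               .format("console")     # illustrative sink; production would use Kafka, HDFS, etc.
               .start())
query.awaitTermination()
```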
22
Big Data Engineer Resume Examples & Samples
- Apply semantic correlation, ontology structured data, and text analytics techniques and systems to analyze non-structured data and identify critical insights
- Apply Big Data technologies, such as Hadoop or Cassandra, with NoSQL data management and related programming languages, such as Jaql, HBase, Pig, or Hive
- Participate in all aspects of the software life cycle, including analysis, design, development, unit testing, production deployment and support
- Formulate approaches and gather data to solve business problems, develop conclusions and present solutions through formal deliverables
- Create Big Data accelerators to help deploy scalable solutions fast
- At least 3 years of experience in distributed systems design and development using Java/C++/Python/Perl/UNIX scripting
- At least 2 years of experience in Business Intelligence (BI) full lifecycle engagements
- At least 2 years of experience in data modeling in non-relational databases, such as Cassandra, HBase, MongoDB, etc
- At least 2 years of experience in technical consulting
- At least 1 year of experience in Big Data architectural concepts
- At least 1 year of experience in one or more of the following technologies: Hadoop, Spark, IBM BigInsights, Streams, etc
- At least 1 year of experience in demonstrating excellent written and oral communication skills
- At least 5 years of experience in distributed systems design and development using Java/C++/Python/Perl/UNIX scripting
- At least 5 years of experience in Business Intelligence (BI) full lifecycle engagements
- At least 4 years of experience in data modeling in non-relational databases, such as Cassandra, HBase, MongoDB, etc
- At least 3 years of experience in technical consulting
- At least 3 years of experience in Big Data architectural concepts
- At least 3 years of experience in one or more of the following technologies: Hadoop, Spark, IBM BigInsights, Streams, etc
- At least 3 years of experience in demonstrating excellent written and oral communication skills
- At least 1 year experience in Unstructured Information Management Architecture (UIMA)
23
Big Data Engineer Resume Examples & Samples
- Performing a key management and thought leadership role in the areas of advanced data techniques, including data modeling, data access, data integration, data visualization, text mining, data discovery, statistical methods, big data design and implementation
- Defining and achieving the strategy roadmap for the NBA Fan Data Platform; including data modeling, implementation and data management for our Hadoop data lake and advanced data analytics systems
- Setting the vision, gathering requirements, gaining business consensus, performing vendor and product evaluations, mentoring business and development resources, and delivering solutions, training, and documentation
- Establishing standards and guidelines for the design & development, tuning, deployment and maintenance of information, advanced data analytics, and text mining models and physical data persistence technologies
- Providing leadership in helping to establish analytic environments required for structured, semi-structured and unstructured data
- Proven track record of driving rapid prototyping and design for new projects and analytic R&D environments
- Ability to translate broader business initiatives into clear team objectives and concrete individual goals, aligning appropriately with other groups for efficient, coordinated action
- Defining and implementing the strategy roadmap for enterprise data, implementation and data management for new data sources, publicly available data, business-to-business partnerships and advanced Data Analytics systems
- Drawing conclusions and effectively communicate findings with both technical and non-technical team members
- Working with staff and customers to understand the business requirements and business processes, designing data warehouse ("DW") schema and assist in defining extract-translate-load ("ETL") and/or extract-load-translate ("ELT") processes for DW and Big Data environments
- Understanding of advanced data analysis, including statistical analysis, data mining techniques, and use of computational packages such as SAS, R or SPSS
- Understanding discover-access-distill ("DAD") strategies to bring significant value to data understanding
- Developing specific metrics for quality and consistency reviews of data across the Big Data architecture
- Drawing conclusions and effectively communicating findings with both technical and non-technical team members, providing active leadership across the project team and business community
- Experience in and understanding of a wide variety of analytical processes (governance, measurement, information security, etc.)
- A solid understanding of key BI trends
- Selecting and integrating Big Data tools and frameworks required to provide requested capabilities
- Defining data retention and security policies
- Proficient understanding of distributed computing principles
- Experience with vendor managed Hadoop environments (Cloudera)
- Proficiency with current versions of Hadoop, MapReduce, HDFS
- Good knowledge of Big Data querying tools, such as Pig, Hive, and Impala
- Experience with other NoSQL databases, such as HBase, Cassandra, MongoDB
- Experience with Cloudera Hadoop platform
- Minimum of Bachelor's Degree (Computer Science, Mathematics, Statistics, Industrial Engineering)
- Advanced degree in Applied Mathematics, Business Analytics, Statistics, Machine Learning, Computer Science or related fields is a plus
- A minimum of 10 years of progressively responsible experience in a directly related area, during which both professional and management capabilities have been clearly demonstrated
- Extensive experience in multidimensional data modeling, such as star schemas, snowflakes, normalized and de-normalized models, and handling “slowly changing” dimensions/attributes (see the sketch after this list)
- Extremely strong analytical and problem-solving skills
- Outstanding verbal, written and visual presentation skills
- Strong negotiating and consensus building abilities
- Proven skills to work effectively across internal functional areas in ambiguous situations
- Structured thinker and effective communicator, comfortable with interacting with staff at all levels of the organization
- Ability to work independently, establishing strategic objectives, project plans and milestone goals
- Solid experience with Data Warehouse and BI systems; extensive experience collecting business requirements from customers and transforming them into database processes and data schemas
- Knowledge of relational SQL databases and SQL in at least one of the following environments: Oracle, Microsoft SQL Server
- Deep Knowledge of non-relational data architectures like Hadoop
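
As a small illustration of the star-schema modeling mentioned above, here is a Spark SQL rollup over a hypothetical fact and dimension table; the table and column names are invented, and Hive support is assumed.

```python
# Join a fact table to a dimension and aggregate, a typical star-schema query.
from pyspark.sql import SparkSession

spark = (SparkSession.builder.appName("star_schema_rollup")
         .enableHiveSupport().getOrCreate())

rollup = spark.sql("""
    SELECT d.region,
           d.fiscal_quarter,
           SUM(f.ticket_revenue) AS revenue
    FROM   fact_ticket_sales f        -- hypothetical fact table
    JOIN   dim_event d                -- hypothetical dimension table
           ON f.event_key = d.event_key
    GROUP  BY d.region, d.fiscal_quarter
""")
rollup.show()
```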
24
Big Data Engineer Resume Examples & Samples
- Organizing and managing the technical team
- Producing and maintaining requirements, system architecture, and design documents
- Interfacing and managing relationships with customers, contractors, and sub-contractors
- Maintaining an environment conducive to professional growth and staff development
- Managing a staff (directly and indirectly) and providing leadership, guidance, and mentoring
- Engaging in new technology research to facilitate innovative and leading edge approaches to internal and external opportunities
- Taking an active ‘hands-on’ role with all aspects of the program
- The ability and willingness to travel both domestically and internationally (as needed) is required
- Bachelor’s degree required and 12+ years of experience
- A minimum of 7 years of successfully performing in a role of major responsibility tied to large data intensive projects with tight deadlines
- A minimum of 5 years of professional experience as a Data Engineer / Data Architect / Solution Architect
- A minimum of 3 years of experience with designing solutions using big data tools and technologies
- Advanced degree in Computer Science / Computer Engineering / Information Systems or related discipline; industry-recognized certifications such as PMP, PMI-ACP, or ITIL are preferred
- A demonstrated history of contributing to the execution of large-scale technical projects
- Experience providing solutions to government customers in either the Fed/Civ or DoD space
- Experience with advanced IT technologies: Big Data engineering and integration, Service-Oriented Architectures (SOA), datacenter architectures, or web services
- Hands-on expertise in the full data lifecycle areas of: data modeling, ETL, data warehousing, reporting, data exploration and visualization, analytics and data provisioning
- Detailed knowledge of the big data analytics/data science industry, including key technologies, vendors, and trends
- Hands on experience in the areas of Big Data design and implementation for data intensive problems (high Volume-Velocity-Variety scenarios) using Hadoop-based infrastructure and services (such as MapReduce, YARN, NoSQL stores, Apache Spark, Accumulo)
- Experience with sizing/scaling distributed computing clusters, deployment models (cloud-based / on premise), operations, management, and security
- Capable of synthesizing a reference architecture from a requirements specification, industry best practices, and vendor whitepapers / artifacts
- Extensive experience in IT-relevant industry areas and an understanding of the surrounding strategic issues that may impact the customer
- A recognized leader and mentor who can serve as a subject matter expert for technologies/tools
- Experience with managing 3rd party technology partners and vendor relationships
- Superior analytical and problem-resolution skills, with attention to details
- High energy, strong work ethic, and able to perform within condensed timeframes
- Professional demeanor and attitude
- Capable of applying Information Technology Infrastructure Library (ITIL) concepts and practices
- Experience with large scale infrastructure modernization efforts
- Experience desired in the areas of: data standards and compliance, statistical analysis, machine learning techniques
- Knowledge of typical data governance issues, strategies, and policies
- Intimate working knowledge of the full Systems Engineering life cycle (Requirements, Analysis, Design, Implementation, and Testing) as well as project management methodologies
- Experience in major areas of Federal Civilian or DoD customer set
- Broad knowledge of IT terminology, methods, principles, concepts, and theories
- Proven leadership traits to include sound business judgment, keen conceptual skills, intellectual discipline, self-confidence, well-developed management and problem resolution skills
- Data management/Data Science certifications and/or training
25
Big Data Engineer Resume Examples & Samples
- Make major contributions to the Big Data direction and strategy for the group in partnership with their peers
- Executes highly complex functional work tracks for the team. Drives the execution of operational/technical objectives for data analytic outputs and business solutions
- Is an active technical leader within the department
- Identifies new areas of data, research and Big Data technology that can solve business problems
- Utilize effective project planning techniques to break down complex projects into tasks, manage scope of projects, and ensure deadlines are kept
- Leverages, contributes and uses Big Data best practices / lessons learned to develop technical solutions used for descriptive analytics, ETL, predictive modeling, and prescriptive “real time decisions” analytics
- Supports Innovation; regularly provides new ideas to help people, process, and technology that interact with analytic ecosystem
- Develops and builds frameworks/prototypes that integrate Big Data and advanced analytics to make business decisions
- Implements new areas of Big Data technologies, (ingestion, processing, distribution) and research delivery methods that can solve business problems
- Works with QR&A peers to ensure efforts within owned tracks of work will meet their needs
- Drives multiple tracks of complex work within the research group
- Co-mingles data sources to lead work on data and problems across departments to drive improved business & technical results through designing,
- 3 - 5 Years of experience or equivalent skills & ability
- Master’s or PhD preferred in a quantitative or scientific field such as computer science or computer engineering, or equivalent experience
- Experience in using software development to drive data science & analytic efforts
- Experience with database & ETL technologies
- Experience with various data types (e.g. Relational, Unstructured, Hierarchical, and Linked “Graph” Data)
- Experience in developing, managing, and manipulating large, complex datasets
- Proven ability to code and develop prototypes in languages such as Python, Perl, Java, C, R, SQL, and XSLT
26
Big Data Engineer Resume Examples & Samples
- Design and build scalable infrastructure and platform to collect and process very large amounts of data (structured and unstructured), including streaming real-time data from multiple data sources
- Work closely across an array of various teams and organizations in the company and industry (including partners, customers and researchers)
- Develops code necessary to complete the assigned project in the specified timeframe
- Designs and maintains Big Data analytical algorithms to operate on petabytes of data
- Writes, modifies, and debugs software largely focused in the back-end and data layer
- Identifies and reports problems in new and existing software
- Recreates reported software problems to facilitate solutions
- Assists in the preparation of internal software design documentation
- Architects and codes multi-environment system solutions utilizing various programming languages
- Employs best practices for design, development, unit testing and test plan development
- Supports completed software throughout the Software Development Life Cycle and in production
- Studies state-of-the-art development tools, programming techniques, and computing equipment
- Bachelor's degree in a technical field such as computer science, computer engineering or related field required
- 8+ years experience required
- Experience with a range of big data architectures, including OpenStack, Hadoop, Pig, Hive or other big data frameworks
- Broad understanding and experience of real-time analytics, NoSQL data stores, data modeling and data management, analytical tools, languages, or libraries (e.g. SAS, SPSS, R, Mahout)
- Strong understanding of data modeling
- Experience in the Financial Services Industry highly desired
- 10+ years of software development experience using multiple computer languages
- Experience building large scale distributed data processing systems/applications or large-scale internet systems (cloud computing)
- Strong foundational knowledge and experience with distributed systems and computing systems in general
- Hands-on engineering skills
- Ability to lead initiatives and people toward common goals
- Excellent oral and written communication, presentation, and analytical skills
- Bachelor's degree in Computer Science/Engineering, higher degrees preferred
27
Big Data Engineer Resume Examples & Samples
- Participate in collaborative software and system design and development of the new system
- Ensure conceptual and architectural integrity of the system
- Prototype and make informed decisions
- Experience in communicating decisions to team members
- Experience attending conferences / meet-ups
- Experience with graph databases
- Scala experience an advantage
- Experience in full development life cycle from design to production
- Have experience with JIRA, or other similar tools
28
Big Data Engineer Resume Examples & Samples
- Ask and answer interesting questions of structured and semi-structured data sources
- Deploy and automate different techniques to pick out unexpected lessons and relationships
- Build out, automate, and deploy machine learning and statistical learning pipelines into real-time applications (see the sketch after this list)
- Creation and management of ETL or Oozie jobs that handle multiple data feeds or sources
- Implement statistical analyses and modeling techniques in Spark or H20
- Interface with technical and non-technical individuals
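
A minimal sketch of a Spark ML pipeline of the kind described above; the training data path, feature columns, label column, and model output path are all hypothetical.

```python
# Assemble features, fit a logistic regression pipeline, score, and persist the model.
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("ml_pipeline").getOrCreate()

df = spark.read.parquet("/data/curated/training")    # placeholder training set

assembler = VectorAssembler(inputCols=["f1", "f2", "f3"], outputCol="features")
lr = LogisticRegression(featuresCol="features", labelCol="label")
model = Pipeline(stages=[assembler, lr]).fit(df)

# Score records and persist the fitted pipeline for use in a real-time application
model.transform(df).select("label", "prediction").show(5)
model.write().overwrite().save("/models/churn_lr")    # placeholder model path
```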
29
Big Data Engineer Resume Examples & Samples
- Engineer, design and build Big Data Analytics Platform for the software defined data center
- Build systems and software to deliver deep data science based insights
- Work on performance, scaling out and resiliency of distributed systems
- Work closely with development, test, documentation and product management teams to deliver high quality products and services in a fast paced environment
30
Big Data Engineer Resume Examples & Samples
- 5+ years of experience with using Java for back end data management
- 3+ years of experience with performing ETL using Hadoop
- 3+ years of experience with using Apache Pig for data ingest
- Experience with Apache Solr or Hadoop
- Experience with designing and developing automated analytic software, techniques, and algorithms
31
Big Data Engineer Resume Examples & Samples
- Ingest and Process data from various sources in raw, structured, semi-structured, and unstructured format into Big Data ecosystem
- Realtime data feed processing using Big Data ecosystem
- Design, review, implement and optimize data transformation processes in Big Data ecosystem
- Participate in overall test planning for the application integrations, functional areas and projects
- Work with cross functional teams in an Agile/Scrum environment to ensure a quality product is delivered
- 8-10 years of hands-on experience with enterprise scale applications and systems
- 5+ years of expertise in Big Data technologies in the Hadoop ecosystem
32
Big Data Engineer, Global Technology Resume Examples & Samples
- 5+ years experience working with the Linux operating system
- Experience using scripting languages such as Perl, Python and Shell
- Knowledge and experience working with Linux build environments (e.g. cobbler)
- Experience with Hardware/Software monitoring (e.g. Tivoli)
- Knowledge of Intel Hardware platform including out of band management
- Experienced in performance reporting and analysis
- Knowledge and experience with system automation would be a plus
- Ability to write operational documentation
33
Big Data Engineer Resume Examples & Samples
- Bachelor of Science in Computer Science, Engineering, or MIS, OR equivalent experience
- 5+ years of directly related experience
- Experience designing and developing within Big Data eco-systems
- Proficiency with RDBMS and NoSQL (Document, KV, and Column Family) data modeling
- Experience and proficiency in Java coding skills relevant for Big Data
- Experience with Spring XD, Kafka, and similar technologies is a plus
- Experience in Pivotal Big Data Suite is a plus
34
Big Data Engineer Resume Examples & Samples
- Design and implement optimum data structures in the appropriate data management system within Hadoop, Teradata, Oracle, SQL Server to satisfy the data requirements
- Identify and select the optimum methods of access for each data source (real-time/streaming, delayed, static)
- Determine transformation requirements and develop processes to bring structured and unstructured data from the source to a new physical Data Model
- Develop ETL/ELT using Informatica, Wherescape RED, SSIS, and Apache NiFi within the company’s global warehousing environment
- Work closely with the TD Data Science team to implement strategies for cleaning and preparing data for analysis, to develop data imputation algorithms, and optimize performance of big data and machine learning systems
- Work closely with our Software Engineering team to integrate your amazing innovations and algorithms into our production systems
- Understand principles of Big Data Visualization tools (Tableau, SpotFire)
- Optimize table schemas based on usage patterns. Actively participate in growing and developing the team of data engineers
- Follow best known security practices to ensure data quality and data integrity
- 4+ years’ experience in the following
- Develop ETL/ELT processes using Informatica, Wherescape RED, SSIS, and Apache NiFi for Corporate warehousing
- Experience and deep knowledge of Teradata architecture and tools
- Experience with big data processing and/or developing applications and data sources via Hadoop, Hive, HBASE, Spark, Sqoop, MapReduce
- Proficient with one or more of the high-level, object-oriented languages listed: C#, Java, Python, Scala
- Relational databases and SQL
- Understanding Agile software development frameworks – Scrum, Kanban
- Previous work with statistical analysis
35
Big Data Engineer Resume Examples & Samples
- Developing analytics for large non-homogeneous data sets
- Being creative and innovative in how complex (and big) data is processed in distributed systems
- Working with the end-user to make sure the analytics transform data to knowledge in very focused and meaningful ways
- Ability to work with a fast moving team, interested in the best technical solution, and open to new ideas that make the product better for our customer
- Bachelor's degree (or higher) in Computer Science, Engineering, or a Natural Science (Physics, Mathematics etc)
- You love solving new problems collaboratively and in creative ways
- Possess strong programming skills/knowledge in Java
- Interest in learning about analytic engines (Apache Storm, Apache Spark)
- Interest in learning about functional programming languages (Scala, Clojure)
- Interest in learning about NoSQL Databases (Accumulo), distributed search (ElasticSearch, Solr), and distributed systems design
- Interest in learning about Git and other Configuration Management Tools
36
Big Data Engineer Resume Examples & Samples
- Being creative and innovative with how complex (and big) data is processed in distributed systems
- Working with the end-user to make sure the analytics transform data to knowledge in a very focused and meaningful way
- Working with a team that is fast moving, being interested in the best technical solution, and open to new ideas that make the product better for our customer
- Bachelor's degree (or higher) in Computer Science, Engineering, or a Natural Science (Physics, Mathematics etc)
- Five (5)+ years of similar experience
- Must possess strong programming skills in Java
- Experience with Git and other Configuration Management Tools
- Ability to solve problems collaboratively and in creative ways
37
Big Data Engineer Resume Examples & Samples
- Forming strategy for data collection, data processing, and efficient use of the Optum Security Big Data Lake
- Collecting, cleaning, processing, and analyzing data related to enterprise cybersecurity
- Collaborating with external vendors and product teams to identify opportunities for new technologies or expanded use of existing technology
- Working with source data teams to facilitate and optimize data flow into the Data Lake
- Developing and engineering solutions for technical users to access data from the Data Lake through a variety of means, including APIs, flat file extracts, or web applications (see the sketch after this list)
- Remaining up to date and on the cutting edge of Big Data technology
- Constantly learning and adapting to new techniques, languages, and technologies
- Positions in this function develop and implement information security policies, standards and procedures to secure and protect data residing on systems
- Work directly with user departments to implement procedures and systems for the protection, conservation and accountability of proprietary, personal or privileged electronic data
- Coordinates, supervises and is accountable for the daily activities of business support, technical or production team or unit
- 2+ years of implementing Big Data ingestion solutions including use of Flume, Kafka, Spark, and NiFi
- 1+ years of utilizing Big Data resource and job managers including Oozie and YARN
- 2+ years of following and implementing Apache Top-Level projects
- Interest in learning implementations of Flink, Zeppelin, Tez, and more
- Background in data engineering, with an affinity for developing code from the command line
- Foundational knowledge in information technology, including hardware, networking, architecture, protocols, file systems and operating systems
- Familiar with Java or Python development in a large organization
- Undergraduate degree, or equivalent work experience
- Experience in the cyber security domain
- Petabyte-scale experience with Big Data engineering
- 5+ years of experience working with Big Data systems
- Development of end-user tools to access data systems
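
A hedged sketch of a small web API giving technical users access to a Data Lake extract, one of the delivery mechanisms listed above; Flask is assumed to be installed, and the extract path, endpoint, and filter parameter are hypothetical.

```python
# Serve rows from a flat-file extract over a simple HTTP endpoint with Flask.
import csv
from flask import Flask, jsonify, request

app = Flask(__name__)
EXTRACT_PATH = "/data/extracts/security_events.csv"   # placeholder flat-file extract

@app.route("/events")
def events():
    source = request.args.get("source")                # optional ?source= filter
    with open(EXTRACT_PATH, newline="") as f:
        rows = [r for r in csv.DictReader(f)
                if source is None or r.get("source") == source]
    return jsonify(rows[:1000])                        # cap the response size

if __name__ == "__main__":
    app.run(port=8080)
```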
38
Big Data Engineer Resume Examples & Samples
- Design data processing architecture for analyzing massive amounts of data in scalable ways
- Work closely with the research team to implement scalable matching algorithms
- At least 2 years of industry experience in similar positions
- B.Sc. in Computer Science or 8200/Kehiliya alumni
- Experience and knowledge of Big Data technologies such as Hadoop, Map/Reduce, Pig/Hive, Cascading/Scalding, Spark, Giraph & GraphLab
39
Big Data Engineer Resume Examples & Samples
- Design and implement large scale data management solution to support business analysis and data science
- Capture functional requirements and develop technical requirements which leverage best in class tools. Create project plans and identify resource requirements
- Design data models for enterprise-wide data integration incorporating structured data (e.g., real-time internal data, third-party data appends) and unstructured web-based data
- Develop ETL and APIs to support data science needs incorporating internal, industry and unstructured data
- Create efficient production pipeline for cloud based solutions business analysis and machine learning needs
- Provide thought leadership on technical solutions to leverage cloud-based big data capabilities
- Bachelor's Degree in Computer Science or a related technical field
- 4+ years’ work experience with ETL, Data Modeling, and Big Data Management
- Expert in writing SQL scripts and working with large data warehouses
- NoSql and Linux proficiency
- Experience with AWS/Hadoop/Hive/Pig and Spark a strong plus
40
Lead Big Data Engineer Resume Examples & Samples
- Become the internal owner for this data-driven initiative, execute all project aspects in partnership with internal and external constituencies
- Drive all technical discussions with the internal team, including how the program integrates into Allstate systems
- Identification, evaluation and prioritization of data sources
- Lead the team through the agile process; as product manager, you will own the user story map and manage the backlog and sprint schedule
- Responsible for product quality, version management, etc
- Execute the product vision, value proposition, and positioning for the data platforms
- Process input from Customers and internal sources to provide clear product and project prioritization
- Develop technical documents that define market and product requirements
- Work with Development, Operations, and Project Management to ensure development and delivery of products on time and manage expectations
- Exhibit strong sense of ownership and ability to work in global team environment
- Work closely with Allstate’s Data and Analytics team as well as the Quantitative Research and Analytics team to deliver data sets that meet their analytical needs
- Support ongoing relationships and strategic alignment with product stakeholders for key customer accounts
- Bachelor’s degree required, preferably in Computer Science/Business/GIS/Data Science
- Advanced Analytic Data Sourcing and Content Management skills
- Computer Proficiency in Oracle, UNIX, Linux, SQL
- Experience with Dimensional Modeling, SAS, Tableau, Cloudera or other Hadoop distributions
- Strong technical skills or experience in a technically complex product development environment (e.g. database or software applications)
- Background in information systems, big data and analytics experience a must with close familiarity with contemporary tools and agile methodologies
- Exposure to Hadoop, Python, Hive, Spark, Java a big plus
- Experience driving business decisions as a key technical lead gathering and managing business requirements
- Ability to execute in a cross-functional team environment spread across many departments, while driving for desired outcomes
- Ability to communicate complex ideas in a clear, concise manner both verbally and in writing
- Results oriented as demonstrated by proven ability to meet short deadlines and execute against multiple competing priorities with little direct supervision
41
Big Data Engineer Resume Examples & Samples
- Strong knowledge in Java and distributed algorithms
- Proficiency in Java MapReduce/Spark (using Java) development and experience with Hadoop or other data processing technologies required
- Knowledge of Hadoop-related technologies such as Azkaban, Oozie, Impala, Hive and Pig
- Experience developing large-scale data warehousing, mining or analytic systems
- Excellent debugging, critical thinking, and communication skills
- 5+ years of programming experience, preferably in Java or C/C++
42
Big Data Engineer Resume Examples & Samples
- Design, build and support scalable, high performance data applications, repositories and data governance related applications
- Choosing optimal solutions to use for these purposes, then maintaining, implementing, and monitoring them
- Involved in the retrieval, processing, fusion, and analysis of data, the development of backend data architecture to support rapid analytics, and the development and deployment of data visualization tools and dashboards for monitoring key performance indicators
- Define data architecture standards and establish best practices for data management
- Drive Big Data POCs across the business and find use cases for Big Data technologies
- Master's degree or above in Computer Science, Systems Engineering, Applied Mathematics/Statistics, Operations Research, or other physical science/engineering fields
- 5+ years of overall experience with 2+ years with Big Data tools and technologies
- Fluency in Hadoop Ecosystem, e.g. MapReduce, Pig, Hive, and Spark
- Algorithm development on high-performance systems
- Experience in data pipeline management, ETL/ELT, and data/system architecture
- Experience with NoSQL databases, such as HBase, Cassandra, MongoDB
- Good understanding of Lambda Architecture, along with its advantages and drawbacks
- Experience with Cloudera/MapR/Hortonworks/Databricks
- Experience with cloud providers – Amazon AWS, Microsoft Azure
- Experience in large scale machine learning
- Knowledge of data exploration and visualization tools (Tableau, Qlik, D3, etc)
- Big Data certifications – Hortonworks, Cloudera, Databricks
43
Big Data Engineer Resume Examples & Samples
- Create Entity Relationship (ER) Diagrams to the proposed database
- Create database objects such as tables and views; expertise in writing SQL queries (see the sketch after this list)
- Ensure that the code is written keeping in mind any security issues
- Participate in development and creation of Data warehouses and experience in AWS Cloud
- Solid knowledge of Hadoop
- Experience with Hadoop and big data ecosystem that will help you hit the ground running
- Process unstructured data into a form suitable for analysis – and then do the analysis
- Strong analytical skills and creative problem solver
- Programming experience, ideally in Python or Java, but we are open to other experience if you’re willing to learn the languages we use
- Deep knowledge in data mining, machine learning, natural language processing, or information retrieval
- Experience processing large amounts of structured and unstructured data. MapReduce experience is a plus
- Enough programming knowledge to clean and scrub noisy datasets
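
A self-contained illustration of the table, view, and SQL-query items above, using Python's built-in sqlite3 module; the schema and sample data are hypothetical.

```python
# Create tables and a view, load sample rows, and run a parameterized query.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Database objects of the kind an ER diagram would describe
cur.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL)")
cur.execute("""CREATE VIEW customer_spend AS
               SELECT c.name, SUM(o.amount) AS total
               FROM customer c JOIN orders o ON o.customer_id = c.id
               GROUP BY c.name""")

cur.executemany("INSERT INTO customer VALUES (?, ?)", [(1, "Acme"), (2, "Globex")])
cur.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                [(1, 1, 10.0), (2, 1, 5.5), (3, 2, 7.0)])

# Parameterized query: also the standard guard against SQL injection,
# in line with the security bullet above
cur.execute("SELECT total FROM customer_spend WHERE name = ?", ("Acme",))
print(cur.fetchone())   # (15.5,)
conn.close()
```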
44
Big Data Engineer Lead-intelligent Solutions Resume Examples & Samples
- Design, engineer and build data platform solutions using Big Data Technologies
- Own and establish a Reference Architecture for Big Data
- Lead innovation by exploring, investigating, recommending, benchmarking and implementing data centric technologies for the platform
- Be a proactive coding engineer
- Full-stack engineer, able to lead and run conference calls, document and execute an architecture vision, and get on a command prompt to troubleshoot and install
- 10+ years of professional experience with an established track record as an engineer
- Hands on development experience in an Agile environment
- Deep knowledge of component systems architecture including but not limited to distributed systems architecture and multiple programming languages supporting such architecture
- Knowledge of various Big Data components, vendors and technologies including Hadoop, Greenplum, Tableau, Gemfire, low latency solutions (networking / disk / etc)
- Proven leadership skills to participate as a senior technologist in JPMIS
- Prior experience with Hortonworks or Cloudera required
- College degree required
45
Big Data Engineer Resume Examples & Samples
- Develop consumer facing Big Data platform and products such as user behavior analytics, app/video/music/news recommendation, Ads DMP, etc
- Develop Proof-Of-Concept of new AI services using Machine Learning technologies
- Support data as service collaboration with partners
46
Big Data Engineer Resume Examples & Samples
- Design and development on the Hadoop software ecosystem: MapReduce, HBase, Hive, Pig; programming in Spark and Storm
- Programming in distributed messaging systems: Kafka, Storm, Spark
- Development in PIG/Python
- Be the Senior Hadoop /Big Data Developer and Data Architect
47
Big Data Engineer Resume Examples & Samples
- Be part of a team that designs, develops and implements analytical solutions for our clients
- Help us develop the Media Analytics Big Data and Cloud strategy
- Work with Data Scientists to automate and implement advanced analytical models
- Work with Data Scientists, Analysts and Consultants to design, build, document, test and implement services on which we'll run our business
- Stay ahead of the curve on developing technologies
- Evolve with the needs of a fast-paced, rapidly growing business
48
Big Data Engineer Resume Examples & Samples
- Participate with team of technical staff and business managers or practitioners in the business unit to determine systems requirements and functionalities needed in large/complex development project
- Review code written to advance application upgrades, extensions, or other development; analyze applications for data integrity issues
- Develop test protocols or plan for testing revised application and review test results
- Serve as project lead or lead technical staff in course of application development project
- May mentor less experienced technical staff; may use high end development tools to assist or facilitate development process
- 6+ years of hands on experience and strong and deep knowledge of Java application development
- Experience processing large amounts of structured and unstructured data. MapReduce experience is a huge plus
- Experience building and coding applications leveraging Hadoop Components: HDFS, HBase, Hive, Sqoop, Kafka, Storm etc
- Experience coding in more than one of the following: Java, MapReduce, Python, Pig, Hadoop Streaming, HiveQL (a Hadoop Streaming sketch follows this list)
- Experience developing RESTful Web Services
- Agile/scrum experience
- Vendor management experience leveraging staff augmentation and/or outcome based project delivery models; statement of work planning and incremental demand forecasting
- Experience managing on-site and off-site staff and demonstrated ability to collaborate and influence others to ensure timely and effective completion of project tasks
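To make the Hadoop Streaming bullet concrete, a minimal Python word-count sketch; the same file acts as mapper and reducer. The jar path and input/output paths in the comment are placeholders specific to a cluster.

```python
#!/usr/bin/env python
# Minimal Hadoop Streaming word count. A possible invocation (paths are placeholders):
#   hadoop jar hadoop-streaming.jar \
#     -input /data/text -output /data/wordcount \
#     -mapper "wordcount.py map" -reducer "wordcount.py reduce" -file wordcount.py
import sys

def map_stdin():
    # Emit one tab-separated (word, 1) pair per token.
    for line in sys.stdin:
        for word in line.strip().split():
            print(f"{word}\t1")

def reduce_stdin():
    # Input arrives sorted by key, so counts can be accumulated per word.
    current, count = None, 0
    for line in sys.stdin:
        word, value = line.rstrip("\n").split("\t", 1)
        if word != current:
            if current is not None:
                print(f"{current}\t{count}")
            current, count = word, 0
        count += int(value)
    if current is not None:
        print(f"{current}\t{count}")

if __name__ == "__main__":
    reduce_stdin() if sys.argv[1:] == ["reduce"] else map_stdin()
```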
49
Big Data Engineer Resume Examples & Samples
- Lead build of Hadoop Data Engineering Products
- Evaluate application risk analysis and complete all necessary documentation
- Work with Linux server admin team in administering the server hardware and operating system
- Perform PoC's to evaluate new products and enhancements to existing products deployed
- 4+ years of experience with ETL tools such as Informatica, DataStage, Ab Initio, NiFi, Paxata, or Talend preferred
- Experience working with multiple relational databases (Oracle, SQL Server, DB2, etc.) and NoSQL databases (MongoDB, Cassandra) preferred
- Experience with Data modeling, Data warehouse design, development
- Experience with BI Tools (Qlikview, Tableau) is preferred
- Exposure to Sentry, Cloudera Manager, Hive, HBase, Impala preferred
50
Big Data Engineer Resume Examples & Samples
- 3+ years of experience with Java or Python software engineering
- 2+ years of experience with Open Source Big Data technologies, such as Hadoop
- Experience with Cloud computing and NoSQL technologies, including Hbase
- Experience with designing and implementing solutions for ingesting data into Big Data environments
- Experience with the design, development, tuning, and creation of database environments
- Ability to exhibit flexibility, initiative, and innovation in dealing with ambiguous, fast-paced situations
- BA or BS degree in CS preferred
51
Big Data Engineer Resume Examples & Samples
- Bachelor's Degree in Computer Science or equivalent experience
- 7+ years of total experience
- Extensive hands-on experience with Hadoop (Cloudera), unstructured data sets and software development in Python, Scala, Java, and Spark
- Experience with data collection, curation, preparation and transformation
- Knowledgeable in data warehousing, reporting and business intelligence
- Experience with relational and non-relational databases
- Knowledge of data validation processes and software quality assurance
- Experience with working in a data driven business model
52
Big Data Engineer Resume Examples & Samples
- Responsible for the definition, design, construction, integration, testing, and support of reliable and reusable software solutions, addressing business opportunities
- Includes systems analysis, creation of specifications, coding, testing, and implementation of application programs and data interfaces
- Requires a previous domain of experience in applications development. Responsible for overall application design, including interfaces with other applications and systems
- Assures that application designs are consistent with industry best practices application attributes (including scalability, availability, maintainability, and flexibility)
- REQUIRED: Strong development expertise in Java and Spark; MongoDB and AWS preferred
- REQUIRED: Great communication and planning skills
- REQUIRED: Ability to build robust and scalable architecture using open source technologies
- 5+ years experience with Financial Services clients
- 3-5 years prior application development experience
- Demonstrated experience with large scale application integration efforts
- Have participated in more than 5 Agile projects
- Communication - Ability to communicate strategies and processes around data modeling and architecture to cross functional groups and senior levels
- Ability to influence multiple levels on highly technical issues and challenges
- Demonstrated experience influencing and coordinating third parties and suppliers
53
Big Data Engineer Resume Examples & Samples
- Overall understanding of computer systems, web application architecture and components
- Collaborate with other developers, share ideas, and stay open to dialog and discussion
- Data warehousing/OLAP knowledge
- Analytical and critical thinking
- Big Data technologies preferred
54
Junior Big Data Engineer Resume Examples & Samples
- Work and align with the existing GXL software infrastructure and collaborate closely with DEV teams
- Automate and optimize product deliveries by writing plug-in scripts for the existing automation infrastructure
- Participate in user acceptance testing of software components (business application layer)
- Support the proof-reading and updating of the documentation provided by the DEV teams
- Handover software components to the HUB factory with proper training and documentation
- Maintain existing ETL workflows, data management and data query components
- Work in an agile environment and continuously improve the agile processes
- Use algorithms for statistical analysis and participate in plausibility checks
- Draw on best practices and stay focused on quality
- Work with ticketing tools, e.g. JIRA
55
Big Data Engineer Resume Examples & Samples
- Establish and communicate fit for purpose analytical platforms for business prototypes
- Coach and mentor less experienced team members
- Designing Architectural processes and procedures
- Interfacing with vendors - manage POCs and RFPs
- Full stack engineer, being able to lead and run conference calls, document and execute an engineered vision, ability to get on command prompt and troubleshoot / install
- Knowledge of various Big Data components, vendors and technologies including Hadoop, Tableau, Gemfire, low latency solutions (networking / disk / etc)
- Java: 3+ years
- Hadoop: 2+ years
- DBA: 2+ years
- System Integration: 3+ years
- Security Frameworks
56
Big Data Engineer Resume Examples & Samples
- Proficient in Oracle, Linux, and programming languages such as R, Python, Ruby or Java
- Familiarity with new advances in the data engineering space such as EMR and NoSQL, and technologies like DynamoDB
- Demonstrated strong data modeling skills in areas such as data mining and machine learning
- Skilled in presenting findings, metrics and business information to a broad audience within multiple disciplines
- Solid experience in at least one business intelligence reporting tool, preferably Tableau
- Capable of investigating, familiarizing and mastering new datasets and technologies quickly
57
Big Data Engineer Resume Examples & Samples
- 5-10 years of experience with big data
- IT background with strategic experience
- Familiarity with programming languages such as Matlab, R, Java, Ruby or similar
- Familiarity with big data tools like Hadoop or similar
- Familiarity with disciplines such as machine learning, computer vision and predictive modelling
58
Big Data Engineer Resume Examples & Samples
- Data Storage: HDFS, HBase, HIVE
- Data Processing, Analysis & Integration: Spark, MapReduce, Impala, Sqoop
- Experience in Agile development methodology
- Responsible for unstructured tasks and the issues addressed are less defined requiring new perspectives, creative approaches and with more interdependencies
- Apply attained experiences and knowledge in solving problems that are complex in scope requiring in-depth evaluation
- A minimum of 8 years of experience is required. 9 to 11 years of experience is preferred
- A Bachelor of Science Degree in Electrical Engineering or Computer Science, a Master's Degree, or a PhD; or equivalent experience is required
59
Big Data Engineer Resume Examples & Samples
- Minimum of 2 years of experience building data systems focused on a Linux environment: installation, maintenance and operation
- Proficient with Hadoop Ecosystem, Data Mining and ETL
- Prior experience with Big Data Architecture and Operations support
- Experience building and maintaining Big Data environments
- Develop and sell customers on the value and benefits of Big Data changes, improvements
- Prior implementation of new Big Data initiatives and services
- Exhibit strong leadership skills in a collaborative production environment
- Experience working with international teams/cultures
- Prior experience working in a consultative capacity: requirement gathering, needs definition and end-user products
- Have a passion for Big Data infrastructure/operations
- Be current with emerging technology and how it can be integrated into our existing environment
- Be self-starters who work well in a fast-paced, dynamic environment
- Be focused on the needs of your customer while having the flexibility to change direction as required
60
Big Data Engineer Resume Examples & Samples
- Design QA, Production, Staging and Performance Environments in AWS
- Day-to-day management of requests and issues for all environments including prioritization for the off-shore team
- Use and develop Hadoop for increased productivity, security, reliability, and performance
- Develop tools for performance monitoring, security monitoring, and AWS resource creation scripts (see the sketch after this list)
- Work with stakeholders and development teams to provision solution
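As a hedged sketch of the AWS tooling described above, a small boto3 script that inventories running EC2 instances and creates an S3 bucket for environment artifacts. boto3, the region, and the bucket name are assumptions; credentials are expected to be configured in the environment.

```python
# Small AWS tooling sketch (boto3 assumed; region and bucket name are placeholders).
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
s3 = boto3.client("s3", region_name="us-east-1")

# Inventory running instances -- a building block for performance/security reports.
reservations = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]
for r in reservations:
    for inst in r["Instances"]:
        print(inst["InstanceId"], inst["InstanceType"], inst.get("PrivateIpAddress"))

# Create a bucket for environment artifacts (bucket names must be globally unique).
s3.create_bucket(Bucket="example-staging-environment-artifacts")
```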
61
Big Data Engineer / Run Lead Resume Examples & Samples
- Act as the subject matter expert for Big Data platforms and technologies
- Work across IT teams to ensure code quality, performance, and the scalability of deployed data products
- Familiarity with Pivotal Hadoop distribution and tools such as HAWQ and Spring XD
- Experience in Puppet and Chef for deployment and configuration management
- Familiarity with front end technologies (AJAX, .js) and UX best practices, or visualization and exploration tools like Tableau, Qlik, Spotfire, and Datameer
- Engaging personality with experience collaborating across teams of internal and external technical staff, business analysts, software support, and operations staff
- Excellent interpersonal skills in areas such as teamwork, facilitation, communication, and presentation to business users or management teams
62
Big Data Engineer Lead Resume Examples & Samples
- Minimum of 1+ years designing and building large scale data loading, manipulation, processing, analysis, blending and exploration solutions using Hadoop/NoSQL technologies (e.g. HDFS, Hive, Sqoop, Flume, Spark, Kafka, HBase, Cassandra, MongoDB etc.)
- Minimum 2 years designing and implementing relational data models working with RDBMS
- Minimum 2 years working with traditional as well as Big Data ETL
- Minimum 2 years of experience designing and building REST web services
- 2+ years of hands-on experience designing, implementing and operationalizing production data solutions using emerging technologies such as Hadoop Ecosystem (MapReduce, Hive, HBase, Spark, Sqoop, Flume, Pig, Kafka etc.), NoSQL(e.g. Cassandra, MongoDB), In-Memory Data Technologies, Data Munging Technologies
- Architecting large scale Hadoop/NoSQL operational environments for production deployments
- Designing and Building different data access patterns from Hadoop/NoSQL data stores
- Managing and Modeling data using Hadoop and NoSQL data stores
- Metadata management with Hadoop and NoSQL data in a hybrid environment
- Experience with data munging / data wrangling tools and technologies
63
MTS Big Data Engineer Resume Examples & Samples
- Passionate about data
- Strong analytical skills including the ability to define problems, collect data, establish facts, and draw valid conclusions
- Expertise in database programming
- Familiar with data movement techniques and best practices to handle large volume of data
- Experience with Agile, web services, unix and data mining systems
- Knowledge of NoSQL and Big Data solution
- ETL development experience is preferred
- Experience with data warehousing architecture and data modeling best practices
- Strong programming skills with understanding of performance, scalability, concurrency, scaling and extensibility
- Experience with File Systems, server architectures, and distributed systems
- Being a Hadoop ecosystem committer is a big plus
64
Big Data Engineer Resume Examples & Samples
- Develop additional capabilities for the CTO Data Lake and Big Data Analytics platform
- Design and implementation of the next generation data transport and transform engine for the Group CTO Data Lake
- Implement and support the CTO Relationship Store, a graph data store representing all the data in the CTO Data Lake with the goal of advanced graph analytics
- Design the next generation log management platform on Big Data technologies, for example Solr on Hadoop
- In-house logging, monitoring, and alerting solution for the CTO Data Lake
- Engage directly with stakeholders, develop relationships, clarify requirements, drive Big Data platform standards and data quality improvements
- Influence sound technical decisions in this young and dynamic technology landscape
65
Big Data Engineer Resume Examples & Samples
- Contribute to the design and architecture of the data warehousing system that can support analytical and real time reporting
- Design, develop and support the data pipeline integrating with the disparate source systems, optimal transformation code, and highly performant data models meant for storing rapidly evolving data
- Interface with business customers, and collaborate with BIEs and SDEs to deliver the complete data engineering solutions
- Contribute to the automation and optimization for all areas of DW/ETL maintenance and deployment (operational excellence)
- Training and mentoring
- Bachelor's degree in Computer Science or a related technical discipline
- Ability to write high quality, maintainable, and robust code, often in SQL
- Expertise in the design, creation and management of data warehousing data models and ETL
- Experience with Amazon Redshift (illustrated in the sketch after this list)
- Experience working in a UNIX/LINUX environment
- Experience with Big Data technologies
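As a hedged illustration of the SQL and Redshift bullets, a sketch of the staging-table merge pattern commonly used for loads into Amazon Redshift. The psycopg2 driver, the connection string, the S3 path, the IAM role, and the table/column names are all placeholders, not details from the listing.

```python
# Staging-table "upsert" sketch for Redshift (all identifiers are placeholders).
import psycopg2

MERGE_SQL = """
CREATE TEMP TABLE stage (LIKE target_orders);

COPY stage FROM 's3://example-bucket/orders/2024-01-01/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy'
    FORMAT AS PARQUET;

DELETE FROM target_orders
USING stage
WHERE target_orders.order_id = stage.order_id;

INSERT INTO target_orders SELECT * FROM stage;
"""

conn = psycopg2.connect(
    "dbname=analytics host=example.redshift.amazonaws.com port=5439 user=etl password=change_me"
)
with conn, conn.cursor() as cur:   # the connection context manager commits on success
    cur.execute(MERGE_SQL)
conn.close()
```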
66
Big Data Engineer Resume Examples & Samples
- 3+ years of experience with Python required
- Experience with NiFi, Storm, and other ingestion technologies
- Experience with Hadoop and Accumulo
- Experience with REST layers, MongoDB, and ElasticSearch
- Ability to learn new programming languages and architectures quickly
- Active Top Secret clearance required
- Experience with TCRI and Cognition preferred
- Experience in a rapid prototyping environment
67
Big Data Engineer Resume Examples & Samples
- Expert on Data Integration Tools preferably Informatica PowerCenter, PowerExchange
- Hands-on experience with the Big Data technology stack (Hadoop, Hive, Pig, HBase, Impala, Spark)
- Good understanding of different design and architectural patterns for big data solutions
- Good RDBMS knowledge with strong SQL skills
- Hands-on experience with MPP Database systems like Teradata/Greenplum is preferred
- Good understanding of Data warehousing concepts
- Good analytical and problem-solving skills
- Hands-on experience with Unix Shell Scripting
- Working experience with Java/Scala is a plus
- Good communication and interpersonal skills
68
Big Data Engineer Resume Examples & Samples
- Two plus years of industry experience in Scala or functional programming in general
- Comfortable in configuring and navigating multiple operating systems (Mac/Windows/Linux)
- A degree in Applied Mathematics, Computer Science, Engineering or equivalent experience
- Ability to design and execute on an acceptance criteria and success metrics requirements driven process
- Strong analytical and coding skills
- Experience in Python is preferred
- Two plus years of industry experience developing on medium to large sized Hadoop Cluster
- Hadoop experience should include Hive, Oozie and Yarn
- Spark/PySpark experience in a production setting is preferred
- Experience with writing Kafka producers and consumers is a huge plus
- Experience with Spark Streaming, Spark SQL in a production setting is a plus (see the streaming sketch after this list)
- Ruby or Bash/Shell scripting is a plus
- Knowledge and/or experience with health care information domains a plus
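For the Spark Streaming and Kafka bullets, a minimal PySpark Structured Streaming sketch that windows and counts messages. The broker address and topic are placeholders, and it assumes the spark-sql-kafka connector package is available on the cluster.

```python
# Minimal Structured Streaming sketch (broker/topic are placeholders;
# requires the spark-sql-kafka connector on the classpath).
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, window

spark = SparkSession.builder.appName("kafka-stream-sketch").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "events")
    .load()
    .selectExpr("CAST(value AS STRING) AS value", "timestamp")
)

# Count messages per one-minute window and stream the result to the console.
counts = events.groupBy(window(col("timestamp"), "1 minute")).count()

query = counts.writeStream.outputMode("complete").format("console").start()
query.awaitTermination()
```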
69
Think Big Data Engineer Resume Examples & Samples
- 2+ years of programming in Java, Python or Scala
- Experience with Hadoop, Spark or any other distributed system
- Experience in working on one of the Hadoop distributions (Hortonworks, Cloudera or MapR)
- Linux / Unix administration experience
- Experience with SQL and shell scripting
- Passion and ability to learn about open source software
- Programming experience with any of the Hadoop tools (HDFS, Pig, Hive, HBase, Sqoop, Kafka, etc.)
- Experience with implementing a project in Hadoop area
- Experience in working with databases: Teradata, DB2, PostgreSQL, Oracle, MySQL
70
Junior Big Data Engineer Resume Examples & Samples
- Implement Hadoop and Teradata Big Data products in our Big Data benchmark lab
- Design, and develop automated test cases that verify solution feasibility and interoperability, including performance assessments
- Research and implement new technologies in the Big Data space
- Apply knowledge of emerging technologies to define new solutions
- Usage of DBQL and SQL on Teradata or other major DBMS
- Experience with a major Hadoop distribution
71
Think Big Data Engineer Resume Examples & Samples
- Experience programming in Java, Python, SQL or C/C++
- Experience with SQL, NoSQL, relational database design and methods for efficiently retrieving data
- Firm understanding of when to use interfaces vs abstract classes, subclassing, designing classes for re-use, static string constants rather than in-line constants, and use of auto-closing resources or finally blocks
- Familiarity with JUnit, TestNG, Mockito, JMockit, etc
- Fluid understanding of aggregates, GROUP BYs, inner joins, outer joins, etc. (see the SQL sketch after this list)
- Ability to create a Java project, Maven project or similar with multiple packages and referenced dependencies; ability to load a project into an IDE without submitting IDE config files to source control
- Basic knowledge of using git: adding and committing, and branching
- Unix permissions (chmod, chgrp, sudo, umask), basic Unix commands (ls, top, find, xargs, grep, wc), Unix processes (kill, ps)
- Ability to edit config files, enable syntax highlighting, display or hide line numbers, use vim keyboard combinations like navigating to the end of a line, copy/paste using vim keyboard combinations
- Ability to edit config files, enable syntax highlighting, display or hide line numbers, use emacs keyboard combinations like navigating to the end of a line, creating macros using emacs keyboard combinations
- Knowledge of MapReduce and shuffle/sort, basics of YARN (ResourceManager, applications), key-value pairs, input readers, and Writables
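To ground the aggregates/joins bullet, a self-contained SQL illustration runnable with Python's built-in sqlite3 module (chosen only because it needs no setup; the tables are made up).

```python
# Inner vs. left outer join plus GROUP BY aggregates on a toy schema.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL);
INSERT INTO customers VALUES (1, 'Ana'), (2, 'Bo'), (3, 'Cy');
INSERT INTO orders VALUES (10, 1, 25.0), (11, 1, 40.0), (12, 2, 15.0);
""")

# INNER JOIN + GROUP BY: only customers that actually have orders appear.
inner = conn.execute("""
    SELECT c.name, COUNT(o.id) AS n_orders, SUM(o.amount) AS total
    FROM customers c
    JOIN orders o ON o.customer_id = c.id
    GROUP BY c.name
""").fetchall()

# LEFT OUTER JOIN: customers without orders still appear, with zero counts.
outer = conn.execute("""
    SELECT c.name, COUNT(o.id) AS n_orders
    FROM customers c
    LEFT JOIN orders o ON o.customer_id = c.id
    GROUP BY c.name
""").fetchall()

print(inner)  # e.g. [('Ana', 2, 65.0), ('Bo', 1, 15.0)]
print(outer)  # e.g. [('Ana', 2), ('Bo', 1), ('Cy', 0)]
```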
72
Big Data Engineer Resume Examples & Samples
- Programming language -- Java, Python, SQL
- AWS - RDS, EC2, Redshift, S3 and others; AWS cloud data migration, AWS security
- Database – Oracle, complex SQL queries, performance tuning concepts, Backups, Recovery, DR, BCP
- ETL tools: DataStage, Informatica
- Code/Build/Deployment -- git, svn, maven, sbt, jenkins, bamboo
- AWS Associate / Solution Architect certified
- Batch processing -- Hadoop MapReduce, Cascading/Scalding, Apache Spark, AWS EMR
- Stream processing -- Spark streaming, Apache Storm, Flink
73
Big Data Engineer Resume Examples & Samples
- Building large-scale data processing systems, he or she is an expert in data warehousing solutions and should be able to work with the latest (NoSQL) database technologies
- Implementing complex big data projects with a focus on collecting, parsing, managing, analyzing and visualizing large sets of data to turn information into insights
- Embrace the challenge of dealing with petabytes or even exabytes of data on a daily basis. He or she understands how to apply technologies to solve big data problems and to develop innovative big data solutions
- Building data processing systems with Hadoop and Spark using Java, Python or Scala should be common knowledge to the big data engineer
- Use machine learning/deep learning to discover insights
74
Big Data Engineer Resume Examples & Samples
- 5+ years of experience in a professional work environment
- 3+ years of experience with Big Data technologies, including Hadoop
- Experience with at least two object-oriented and scripted languages, including Java, JavaScript, C++, Perl, Python, and Ruby within the last 7 years
- 3+ years of experience with requirements generation within the last 7 years
- Knowledge of software development, requirements analysis, object-oriented analysis, design, testing, configuration management, and quality control
75
Big Data Engineer Resume Examples & Samples
- 1) Hive
- 2) Pig
- 3) Python and/or Shell Script
76
Big Data Engineer Resume Examples & Samples
- Design, construct, install, test, and maintain robust, scalable, secure, and fault-tolerant data management systems
- Create data management policies, procedures, and standards
- Integrate existing enterprise and one-off data sources into platform
- Ensure business users and teammates have access to the appropriate data sources
- Extensive knowledge of various databases
- Research opportunities for data acquisition and new uses for existing data
- Closely collaborates with business users & subject matter experts (i.e. data architects, modelers and IT team members)
- Researches and recommends data management best practices
- Educates others on data management principles, best practices, policies, procedures, and standards
- Aggressively and continuously advances skill set
- Makes decisions and recommendations on project priorities, functional design changes, process improvements and problem resolution
- Ensures that accurate and thorough documentation is maintained
77
Big Data Engineer Resume Examples & Samples
- Work on large-scale, multi-tier big data platform engagements
- Bachelor's degree and fifteen years of related work experience
- Five years' experience working with Scala, Java, Python or other predictive modeling tools
- Five years' experience in advanced math and statistics
78
Big Data Engineer Resume Examples & Samples
- Industry experience as a Data Engineer or related specialty (e.g., Software Engineer, Business Intelligence Engineer, Data Scientist) with a track record of manipulating, processing, and extracting value from large datasets
- Experience with Oracle, Redshift, Teradata, etc
- Coding proficiency in at least one modern programming language (Python, Ruby, Java, etc)
- Experience using machine learning and statistical tools such as Python/Pandas, R etc
- Linux/UNIX experience, including using it to process large data sets
79
MTS, Big Data Engineer Resume Examples & Samples
- Build world class big data platform to handle high-volume real-time data ingestion and analytics with prime focus at scalability, performance, stability and superior quality
- Flex the muscle of big data open source software to build a next-generation data platform
- Research, develop, optimize and innovate frameworks and related components for enterprise scale data processing, analysis and computations
- Collaborate with cross-functional team to design and architect enterprise data solutions to leverage data at its best
- Own the end-to-end development life cycle with high quality of enterprise solution/code you develop and evangelize the test driven development - (tests, code coverage, etc.)
- Develop Data Adapters/processors to ingest/process large volume of Unstructured, Semi Structured and Structured data from various data sources and types
- Develop validation frameworks and proactive monitoring solutions to detect data ingestion failures in the big data platform and take appropriate remedies (a minimal validation sketch follows this list)
- Follow a customer centric approach, and ensure the solutions developed actually meet the customer requirements
- Collaborate with people working on traditional Data Warehouse technologies and ensure consistency for the data exposed through these different channels
- 12+ years of experience in requirements analysis, design, development and testing of distributed, enterprise-class applications/platforms with particular attention to scalability and high performance, with demonstrable experience
- Exceptional hands on Object Oriented programming experience (Java/J2EE preferred)
- Hands on with Hadoop, HDFS, Web HDFS, Spark, HIVE, PIG, ZooKeeper and Kafka
- Experience in design, architecting and delivering enterprise software solutions at scale
- Experience in capitalizing enterprise data
- Experience with NoSQL databases: HBase, MongoDB
- Experience with RDBMS, O-R mapping, and application of distributed caching technologies
- Experience in implementing high volume web applications or large transactional client-server systems in Java or other languages is a huge plus
- API and REST based Web services development is a plus
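As a hedged sketch of the validation/monitoring bullet above, a small pre-load check that rejects an ingestion batch on missing columns, low row counts, or a high null rate. The file layout, thresholds, and the alert hook are placeholders.

```python
# Pre-load validation sketch for an ingestion pipeline (thresholds and
# column names are placeholders; alert() stands in for a real monitoring hook).
import csv
import sys

REQUIRED_COLUMNS = {"event_id", "event_time", "user_id"}
MAX_NULL_RATE = 0.02   # reject the batch if >2% of user_id values are empty
MIN_ROWS = 1_000       # guard against truncated or empty deliveries

def alert(message: str) -> None:
    print(f"INGESTION ALERT: {message}", file=sys.stderr)

def validate(path: str) -> bool:
    with open(path, newline="") as f:
        reader = csv.DictReader(f)
        missing = REQUIRED_COLUMNS - set(reader.fieldnames or [])
        if missing:
            alert(f"{path}: missing columns {sorted(missing)}")
            return False
        rows = nulls = 0
        for row in reader:
            rows += 1
            nulls += 0 if row["user_id"] else 1
    if rows < MIN_ROWS:
        alert(f"{path}: only {rows} rows, expected at least {MIN_ROWS}")
        return False
    if nulls / rows > MAX_NULL_RATE:
        alert(f"{path}: user_id null rate {nulls / rows:.1%} exceeds threshold")
        return False
    return True

if __name__ == "__main__":
    sys.exit(0 if validate(sys.argv[1]) else 1)
```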
80
Big Data Engineer Resume Examples & Samples
- Provides technical leadership in the Big Data space (Hadoop stack like M/R, HDFS, Pig, Hive, HBase, Flume, Sqoop, etc.; NoSQL stores like Cassandra, HBase, etc.) across Fractal and contributes to open source Big Data technologies
- Passionate about continuous learning, experimenting, applying and contributing towards cutting edge open source technologies and software paradigms
- Provide/inspire innovations that fuel the growth of Fractal
- Linux environment and shell scripting
- Cloud computing platforms (AWS)
- Distributed and low latency (streaming) application architecture
81
Big Data Engineer Resume Examples & Samples
- 2+ years of experience with Java programming
- 1+ years of experience with Hadoop or HDFS
- 1+ years of experience with Map Reduce, Pig, or Hive
- 1+ years of experience in working with relational databases and SQL, including MySQL, PostgreSQL, or Oracle
- BS degree in CS, Computer Information Systems, Information Systems, or related field
82
Big Data Engineer Resume Examples & Samples
- You would be responsible for evaluating, developing, maintaining and testing big data solutions for advanced analytics projects
- The role would involve big data pre-processing & reporting workflows including collecting, parsing, managing, analyzing and visualizing large sets of data to turn information into business insights
- The role would also involve testing various machine learning models on Big Data, and deploying learned models for ongoing scoring and prediction. An appreciation of the mechanics of complex machine learning algorithms would be a strong advantage
83
Big Data Engineer Resume Examples & Samples
- Responsible for producing RIAs outlining technical details & contributing to Stage 0/1 efforts by collaboratively working with Delivery Managers, designers/ Tech Leads, E2E designers and other O&T teams
- Hands-on/technical experience on one or more technologies in the platform (Hadoop, AbInitio, Informatica, WMB, Teradata)
- Prepare technical designs, service catalogues and present them for review in design review forums
- Maintain traceability of requirements & be accountable for the solution through all phases of project lifecycle (design, build testing through into live implementation)
- Ensure Designs comply to Barclays standards and policies (for Security, Data privacy, Accessibility)
- Design reusable solutions that adhere to pre-defined patterns as outlined by Portfolio Design Lead and/or Architecture teams. Reusability factor needs to be measured per design
- Design for Non functional requirements ensuring impacts on Infrastructure are highlighted. Implementation of Alerting/Monitoring solution for the project/service should be a part of the design
- Participate in the resolution of Production issues (including out of office hours for Sev 1 & 2 incidents) and collaborating with developers, functional designers, other technology and service management teams
- Work on one or more projects with primary focus on one or more technologies
- Contribute in the preparation of project plan & RAID log along with ADM, other functional designers & development teams. Active participation in daily scrum. Adhere to project timelines
- Design walkthroughs with development and test team. Handhold development team in the implementation of technical solutions, if required
- Work with ADM, development team and service management for AIS. Participate in KT sessions to support teams. Support the ADM on CAB calls where required
- Contribution in service improvement - Responsible to prepare designs & support the implementation of problems records / improvements identified by service management
- Demonstrates a comprehensive understanding of all systems within the Technology landscape and aware of Barclays System architecture
- Assist Portfolio Design Lead to present on EDA where required
- Delivery Capability
- Work closely with Business / Technology Office to manage project demand
- Participate in Portfolio Design Reviews
- Participate in Live Triage calls, RCA Reviews and Lead Defect Triage calls
- Drive reusability by ensuring Application catalogues are updated and referenced for all implementations
- Work closely with Portfolio Development teams
- Define innovative solutions, constantly exploring & challenging product capabilities, liaise with product experts from within the organisation or with vendor partners
- Support development teams in creating project plan, contribute to PIR Reviews
- Participate in Design & Engagement reviews and provide reports on design status
- Contribute to designs and implementations across clusters where applicable
- Should be able to work in an agile environment with evolving demands and increasing expectations
- Experience in planning and estimation
- Encourages others in working towards the organization goals; takes prompt and effective action to rectify problems within the team
- Inspires confidence by making and honoring commitments, demonstrates initiative and competence
- Coaches and fosters a professional, personal development environment
- Provides clear direction to staff in line with Barclays Group values: Recognizing success, Focus on value, Valuing ambition and helping to push personal boundaries
- Facilitates effective team interaction
- Continuously enhance technical capabilities via trainings/certifications
- Effectively utilizes each team member to his/her fullest potential
- Values learning by creating a climate of effective feedback, coaching, mentoring and personal development
- Candidate Profile
- Excellent written and verbal English language skills
- Well versed in ETL and/or Hadoop technologies
- Ability to quickly learn new technology, business domains and processes
- Proven ability to multi-task, be flexible and work hard, both independently and in a team environment, in a high pressured environment with changing priorities
- Willingness to work occasionally outside of normal business hours
- Adaptable and able to pick up new techniques
- High attention to detail and quality of work
- Able to create clear Design specifications from business requirements
- Risk and Control: All Barclays colleagues have to ensure that all activities and duties are carried out in full compliance with regulatory requirements, Enterprise Wide Risk Management Framework and internal Barclays Policies and Policy Standards
- Basic Qualifications
- Excellent understanding of ETL concepts and able to bridge the gap between Business and System requirements
- Understanding of Informatica, additionally knowledge of Abinitio will be helpful
- Awareness of other key technologies in the platform like WMB, JAVA, Teradata, Hadoop will be helpful
- Good knowledge of regulatory, legal, group Risk focussed implementations will be preferred
- Experience of designing strategic reusable solutions with focus on functional and non-functional requirements
- Keeps in touch with evolving technologies and strategies
- Represent Barclaycard Data Technologies and capabilities in required forums to drive an integrated solution
- Working with Portfolio Design Lead and Architects to define standards and patterns for implementation of reusable solutions; driving creation of assets for the platform
- Good understanding of business and operational processes
- Experience of leading problem / issue resolution
- Risk and issue management techniques and experience
- Ability to handle stressful situations with perseverance and professionalism
- Delivers engaging, informative, well-organized presentations
- Resolves and/or escalates issues in a timely fashion to line manager
- Understands how to communicate difficult/sensitive information tactfully
- Experience
- Has experience of building and maintaining effective relationships across teams
- Demonstrates presentation skill and the ability to communicate confidently and clearly with seniors, peers and subordinates
- Builds a collaborative culture and knows that collaboration is essential to maximize successful delivery
- Work effectively with global teams
- Communicates within the realms of responsibility within the team and to senior stakeholders
84
Big Data Engineer Resume Examples & Samples
- Define data sourcing options and strategies, specifically the identification of data availability gaps, and identify the preferred data sourcing approach for each data requirement
- Extracting, transforming and loading data to support prototyping and trialling of new services
- Develop profiles of data within the Single Customer View and other systems to provide visibility of data usability and to identify data quality issues for remediation
- Support and drive data assessments as part of establishing data suitability and data sourcing strategies
85
Big Data Engineer Resume Examples & Samples
- Install and Configure Cloudera
- Setting up of Ecosystem
- Developing solutions and applications to load data
- Setup monitoring on Cloudera Platform
- Set up Impala, create Impala tables, and develop scripts for data ingest (a scripting sketch follows this list)
- Overall experience of 8 years
- 3 years’ Experience in Installing and Configuring Cloudera
- 3 years’ Experience in Healthcare
- Hands-on experience in MongoDB and Hadoop
- Develop Analytical queries to be run in Impala, Hive
- Develop Map-reduce programs
- Schedule OOZIE jobs
- Design Oozie workflows
- Develop Data archival and analysis scripts
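To make the Impala scripting bullets concrete, a sketch that drives impala-shell from Python: a partitioned Parquet table DDL followed by an analytical query. The daemon host, database, table, and HDFS location are placeholders, and it assumes impala-shell is on the PATH.

```python
# Scripting Impala DDL and an analytical query (all identifiers are placeholders).
import subprocess

DDL = """
CREATE TABLE IF NOT EXISTS analytics.claims (
  claim_id STRING,
  member_id STRING,
  amount DOUBLE
)
PARTITIONED BY (claim_date STRING)
STORED AS PARQUET
LOCATION '/data/warehouse/claims';
"""

QUERY = """
SELECT claim_date, COUNT(*) AS n_claims, SUM(amount) AS total_amount
FROM analytics.claims
WHERE claim_date >= '2024-01-01'
GROUP BY claim_date
ORDER BY claim_date;
"""

def run_impala(sql: str) -> None:
    # -i points at the impalad host:port, -q runs a statement batch.
    subprocess.run(["impala-shell", "-i", "impalad-host:21000", "-q", sql], check=True)

run_impala(DDL)
run_impala(QUERY)
```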
86
Big Data Engineer Resume Examples & Samples
- Integrating any Big Data tools and frameworks required to provide requested capabilities
- Advise and implement Data lake security using Kerberos/Knox/Ranger/SSL etc
- Proficiency with Hadoop v2, MapReduce, HDFS, YARN, Tez
- Experience with building stream-processing systems, using solutions such as Storm, Spark Streaming, or NiFi
- Good knowledge of Big Data querying tools, such as Pig, Hive, Oozie and Impala
- Working knowledge of Apache Spark
- Completion of any MOOCS will be an advantage
87
Big Data Engineer Resume Examples & Samples
- Architecture and development of large scale big data solution to be used in a very large production environment
- Gather, analyze and maintain large data sets to provide answers to address hurdles and create innovative solutions in large-scale data infrastructures
- Design, build/develop, maintain, test and assess big data solutions
- Focus on the development of tools and technologies that are at the core of the company’s capabilities to manage, monitor and hunt for cyber security incidents
- System, network and application troubleshooting
- Provide engineering support for cyber security products developed
- 3+ years experience in a big data environment specific to engineering, IT architecture and/or software development for a large production environment
- Strong research, analytical and problem solving skills required to work with petabytes or even exabytes of data
- Proven experience in a Hadoop ecosystem: Hadoop, Map/Reduce, YARN, Spark/h2o, Hive/Pig, Impala/Drill, etc
- Proven hands-on experience in 2 or more of the following areas
- Experience with SIEM products: Qradar, Arcsight, Splunk, etc
- Knowledge in RIA: HTML5, node.js, bootstrap, angular, extJS, etc
88
Big Data Engineer Resume Examples & Samples
- 3+ years of proven industry experience working on the backend services or infrastructure for a large scale, highly distributed web site or web service
- Solid foundation in computer science fundamentals with sound knowledge of data structures, algorithms, and design
- Familiarity with JVM profiling and GC tuning. Experience with tools like YourKit, JMH, statsd-jvm-profiler or equivalents a plus
- Experience designing and deploying large scale distributed systems, either serving online traffic or for offline computation. Experience in concurrency, multithreading and synchronization
- Bonus points for experience with Hadoop, MongoDB, Finagle, Kafka, ZooKeeper, Graphite (or other time series metrics stores), JVM profiling, Grafana, Linux system administration, Chef (or equivalent experience with Puppet, Ansible, etc.), Aurora (or other cluster management frameworks like Marathon or Kubernetes)
- Comfortable in a small and fast-paced startup environment
- Bachelors Degree or higher in Computer Science, Electrical Engineering or related field
89
Big Data Engineer Resume Examples & Samples
- Around 3-5 years of experience in Hadoop Stack (MapReduce Framework, HDFS, HBase/Hive)
- 3-5 years of experience in Scripting Language (Linux, SQL, Python). Should be proficient in shell scripting
- 3+ years of experience in Applications / Administrative Support (data integration, ETL, BI operations, Analytics support) engagements on large scale distributed data platforms for e.g. Teradata, DB2, Oracle, etc
- Basic understanding of Hadoop Connectors, Oozie, Flume, Sqoop, Thrift, Avro, Zookeeper
- Demonstrate a keen interest in, and fair understanding of, "big data" technology and the business trends that are driving the adoption of this technology
- Demonstrate analytical and problem solving skills; particularly those that apply to a Big Data environment
- Experience may include (but is not limited to) build and support including design, configuration, installation (upgrade), monitoring and performance tuning of any of the Hadoop distributions
- Development, implementation or deployment experience in the Hadoop ecosystem
- Experience with ANY ONE of the following
- Proficiency in Hive internals (including HCatalog), SQOOP, Pig, Oozie and Flume/Kafka
- Proficiency with at least one of the following: Java, Python, Perl, Ruby, C or Web-related development
- Development or administration on NoSQL technologies like Hbase, MongoDB, Cassandra, Accumulo, etc
- Development or administration on Web or cloud platforms like Amazon S3, EC2, Redshift, Rackspace, OpenShift, etc
- Development/scripting experience on Configuration management and provisioning tools e.g. Puppet, Chef
- Web/Application Server & SOA administration (Tomcat, JBoss, etc.)
- Handle deployment methodologies, code and data movement between Dev., QA and Prod Environments (deployment groups / folder copy/ data-copy etc.)
- Should be able to articulate and discuss the principles of performance tuning on Hadoop
- Develop and produce daily/ weekly operations reports and metrics as required by IT management
- Analysis and optimization of workloads, performance monitoring and tuning, and automation
- Addressing challenges of query execution across a distributed database platform on modern hardware architectures
- Experience on any of the following will be an added advantage
- Hadoop integration with large scale distributed DBMSs like Teradata, Teradata aster, Vertica, Greenplum, Netezza, DB2, Oracle, etc
- Data Modeling or ability to understand data models
- Knowledge of Business Intelligence and/or Data Integration (ETL) solution delivery techniques, models, processes, methodologies
- Exposure to data acquisition, transformation & integration tools like Talend, Informatica, etc. & BI tools like Tableau, Pentaho, etc
90
Big Data Engineer Resume Examples & Samples
- 8+ years of strong Programming experience in Python or C++; 6+ years of strong SQL experience, including designing data warehouse schemas and tuning performance of very complex SQL queries
- Experience processing large amounts of structured and unstructured data at scale (including writing scripts, web scraping, calling APIs, writing SQL queries, etc.)
- Ability to clean and scrub noisy datasets
- Ability to build custom software tools
- Experience with AWS/EMR and other web services
- Experience with statistical analysis, data mining, machine learning, natural language processing, or information retrieval
- Experience with Spark and MapReduce
- Experience with Data Visualization (Tableau)
91
Big Data Engineer Resume Examples & Samples
- Develop real time/batch data streaming systems, using the latest technologies
- Provide innovative architectural solutions for complex data issues
- Handle the entire development cycle – architecture, design, development, deployment and monitoring
92
Big Data Engineer Resume Examples & Samples
- As a Big Data Engineer you will utilize programming tools such as Spark and other Hadoop ecosystem tools (e.g. Hive, Pig, Sqoop, Flume, MapReduce) to bring together a diverse and massive set of data sources and make them easily accessible and useful for further analysis
- Extract data from multiple structured and unstructured feeds by building and maintaining scalable ETL pipelines on distributed systems (a minimal PySpark sketch follows this list)
- Collaborate closely with business SME’s and data scientists to create dashboards, visualizations and transfer prototypes into large scale and efficient solutions
- Optimize, tune, and scale the Hadoop ecosystem, working with Architects, to meet SLA requirements
- Explore emerging technologies in Big Data for consideration and implementation
- Perform troubleshooting and in-depth analysis of issues and provide clear, permanent solutions
- Work with administrators to own and manage critical enterprise Hadoop infrastructure components
- Work with business and engineering team to understand data, structures, define information needs and develop prototype/solutions that supports desired business and technical capabilities
- You are curious, have a research mindset, love bringing logic and structure to loosely defined unstructured problems and ideas
- Ability to effectively work independently/motivated; ability to handle multiple priorities
- Strong knowledge of Big Data and ecosystem tools
- 1-2+ years of hands on experience with data processing in Hadoop environment using Spark, or Pig, or Map-Reduce or any other relevant scripts
- 5-10+ years of overall experience in enterprise landscape
- Strong expertise in Enterprise Reporting, Data Warehousing and analytics architecture
- Deep expertise in writing complex SQL and ETL batch processes
- Functional experience in manufacturing/supply chain/operations is a plus
- Amazon web services: Hadoop on EC2, S3, and Redshift experience preferred
- Excellent oral/written/presentation skills
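As a hedged illustration of the ETL bullets above, a minimal PySpark batch pipeline: read raw CSV from a landing zone, de-duplicate and derive a partition column, and write partitioned Parquet. Paths and column names are placeholders.

```python
# Minimal PySpark batch ETL sketch (paths and column names are placeholders).
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, to_date

spark = SparkSession.builder.appName("batch-etl-sketch").getOrCreate()

raw = spark.read.csv("hdfs:///landing/orders/*.csv", header=True, inferSchema=True)

orders = (
    raw.dropDuplicates(["order_id"])
       .filter(col("amount") > 0)
       .withColumn("order_date", to_date(col("order_ts")))
)

(orders.write
       .mode("overwrite")
       .partitionBy("order_date")
       .parquet("hdfs:///warehouse/orders"))

spark.stop()
```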
93
Big Data Engineer Resume Examples & Samples
- As a Big Data Engineer, you will be an integral member of our threat intelligence service (AutoFocus) team, responsible for architecture, design and development
- Ability to communicate with research teams and data scientists, finding bottlenecks and resolving them
- Design and implement different architectural models for our scalable data processing, as well as scalable data storage
- Build tools for proper data ingestion from multiple heterogeneous sources
- 2+ years of experience in design and implementation in an environment with hundreds of terabytes of data
- 4+ years of experience with large data processing tools such as: Hadoop, HBase, Elastic Search, etc
- 2+ years experience with Java
- Passion for doing research on large data sets containing ill-formatted data
- Excellent inter-personal and teamwork skills
- BS in Computer Science/Engineering, or equivalent experience
94
Big Data Engineer Resume Examples & Samples
- B.E./B.Tech in Computer Engineering or MCA or an equivalent education
- Good hands on experience working in Agile (Scrum)
- Excellent oral and written communication skills in English
- Technical Skills: PL/SQL, Linux Shell Scripting, RDBMS concepts, Teradata developer, Business Objects reporting
- Developing data analytics, data mining and reporting solutions using Teradata and Business Objects
- Developing Design/functional and Application information Document for the solution
- Working on projects that provide real-time and historical analysis, decision support and reporting services
- Leveraging SQL data platforms, data mining & visualization
- Developing reporting solution and designs/ develop semantic layer
- Work in a fast-paced agile development environment to quickly analyze, develop, and test potential use cases for the business
- Take valid use cases from ideation through development to production
- Write efficient code to extract, transform, load, and query very large datasets
- Provide timely communication to business partners on use cases and project status
95
Big Data Engineer Resume Examples & Samples
- Technical Skills: PL/SQL, Linux Shell Scripting, RDBMS concepts, Java, Hadoop developer (Hive, HDFS, Kafka, Spark, Python)
- Developing data analytics, data mining and reporting solutions using Teradata, Hortonworks Hadoop, Business Objects / Tableau
- Working on projects that provide real-time and historical analysis, decision support, predictive analytics, and reporting services
- Leveraging Aster/Hadoop and SQL data platforms, data mining, visualization and machine learning
- Write efficient code to extract, transform, load, and query very large datasets, including unstructured data
- Develop standards and new design patterns for Big Data applications
- Understand MapReduce concepts and master the tools and technology components within the Hadoop and Aster environments
- Mentor and assist users in accessing the data
96
Big Data Engineer Resume Examples & Samples
- From idea to delivery take lead on bespoke development solutions to meet specific requirements
- As the lead developer, make sure to develop and maintain Bespoke Development solutions for Global Services
- Responsible for day to day mentoring, guidance and prioritizing for other developers working on Bespoke Development solutions
- Responsible with other developers for development and maintenance of the Travelwire product suite, including Mid-Office and other Travelwire modules
- Deliver innovative solutions which remove cost, deliver value and enable business competitiveness for our customers – both on Bespoke Development solutions and Travelwire Mid-Office
- Instrumental in developing software products that serve our customer's needs in the most effective manner
- Ensure solutions are properly documented and tracked for future reference with proper source code and version control
- Contribute to all aspects of technical systems design, performance, deployment, systems management and systems maintenance
- Provide, present and justify the detailed architectural designs and in adherence with internal project delivery processes
- Maintain an up to date knowledge of technical standards, development methodologies, technical trends and innovations in the industry
- Participate in demos, workshops or other internal/external meetings on special project
- Master's/Postgraduate degree in Computer Science, Applications, Mathematics or a numerate discipline
- 3-5 years of architecture and design leadership
- Demonstrate substantial in-depth knowledge and experience in a specific area of Big Data and Java development
- Proven J2EE and/or .NET web services development experience
- Proven track record of developing technical architectures on time and within budget
- Detailed knowledge of database structures
- Ability to meet specific targets and plan own work to deliver according to priorities
- Able to think strategically and challenge current technical thinking
- Experience of data manipulation and presentation technologies, preferably in a travel systems environment
- Strong Object-Oriented Design experience and capability
- Proven knowledge and understanding of networking technologies
- Excellent command of verbal and written English, and preferably one of the Scandinavian languages
- Ability to multitask and prioritize within a continuously changing environment
- Dynamic, agile and self-motivated with high level of energy and enthusiasm
- Solid commercial understanding because we work close with customers and customer team
- Good planning and project management skills
- Management skills to plan and prioritize development work and guide other developers
97
Big Data Engineer Resume Examples & Samples
- Bachelor's degree in Computer Science, Management Information Systems or equivalent experience
- Minimum 5+ years of experience with distributed, highly scalable, multi-node environments
- Experience with Big Data technologies and an understanding of the concepts and technology ecosystem around both real-time and batch processing in Hadoop is required
98
Gfg-big Data Engineer Resume Examples & Samples
- 3-4+ years of experience with high-traffic, high-volume, highly scalable distributed systems and client-server architectures (clustering, partitioning, sharding, etc)
- Some experience working with Data Scientists, finding solutions that let them work efficiently while manipulating high volumes of data, and the ability to work with them and the teams to bring their algorithms to scale
- Strong operational experience with AWS and container approaches
99
Big Data Engineer Resume Examples & Samples
- Designing and architecting big data solutions
- Commissioning and installing new applications and COTS products
- Monitoring performance and managing parameters
- Configuration management
- Controlling access permissions and privileges
- Ensuring that storage, archiving, back-up and recovery procedures are functioning correctly
- Developing, managing, and testing back-up and recovery plans
- Collaborating with IT project managers, database engineers and application programmers
- Communicating regularly with technical, applications and operational staff
- Experience with applying big data technologies at scale in bare metal and cloud infrastructure
- Experience with designing big data architectures
- Experience with Linux or Unix system administration
- Experience with DevOps processes and technologies
- Experience with Hadoop setup, configuration, benchmarking, and management of a multi-node cluster
- Experience with Cloudera Manager
- Experience with installing CDH on servers
- Experience with Hadoop technologies like Pig, Hive and HBase
- Experience with Kerberos and Securing Hadoop Clusters
- Experience with systems monitoring tools to tune, configure, and administer Hadoop clusters
- Experience with automation and configuration management platforms such as Ansible, Salt, Puppet, or Chef
- Experience with cloud platforms such as AWS, Openstack, Azure, and/or GCE, along with cloud storage technologies such as S3, Swift, and/or Ceph
- Experience with Ruby, Python, Java, shell scripting, Spark, and/or Kafka
- Knowledge of the software development life cycles
- Demonstrated customer service and interpersonal skills
- Technical and analytical problem solving skills
- An interest and capacity to learn new skill sets
100
Big Data Engineer Resume Examples & Samples
- Management of Hadoop cluster, with all included services
- Proficiency with Hadoop v2 and Hadoop-ecosystem including MapReduce, HDFS, Spark, Knox, Yarn
- Experience with Linux Operating System
- Experience with NoSQL databases, such as ElasticSearch, Cassandra, MongoDB, Redis, Neo4j, Couchbase
- Excellent Knowledge of DevOps tools such as Puppet, Ansible, Jenkins, GIT, Kibana
- Experience with Cloud specifically with vCloud Director, Openstack
- Excellent communication, facilitation and customer facing skills
- Excellent understanding of IT & networking fundamentals
- Good understanding of ITIL service support processes
- Very good understanding of project management
- Very good technical reports writing skills
- One relevant certification from ITIL, PMP, PRINCE2, Green Belt, ITI Diploma
- Fluent English speaking
- Fluent French speaking is a plus
- Bachelor's or Master's degree in computer science or software engineering
- 5+ years’ experience in the Information Systems industry
- 3 years of strong experience in Big Data domain in managing Production support
- Ability to possess logical and systematic approach to problem resolution across a broad spectrum of technologies in applications support environment
- Good inter-personal skills, able to deal with all levels within an organization and relieve potential conflicts
- Able to work accurately and clearly explain technical matters to non-technical users in both written and verbal forms
- Ability to work under pressure and deal with multiple tasks
- Strong ability to work with International Customers
101
Big Data Engineer Resume Examples & Samples
- Collaboration with our Quants and global technology teams to create data pipelines for a new data analytics platform and build interfaces for data to be extracted and analysed
- Data Warehousing, ETL development and testing experience in Big Data technologies is essential whilst experience working with relational databases such as Oracle 11g or higher will be beneficial
- The candidate will be versatile with an appetite to learn given the potential to cross-train in other technologies including Business Intelligence Visualisation, Web Frameworks and Java
- Familiarity with web, FTP, API, SQL and related ETL technologies
- A knowledge of modern NoSQL data stores
- Experience in resolving maintenance issues, data issues and bug fixes
- Performance tuning and optimization techniques
- Experience working directly on the command line / shell scripting experience
- Data system operational knowledge, such as scheduler, query performance, security/encryption, etc
102
Big Data Engineer Resume Examples & Samples
- BS Computer Science or other relevant technical degree and/or related experience
- Java or similar language development experience
- Deployment automation experience with scripting, chef, puppet, etc
- Linux and RPM packaging experience
- Experience with Hadoop, Spark or related projects
- Experience with virtualized environments and cloud services such as AWS
- Ability to communicate comfortably, at different levels, with different stakeholders
- Worked on applications or web services deployed at scale
- Experience with Apache Hadoop ecosystem applications: Hadoop, Hive, Oozie, Presto, Hue, Spark, Zeppelin and more!
- Commits or contribution via code or technical guidance to Apache Hadoop, Spark or related big data projects
103
Big Data Engineer Resume Examples & Samples
- 5 years of experience
- Minimum of 1 year experience on Spark. Exposure to Spark Streaming and MLLib preferred
- Minimum of 2 years of experience on Hadoop AND MapReduce AND Oozie AND Hive AND Pig
- Minimum of 2 years of experience on core Java OR Scala
- Exposure to Python OR iPython OR any other Scripting language
- Experience with NoSQL databases, such as HBase OR Cassandra OR MongoDB
- Exposure to Big Data Exploration, Profiling, Quality and Transformation
- Proficient in designing efficient and robust ETL/ELT workflows, schedulers, and event-based triggers (a minimal trigger sketch follows this list)
- Exposure to Data Mining preferred
- Scrum methodologies
- Insurance Knowledge
104
IT Big Data Engineer Resume Examples & Samples
- Proficiency with development tools (Git/SVN, Artifactory, Maven, Jenkins)
- Minimum 2 years’ experience developing with Java, Python, Scala
- Minimum 2 years’ experience with Hadoop ecosystem (Spark, Hive, HBase, Pig)
- Experience using data streaming stacks (NiFi, Kafka, Spark Streaming, Storm)
- Experience with relational databases (Teradata, MS SQL, Oracle, MySQL)
- Experience building web applications (Django, Bootstrap, React, Angular)
- Experience with R / Shiny, MLlib, SciPy, NumPy, etc
- Ability to undertake minimal international travel
105
Big Data Engineer Summer Intern Resume Examples & Samples
- Relocation is not provided; local candidates preferred
- You have the ability to obtain a U.S. Security Clearance
- You are working on a Bachelor’s Degree (or higher) in Computer Science, Engineering, or a Natural Science (Physics, Mathematics, etc.)
- You are a Junior or above in College with a minimum GPA of 3.0 (Please upload Transcript.)
- You have a solid foundation in computer science, with strong competencies in algorithms, data structures, and software design
- You are familiar with JavaScript, CSS3, HTML5, XML, JSON
- You possess solid sensibilities in Web and User Experience design
- You have exposure to Real Time Web enabling technologies (Websockets, Comet, SSE) and MVC frameworks (e.g. AngularJS) or want to learn more
- You have exposure to analytic UI development (Kibana) or want to learn more about them
- You are comfortable with Git and other Configuration Management Tools
- You love solving new problems in creative ways
- You work best solving problems collaboratively
106
Big Data Engineer Resume Examples & Samples
- 3-7 years of technical business experience, including ETL experience
- 3-5 years of solid Hadoop experience – Sqoop, Pig, Hive…
- Bachelor’s Degree in Computer Science or related area, or equivalent experience
- Intermediate experience with Teradata, SQL, IBM DataStage, Hadoop, Unix Scripting
- Technical experience analyzing and understanding large data sets
- Ability to clearly articulate pros and cons of various data and technologies
- Ability to document data use cases, solutions and recommendations
- Ability to support program and project managers in the planning, estimation and implementation of projects
- Ability to quickly develop business acumen and data subject matter expertise
- Ability to independently perform detailed analysis of business problems and technical environments
- Ability to work creatively and analytically in a fast paced and agile environment
- A self-starter with the ability to work in cross-functional teams
- Have a passion for data understanding, use, manipulation, delivery and documentation
107
Big Data Engineer Resume Examples & Samples
- Write code supporting the development of big data oriented solutions solving complete data integration or analytic use cases
- Interpret customer needs and requirements on a detailed level and match these back to proposed services or solutions
- Participate in the analysis and design of a solution, which may include selecting technologies or estimating level of effort
- Understand all elements of a client's technical computing environment, and work with vendor IT teams to integrate various solution components as required (e.g. networking, security, compute, storage, etc)
- Support the installation, configuration, and tuning of various technologies comprising a solution including creation of appropriate operational and design documentation
- Assist clients with trouble-shooting of existing solutions related to performance, scalability, maintenance, or cost of ownership
- Ability to travel to client locations within Europe as necessary
108
Junior Big Data Engineer Resume Examples & Samples
- Executes moderately complex functional work tracks for the team
- Partners with the D3 (Data, Discovery & Decision Science) teams on Big Data efforts
- Partners closely with team members on Big Data solutions for our data science community and analytic users
- Leverages and uses Big Data best practices / lessons learned to develop technical solutions
- Contributes to the development of moderately complex technical solutions using Big Data techniques in data & analytics processes
- Develops innovative solutions to Big Data issues and challenges within the team
- Contributes to the development of moderately complex prototypes and department applications that integrate big data and advanced analytics to make business decisions
- Uses new areas of Big Data technologies (ingestion, processing, distribution) and researches delivery methods that can solve business problems
- Understands the Big Data related problems and requirements to identify the correct technical approach
- New grad or 1-2 years of experience, with equivalent skills & ability
- Bachelor's or Master’s degree in a quantitative or scientific field such as computer science or computer engineering, or equivalent experience
- Understanding of using software development to drive data science & analytic efforts
- Experience with database & ETL concepts
- Experience in developing, managing, and manipulating complex datasets
- Experience in working with statistical software such as SAS, SPSS, MATLAB, R, CART, etc
- Ability to communicate and present advanced technical topics to technical audiences
109
Think Big Data Engineer Resume Examples & Samples
- Fluid understanding of dependency injection (DI): standards-based javax.inject annotations, constructor injection vs. field-level injection, providers, etc
- Understanding of various analytic and visualization utilities available in R
- Ability to configure source control plugins, fine-tune JVM run options, and use coverage tools and collaboration tooling for the team's IDE
- Bash scripting, ssh port forwarding, proxying, Gnu Screen or Tmux, Unix networking (netstat, lsof, ifconfig), and Unix Piping
- Basic admin functionality using Cloudera Navigator and Ambari, and log analysis
- Submit jobs to the cluster; understand what an RDD is and apply basic transformations and actions like map() or collect() (see the PySpark sketch after this list)
- Ability to store and read data efficiently in a NoSQL OLTP data store such as HBase, Cassandra, or CouchDB
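To make the Spark items in this list concrete, here is a minimal, hedged PySpark sketch of the kind of work described: a small job that could be submitted with spark-submit and that applies basic RDD transformations and an action. The file name and sample data are purely illustrative.

    # rdd_basics.py -- submit with: spark-submit rdd_basics.py
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("rdd-basics").getOrCreate()
    sc = spark.sparkContext

    # Build an RDD from an in-memory collection (illustrative data only)
    numbers = sc.parallelize(range(1, 11))

    # Transformations are lazy: nothing executes until an action is called
    squares = numbers.map(lambda n: n * n)
    evens = squares.filter(lambda n: n % 2 == 0)

    # collect() is an action: it triggers execution and returns the results to the driver
    print(evens.collect())

    spark.stop()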
110
Big Data Engineer Resume Examples & Samples
- Implement ETL processes
- Develop automation and management capabilities of Hadoop cluster, with all included services
- Ability to troubleshoot and solve any ongoing issues with operating the cluster
- Proficiency with Hadoop v2, MapReduce, HDFS
- Experience with Cloudera/MapR/Hortonworks
- Knowledge of the Ceph filesystem
- Demonstrated ability to conceive, manage, and complete software deliverables
- Linux systems administration skills, across distributions, and especially in a cloud or virtualized environment
- Understanding of IP networking and traffic scaling
- Experience with agile development methodologies, rapid application development, and project management
- Proven ability to design and present understandable and practical solutions to complex problems
- Demonstrated leadership skills in a fast-paced, team-driven environment
- Strong verbal and written communication skills, including visual presentation skills
- Demonstrated experience in research data collection, analysis, and presentation
- Experience with intellectual property portfolio management, especially patents and trademarks
- Ability to work effectively across internal and external organizations
- Ability to travel when needed; expected travel is 5-25%
- Ability to promote technologies to large audiences or top level executives
111
Big Data Engineer Resume Examples & Samples
- Can identify the specific functions and responsibilities and key customers and relationships of own IT department/function
- Can describe rationale for major IT initiatives and identify major IT issues
- Able to interpret and apply policies and standards
- Contributes to the development and implementation of standards and procedures
- Has a working knowledge of one or more of the components in the technology strategy
- Can identify the technologies in all the architecture patterns
- Has participated in the evaluation and implementation of new technologies
- Can engage with heads of relevant business area
- Can support where appropriate senior business stakeholders
- Demonstrates a good end to end understanding of the systems processing for the business area & relationships
- Working knowledge of the main features of prototyping
- Has defined and produced relevant models and is able to interpret and explain client's existing models and associated business processes
- Familiar with the syntax, structure, features and facilities of at least one language
- Can define, document and interpret an application system design and program specifications
- Can develop structured programming specifications
- Has led a wide variety of complex or multiple application development initiatives using a structured life-cycle methodology
- Experienced with the use of specific application development toolkits
- Can define, deliver and interpret and validate test data and scripts
- Has worked with preparation, administration and validation of tests
- Experienced with deployment of new or enhanced applications into production
- Experience in diagnosing and resolving root cause of performance problems
- Monitors performance of major elements on an on-going and historical basis
- Some experience in maintaining and supporting multiple major or critical applications
- Has supported software quality assurance reviews and monitoring activities
- Knowledge of major functions and features of workflow analysis tools
- Resolves major problems and fluently applies escalation and notification procedures for incidents
- Can describe specific techniques for isolating a problem and defining resolution approaches
- Experienced with most of the major development and delivery phases and activities
- Has participated in most of the delivery activities on multiple development projects
- Experience managing or leading a specific administration or support function
- Develops and maintains standards and guidelines for systems administration sub-function
- Has experience of incident & problem management disciplines
- Has experience of maintaining configuration items, raising changes and planning releases
- Has good operational knowledge of the service desk and incident systems
- Integrates configuration management into daily procedures
- Has experience in developing and maintaining technical reference documents
- Familiar with technical documentation standards, guidelines and best practices
- Working knowledge of scripting/utility tool component, features and facilities
- Experience with tools and techniques for building audit and control into an application
- Familiarity with checkpointing, backup and recovery
- Familiar with existing policies and practices
- Experienced with a security process within an application and an operating system
- Has an understanding of the operational issues and considerations for securing information
- Can describe local hardware, software and telecommunications components
- Familiar with concepts of open architecture
- Familiar with existing interfaces as well as integration and migration plans within own area
- Aware of major issues and considerations for a successful system integration
- Familiar with current and planned integration initiatives
- Has led, and participated in, technical design reviews
- Can describe tasks, activities, deliverables and key concerns of technical design
- Has a wide network within the organisation and shows integrity while addressing challenging situations; experienced at supporting or implementing function-wide risk management processes and tools
- Experienced with planning, estimating, staffing, organising, and managing multiple application development initiatives; has monitored and dealt with critical paths and risk areas
- Contributes to, and encourages ideas and builds on the suggestions of others
- Enlists others in working towards the organisation's goals; takes prompt and effective action to rectify problems within the team
- Applies feedback and changes behaviours accordingly; encourages knowledge sharing
- Justifies training requests in terms of expected benefits for the individual and the organisation
112
Think Big Data Engineer Resume Examples & Samples
- Excellent interpersonal skills. Strong verbal and written communication, with good exposure to working in a cross-cultural environment. You may be requested to communicate and present some topics to small audiences. Experienced in writing documents that communicate complex technical topics in an accessible manner
- Experience with data architectures. Knowledge and experience of structured, semi-structured and unstructured data. Knowledge and experience of Big Data and/or Analytics technologies and tools, as the Hadoop ecosystem, Apache Hive, and Spark, among others. Sound knowledge of data governance and security, and data-related methodologies
- A strong background in software development, continuous integration, tooling and software architectures is needed, either in enterprise environments, system integration, or science-related ones. You are proficient in either Java, Scala, C, Python, SQL, Ruby, Clojure, etc. You may also have some experience with Tableau, Shiny, R, JavaScript, and so forth
- Three years or more experience in relevant roles
113
Big Data Engineer Resume Examples & Samples
- Partner with data scientists, software engineers and bioinformaticians to explore structured and unstructured data
- Responsible for data and schema modeling, data quality, ETL and data integration
- Evaluate the latest open-source and commercial technologies to meet growing feature demands on the platform, and be ready to defend technical recommendations with quantifiable arguments
- Brainstorm new ideas, features, and applications to be built on proprietary Platform and take initiative to prototype Proof-of-Concept solutions
- 8+ years of experience with data driven applications
- Solid experience building scalable data integration/ETL pipelines with focus on maintaining data quality and version control
- Strong data modeling and schema design skills
- Experience using big data technologies handling very large data sets solving various problems (Hadoop, Redshift, Cassandra, ElasticSearch or similar)
- Experience with one or more ETL tools like CloverETL, Pentaho, Informatica or similar
- Experience with AWS services in an enterprise
- Familiarity with genomic and phenotypic data is a huge plus
- Deep understanding of crunching/SQL over very large data sets, as well as related technologies
- Have experience with algorithm optimization
- Bachelor's Degree in Engineering, Computer Science, or another related discipline
- Master's degree or four-year degree in a business- or industry-related field of study: Computer Science, Engineering, Electronics, MIS, Telecommunications, IT disciplines or Business Administration
- Software product design & architecture experience
- Experience with Hive, Spark, Shark and other Big Data technologies
- NoSQL databases (Elasticsearch, MongoDB)
- Graph databases (Neo4j, Titan)
- Graph Processing (Giraph, GraphX)
- Machine learning and scientific programming frameworks
- Familiarity with genomic and phenotypic data
114
iXp Intern, Big Data Engineer Resume Examples & Samples
- Work with data scientists and other developers and cross-functionally with product managers and other engineering teams to deliver predictive models from concept to product
- Model Building Cycle – pull, cleanse, and validate data for analysis and modeling
- Create and implement predictive models for various business process like payment risk, relationship discovery, categorization (unsupervised learning), catalog optimization, etc
- Work with software engineers to put predictive models in production
- Determine the tracking necessary to enable analytics of our products and features by working closely with product and engineering partners
- Currently be enrolled in a Masters (CS) degree program
- Good conceptual understanding of CS fundamentals like Data Structures, Algorithms, OS, and DBMS
- Proficiency in SQL and at least one programming language
- Hands on experience with Python and/or R is a plus!
- Excellent problem solving skills and troubleshooting abilities
- Experience in establishing and sustaining excellent relationships with the extended team
- Excellent verbal and writing skills
- Must be able to work onsite in Palo Alto, CA during summer 2017
115
Junior Big Data Engineer Resume Examples & Samples
- Responsible for creating technical design, building Transformation rules, Unit testing, review checklist
- Work closely with Solution Architects to learn new technologies to upskill
- Ability to partner with data delivery teams to deliver solutions based on priority
- 4 years of programming experience demonstrating a comprehensive application of programming principles, methodologies, tools, and techniques; demonstrated aptitude for performing system-level technical designs
- Experience working (development/administration) with Hadoop, Cloudera, HIVE, NoSql data platforms (Cassandra, MongoDB), Pub/sub messaging (Rendezvous, AMPS, Kafka, etc.), Stream processing (Storm, Spark Streaming, etc.)
- Strong knowledge of SQL and of different DBMS systems and architectures such as hierarchical, network, relational and MPP systems
- Sound knowledge of Hadoop and Spark Architecture
- Proficient in Java or Scala along with proficiency in Scripting languages like Python and Unix
- Good understanding of Data Structures and SDLC process
116
Big Data Engineer Resume Examples & Samples
- 8+ years of Professional Services (customer-facing) experience architecting large scale storage, data center and/or globally distributed solutions, plus 2+ years designing and deploying 3-tier architectures or large-scale Hadoop solutions
- Ability to understand and translate customer requirements into technical requirements
- Strong Experience with Java, Frameworks, Integrations
- Strong experience with DW environment
- Experience implementing data transformation and processing solutions using Hadoop ecosystem of tools
- Experience designing data queries against data in the HDFS environment using tools such as Apache Hive (see the Hive query sketch after this list)
- Experience implementing MapReduce jobs
- Experience setting up and managing multi-node Hadoop clusters
- Strong experience implementing software and/or solutions in the enterprise Linux or Unix environments
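To illustrate the Hive query item above, the hedged sketch below issues a HiveQL-style query through Spark SQL with Hive support enabled; the database, table, and column names are assumptions used only for illustration.

    from pyspark.sql import SparkSession

    # enableHiveSupport() lets Spark resolve tables registered in the Hive metastore
    spark = (SparkSession.builder
             .appName("hive-query-example")
             .enableHiveSupport()
             .getOrCreate())

    # Hypothetical table of click events stored in HDFS and registered in Hive
    daily_counts = spark.sql("""
        SELECT event_date, COUNT(*) AS events
        FROM analytics.click_events
        WHERE event_date >= '2017-01-01'
        GROUP BY event_date
        ORDER BY event_date
    """)

    daily_counts.show(20)
    spark.stop()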
117
Big Data Engineer Resume Examples & Samples
- Bachelor's degree in Computer Science, Engineering, Technical Science or 5 years of IT/Programming experience
- Minimum 2+ years of designing, building and operationalizing large scale applications using Hadoop and NoSQL components - HDFS, HBase, Hive, Sqoop, Flume, Spark, MapReduce, Kafka, Cassandra, MongoDB etc. in production
- Minimum 2+ years of organizing and architecting data at scale for Hadoop/NoSQL data stores
- Minimum 1+ year of MapReduce coding, including Java, Python, Pig programming, Hadoop Streaming, HiveQL for data analysis of production applications
- 2+ years of hands-on experience designing and implementing data applications in production using emerging data technologies such as the Hadoop ecosystem (MapReduce, Hive, HBase, Spark, Sqoop, Flume, Pig, Kafka etc.), NoSQL (e.g. Cassandra, MongoDB), in-memory data technologies, and data munging technologies
- Minimum 2+ year of experience implementing large scale cloud data solutions using AWS data services e.g. EMR, Redshift
- Minimum 2+ years working with traditional as well as Big Data ETL tools
- Minimum 2+ years designing and implementing relational data models working with RDBMS
- Minimum 2+ years of experience designing and building REST web services
- Minimum 2+ years of building and deploying Java apps to production
- Minimum 1+ years of administering and managing large production Hadoop/NoSQL clusters
- Responsibilities include the following
118
Big Data Engineer Resume Examples & Samples
- 2+ years of experience with Hadoop Spark Ecosystem & Distributed Data Storage technologies
- Prior and recent experience building stream-processing systems, using solutions such as TCP, Kafka or Spark Streaming (see the streaming sketch after this list)
- Ability to develop software, scripts and/or processes to integrate data from multiple data sources
- Experience with NoSQL databases, such as MongoDB or Elastic
- Expertise with relational databases (T-SQL a plus)
- Good understanding of Lambda Architecture
- Knowledge of or interest in new technologies
- Creative thinker able to provide an idea and execute it from inception to launch
- Good communications and experience as part of a development team
- Ability to be resourceful and creative in a fast-paced daily release environment
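For the stream-processing item above, the following is a minimal sketch (not a production design) of a Spark Structured Streaming job that consumes a Kafka topic; the broker address and topic name are placeholders, and the job assumes the spark-sql-kafka connector package is available on the cluster.

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col

    spark = SparkSession.builder.appName("kafka-stream").getOrCreate()

    # Read a stream from a hypothetical Kafka topic
    events = (spark.readStream
              .format("kafka")
              .option("kafka.bootstrap.servers", "broker1:9092")
              .option("subscribe", "events")
              .load())

    # Kafka delivers key/value as binary; cast the value to a string for downstream parsing
    parsed = events.select(col("value").cast("string").alias("raw_event"))

    # Write to the console for demonstration; a real job would sink to HDFS, a database, etc.
    query = (parsed.writeStream
             .format("console")
             .outputMode("append")
             .start())

    query.awaitTermination()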
119
Big Data Engineer Resume Examples & Samples
- Design and implement data management for Hadoop/NoSQL in a hybrid environment
- Design and implement large scale data architectures using Hadoop/NoSQL in a hybrid environment
- Data profiling and data analysis using emerging data technologies
- Bachelor's degree in Computer Science, Engineering, Technical Science or 3 years of IT/Programming experience
- Minimum 2+ years of building and deploying Java applications in a Linux/Unix environment
- Minimum 1+ years of architecting and organizing data at scale for Hadoop/NoSQL data stores
- Minimum 1+ years of coding with MapReduce Java, Spark, Pig, Hadoop Streaming, HiveQL, Perl/Python/PHP for data analysis of production Hadoop/NoSQL applications
- Minimum 2 years working with traditional as well as Big Data ETL tools
120
Technical Lead-big Data Engineer Resume Examples & Samples
- Develop highly scalable and extensible Big Data platform solutions enabling collection, storage, modeling, analysis and processing of large datasets
- Extensive knowledge and understanding of the Big Data ecosystem, including Spark, Hadoop, NoSQL databases, Kafka and MapReduce
- Provide Solution Architecture for complex issues and large datasets
- Experience with Ingestion and storage patterns, best practices and available technologies and limitations
- Continuously evaluate new technologies and apply industry standards to enable rapid deployment
121
Big Data Engineer Resume Examples & Samples
- Designing and implementing modern, scalable data pipelines for our clients leveraging Hadoop, NoSQL, Apache open source and emerging technologies, covering on-premise and cloud-based deployment patterns
- Providing advisory services and thought leadership on the selection and deployment of commercial and open source tools to process streaming, micro-batch and low latency workloads
- Designing and implementing data access patterns and data pipelines on Hadoop and NoSQL platforms leveraging agile, DevOps, continuous integration and continuous delivery approaches
- Provide innovative design and deployment approaches that leverage the best of innovations in in-memory processing, agile delivery, automated testing, containerisation etc. to enhance the speed and flexibility of testing analytics hypotheses and apply machine learning at scale
- Working closely with technology partners, Accenture Technology Labs and Innovation centres to incubate emerging technologies and build prototypes/demos to enhance our data engineering codebases and frameworks
- Supporting hackathons which allow us to work with publicly available data and open source tools to create new data engineering patterns and codebases
- Mentoring the next wave of data engineers and providing your skills and experience to enable clients to modernise their existing data pipelines
- Processing frameworks & programming tools: Spark (Scala/Python/Java), Kafka, Flink
- Hadoop platforms & distributions: Cloudera, Hortonworks, BigInsights, MapR, EMR
- NoSQL: HBase, Cassandra, MongoDB, CouchDB, Memcached, DynamoDB, Druid, BigTable
- Search: SOLR, ElasticSearch
- Data modelling & data pipeline design: iterative data pipeline development from raw, curated, integrated to published data, with fit for use data modelling on Hadoop and NoSQL platforms
- Design skills: data product design thinking, features definition, prototyping, usability testing and data visualisation literacy
- Relational DBs: Teradata, Oracle, Netezza, SQL Server
- Client facing skills: ability to build trusted relationships with client stakeholders and act as a trusted adviser
- Agile and DevOps delivery practices: familiarity with agile and DevOps delivery and deployment methodologies, experience with continuous integration, automated code reviews and regression testing using tools such as Atlassian Jira, Confluence, Cloudbees Jenkins, Selenium Grid, SonarQube and Docker
- Data wrangling: Trifacta, Paxata, Datameer, Tamr, Alteryx etc
- APIs & Datatypes: experience working with RESTful APIs (including Cortana, Watson, TensorFlow), JSON, XML, unstructured data
- Machine Learning tools, interfaces & Libraries: R, R-Studio, Spark R, sparklyr, MLlib, H2O etc
- Cloud platforms: AWS, Azure, GCP
- Enterprise data integration, BI and analytics platforms: Informatica, Talend, InfoSphere, SAS, RevoR, QlikView, Qlik Sense, Tableau, Spotfire, D3.js
- Other tools, databases and Apache projects: Google BigQuery, Presto, Drill, Kylin, OpenTSDB etc
- Solution architecture: end to end analytics solution architecture design and delivery estimation
- End to end ML and data engineering pipeline development, performance tuning and testing with Spark (preferably Scala or Python)
- Experience with Docker and Mesos
- Executing proof of concepts to assess the value of Big Data / Machine Learning use cases
- Proven ability to apply analytical and creative thought
- Proven ability to deliver high profile activities to tight timescales
- Proven success in contributing to a team-oriented environment
- Experience delivering projects within an agile environment
- Keenness to learn and try new things
- Ideally, educated to degree level
122
Big Data Engineer Resume Examples & Samples
- Bachelor's degree in a computer related field or equivalent professional experience is required
- 5 years’ experience
- Real project experience as a Data Wrangler/Engineer across design, development, testing, and production implementation for a Big Data Project
- Knowledge about the Sales, Service and/or Claims applications is a plus
123
Big Data Engineer Resume Examples & Samples
- Debugging (Java / network level / Ruby / JavaScript / …)
- Experience in system administration for Linux (Redhat flavor) – console level
- Familiar with big data environments and streaming processing (Cloudera/Hortonworks Hadoop stack, Spark, Kafka, Yarn, Hive, Hbase, Cassandra)
124
Big Data Engineer Resume Examples & Samples
- Pursuing (or completion of) a degree in a technical discipline e.g. Computer Science, Security, Software Development, etc
- Extensive coding experience (Python, Java, C++, SQL)
- Understanding of open source systems, data models, ETL workflows, and job scheduling
- Experience with big data technologies (Hadoop) preferred
- Experience with cloud systems preferred
- Able to thrive when working as a part of a small, collaborative team
125
Lead Big Data Engineer Resume Examples & Samples
- Responsible for the implementation and on-going administration of Hadoop infrastructure including the installation, configuration and upgrading of Cloudera distribution of Hadoop
- File system, cluster monitoring, and performance tuning of Hadoop ecosystem
- Resolve issues involving MapReduce, YARN, and Sqoop job failures; analyze and resolve multi-tenancy job execution issues
- Design and manage backup and disaster recovery solution for Hadoop clusters
- Work on Unix operating systems to efficiently handle system administration tasks related to Hadoop clusters
- Manage the Apache Kafka and Apache NiFi environments
- Participate in and manage the data lake data movements involving Hadoop and NoSQL databases like HBase, Cassandra and MongoDB
- Work with data delivery teams to set up new Hadoop users. This includes setting up Linux users, setting up Kerberos principals, and testing HDFS, Hive, Pig and MapReduce access for the new users (see the onboarding sketch after this list). Configure Hadoop security aspects including Kerberos setup and RBAC authorization using Apache Sentry
- Create and document best practices for Hadoop and big data environment
- Participate in new data product or new technology evaluations; manage the certification process and evaluate and implement new initiatives in technology and process improvements
- Interact with Security Engineering to design solutions, tools, testing and validation for controls
- Evaluate the database administration and operational practices, and evolve automation procedures (using scripting languages such as Shell, Python, Chef, Puppet, CFEngine, Ruby etc.)
- Advance the cloud architecture for data stores; work with the TIAA Cloud engineering team on automation; help operationalize Cloud usage for databases and for the Hadoop platform
- Engage vendors for feasibility of new tools, concepts and features, understand their pros and cons and prepare the team for rollout
- Analyze vendor suggestions/recommendations for applicability to TIAA’s environment and design implementation details
- Perform short and long term system/database planning and analysis as well as capacity planning
- Integrate/collaborate with application development and support teams on various IT projects
- Ten or more years of overall IT/DBMS/Data Store experience, preferably with a background in Oracle database engineering
- Three or more years of experience in big data, data caching, data federation and data virtualization management, with experience in leveraging Hadoop and/or NoSQL preferred
- Two or more years of expertise and in-depth knowledge of SAN, system administration, VMware, backups, restores, data partitioning, database clustering and performance management
- Experience writing shell scripts, and automating tasks. Exposure to Chef or/and Puppet is preferred
- Experience in the implementation details of Hadoop Clusters, Impala, and HBase and other emerging data techniques
- Experience with monitoring technologies for databases
- Experience with orchestration techniques, infrastructure automation and cloud deployments
- Understanding of Linux, Windows, Docker / containers
- Familiarity with “IaaS” and “DBaaS” service-oriented concepts preferred
- Familiarity with cloud architecture (public and private clouds) – AWS, Azure preferred
- Working knowledge of VMware and VMware vCloud Automation Center (vCAC) preferred
- Proficiency in using Microsoft Office (Word, Excel, PowerPoint) to document, present, communicate and articulate idea/s and concepts
- Strong communication skills and the ability to collaborate and work in teams with other engineers, working in a fast paced and ever changing technical environment
- Application development experience – database programming, scripting, setting up web sites and dashboards
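As a rough sketch of the Hadoop user-onboarding duties described above (not a definitive procedure), the Python snippet below wraps the usual HDFS and Kerberos command-line steps; the username, group, and realm are placeholders, and the exact commands depend on the cluster's security configuration.

    import subprocess

    USER, GROUP, REALM = "alice", "analysts", "EXAMPLE.COM"  # placeholders

    def run(cmd):
        """Run a shell command and fail loudly on a non-zero exit code."""
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Create the user's HDFS home directory and hand ownership to the new user
    run(["hdfs", "dfs", "-mkdir", "-p", f"/user/{USER}"])
    run(["hdfs", "dfs", "-chown", f"{USER}:{GROUP}", f"/user/{USER}"])

    # Create a Kerberos principal for the user (assumes admin access to the KDC)
    run(["kadmin.local", "-q", f"addprinc -randkey {USER}@{REALM}"])

    # Smoke test: list the new home directory as a basic HDFS access check
    run(["hdfs", "dfs", "-ls", f"/user/{USER}"])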
126
Big Data Engineer Resume Examples & Samples
- Expert in writing code that meets standards and delivers desired functionality using the technology selected for the project with high quality
- Responsible for programming a component, feature or feature set
- Contribute to design discussions
- Skilled in breaking down problems, analyzing problem statements and estimating effort
- Code Reviews across the team
- Skilled in core data structures and algorithms and implements them using appropriate chosen language
127
Big Data Engineer Resume Examples & Samples
- Bachelor’s degree required; a degree in a quantitative discipline: statistics, applied mathematics, computer science, data mining, machine learning, or some other empirical science
- Documented experience in a business intelligence or analytic development role on a variety of large scale projects (3 years minimum)
- Expertise in Hadoop and related technologies
- Excellent Data analysis skills
- Knowledge of Apache Hadoop, Apache Spark (including pyspark), Spark streaming, Kafka, Scala, Python, MapReduce, Yarn, Hive, Oozie, SQL, Impala, HBase
- Experience with distribution vendor like Cloudera
- Good knowledge of Python
- Knowledge of Microservice Architecture is a plus
- Ability to rapidly prototype and storyboard/wireframe development as part of application design
- Comfortable in configuring and using multiple operating systems (Mac/Windows/*nix)
- Design expertise with Hadoop, Teradata, Looker, and Tableau along with Cognos tools is a plus
- Knowledge of BI tools and statistical packages such as SAS, R or SciPy/NumPy
- Experience with disciplined software development lifecycle
- Ability to communicate at various levels within large organizations
- Knowledge of Big Data tool performance/tuning/measurement and usage criteria to drive appropriate tool selections and use
- Knowledge and/or experience with Health care information domains a plus
- Documented experience working on advanced Big Data solutions is a plus
- Some expertise in designing business intelligence systems, dashboard reporting, and analytical reporting is also a plus
128
Platform / Big Data Engineer Resume Examples & Samples
- Establish project environments
- Maintain and continually improve project infrastructure
- Automate routine and regular activities
- Handle platform configurations
- Perform unit testing of software components and ensure the stability of the platform
- Implement core Big Data platform capabilities
- Adjust platform components and apply hotfixes
- Control and monitor the platform performance
- 1+ years of experience (hands-on) with Big Data
- Hadoop, including Cloudera (must have)
- Scala
- Unix Red Hat scripting (must have)
- Ability to write complex SQL queries
- Batch management software (Control-M, rundeck)
- Jupyter Hub/Notebooks
129
Big Data Engineer Resume Examples & Samples
- Eagerness to learn and gain experience with the following platforms: Hadoop – Hortonworks, Cloudera, MapR (HDFS, Hive, YARN, HBase, Spark, Flume, etc.), Vertica, MySQL, XtraDB, MongoDB, PostgreSQL and similar
- Public cloud platforms – AWS and Azure
- Kerberos platform administration & integration (IPA, AD)
- Good Linux skills (as required for Hadoop/DB platform administration)
- Networking concepts understanding
- Backup/Recovery, Data Replication and Disaster Failover concepts and implementations
- Autonomous, strong ownership and drive to get things done
- Eagerness to learn and adaptability to changes
130
Big Data Engineer Resume Examples & Samples
- Experience with NoSQL databases, such as HBase, Cassandra, MongoDB, MarkLogic
- Experience with Spark; Experience with building stream-processing systems using solutions such as Storm or Spark-Streaming a strong plus
- Expert knowledge of Java and Spring; Knowledge of CQL and XQuery a strong plus
- Master's degree in computer science or similar
- 5+ years of experience in the big data field
131
Big Data Engineer Resume Examples & Samples
- Strong English language communication skills; ready to face off with finance and business subject matter experts on challenging problems
- Work with truly cutting edge ideas and technology: learn and apply the latest in systems research to solving very large scale problems
- Programming, adhering to standards and best practices
- Unit and peer testing of software code
- Problem investigation, code analysis, debugging and resolution
- Bachelor’s or Master’s degree in Computer Science, Electrical Engineering, Information Systems, Mathematics, or a related field
- Experience working in Apache Hadoop, Pig, Spark, Cassandra, HBase, Storm, Kafka ecosystem
- 1+ years of experience in designing and developing data processing pipelines using distributed computing technologies such as Hive, Spark, and Pig
- 2+ years of Experience with programming languages such as Java, Python, Scala
- Background and experience in applying machine learning, text analytics, NLP and data mining with a good understanding of unsupervised and supervised learning methods
- Experience with batch/stream processing on Hadoop MapReduce, Cascading/Scalding, Spark, Storm
- Experience building high-quality interactive web-based applications using Angular, React, Highcharts
- Experience with code/build/deployment tools such as Git, Maven, Jenkins
132
Big Data Engineer Resume Examples & Samples
- Manage a small team of developers
- 2+ years of experience in designing and developing data processing pipelines using distributed computing technologies such as Hive, Spark, and Pig
- 4+ years of Experience with programming languages such as Java, Python, Scala
133
Big Data Engineer Lead-intelligent Solutions Resume Examples & Samples
- Install Hadoop Data Engineering Products
- Support product through all development phases until the product is handed over to Production Operations
- Evaluate Data Engineering product, working closely with vendor
- Assist with planning infrastructure forecasts and demand management for applications on an ongoing basis
- 4+ years of IT experience required with 2+ years’ experience in Hadoop preferred
- Experience working with multiple Relational (Oracle, SQL Server, DB2 etc) and NoSQL Databases (MongoDB, Cassandra) preferred
- Experience with Infrastructure Management, Project Management and Demand Management preferred
134
Big Data Engineer Resume Examples & Samples
- Design, implement and deliver complete analytic solutions for customers
- Architect, build and maintain high performing ETL processes, including data quality and testing
- Keep up to date with advances in data persistence and big data technologies and run pilots to design the data architecture to scale with the increased data sets
- Develop and improve the data architecture for Redshift and Hadoop/EMR Cluster
- Develop analytics with a mind toward accuracy, scalability and high performance
- Provide technical guidance and thought leadership to other programmer analysts in the team
- Bachelor’s Degree in Computer Science or equivalent degree
- 5+ years of hands-on experience designing and developing on distributed architecture systems handling tera- to petabyte-scale data using open source software
- 2+ years of knowledge of modern distributed architectures and compute / data analytics / storage technologies on AWS or related technologies
- Knowledge of a programming language such as Java/Python/Scala
- Understanding of architectural principles and design patterns / styles using parallel large-scale distributed frameworks such as Hadoop / Spark
- Deep knowledge of RDBMS (MySQL, PostgreSQL, SQL Server) and NoSQL databases such as HBase, Vertica, MongoDB, DynamoDB, Cassandra
- Demonstrates broad knowledge of technical solutions, design patterns, and code for medium/complex applications deployed in Hadoop
- Knowledge of working in a UNIX environment with a fair amount of shell scripting and Python experience. Knowledge of Spring, Java, and MapReduce is expected
- Hands on experience designing, developing, and maintaining software solutions in Hadoop Production cluster
- Experience in architecting and building data warehouse systems and BI systems including ETL
- Experience in performance troubleshooting, SQL optimization, and benchmarking. Strong architectural experience in context of deploying cloud-based data solutions
- Thorough understanding of service-oriented architectures and data processing in high-volume applications. Full SDLC experience (requirements gathering through production deployment)
- Outstanding analytical skills, excellent team player and delivery mindset
- AWS Redshift experience a plus
- Alteryx, Datameer
135
Lead Big Data Engineer Resume Examples & Samples
- 3+ years of hands-on experience extracting data from multiple structured and unstructured feeds by building and maintaining scalable ETL pipelines on distributed software systems
- 3+ years of hands-on implementation experience designing and developing high performance and scalable applications using NoSQL stores (like MongoDB, Cassandra)
- 5 years of experience with one of the following: Java/Python/Scala
136
Big Data Engineer Resume Examples & Samples
- Interface with PMs, business customers, and software developers to understand requirements and implement solutions
- Collaborate with both Retail Finance and central FP&A teams to understand the interdependencies and deliverables
- Experience building large-scale applications and services with big data technologies
- 4+ years of experience in designing and developing analytical systems
- Expertise in SQL, DB Internals, SQL tuning, and ETL development
- Experience with scripting languages such as Python, Perl, etc
- Experience with full software development life cycle, including coding standards, code reviews, source control management, build processes, and testing
- Experience with programming languages such as Java, C++, Scala, etc
137
Big Data Engineer Resume Examples & Samples
- BS or MS degree in an Engineering or Technical discipline or equivalent experience
- 4+ years of Software Development experience
- Ability to effectively interpret technical and business objectives and challenges and articulate solutions
- Willingness to learn new technologies and exploit them to their optimal potential
- Familiarity with Agile Practices
- Hands-On Big Data experience in Software Development, Application Development or Data Management with
138
Big Data Engineer Senior Manager Resume Examples & Samples
- Bachelor's degree in Computer Science, Engineering, Technical Science or 8 years of IT/Programming experience
- Minimum 3+ years of designing, building and operationalizing large scale applications using Hadoop and NoSQL components - HDFS, HBase, Hive, Sqoop, Flume, Spark, MapReduce, Kafka, Cassandra, MongoDB etc. in production
- Minimum 3+ years of organizing and architecting data at scale for Hadoop/NoSQL data stores
- Minimum 2+ years of MapReduce coding, including Java, Python, Pig programming, Hadoop Streaming, HiveQL for data analysis of production applications
- Minimum 3+ years designing and implementing relational data models working with RDBMS (preferred)
- Minimum 3+ years working with traditional as well as Big Data ETL tools (preferred)
- Minimum 3+ years of experience implementing large scale cloud data solutions using AWS data services e.g. EMR, Redshift
- Minimum 3+ years of experience designing and building REST web services
- Minimum 3+ years of building and deploying Java apps to production
- Minimum 2+ years of administering and managing large production Hadoop/NoSQL clusters
139
Big Data Engineer Resume Examples & Samples
- Maintain and support Big Data platform for Micron
- Develop or identify, evaluate, recommend and maintain Big Data reporting and visualization applications for Micron
- Design, develop, and maintain data ingest solution for Big Data platform
- Work with Data Science within Micron to automate and maintain reliable data analytic and mining solutions for Big Data platform
- Ability to assess current IT environments and make recommendations to increase capacity needs
- Participate in 24x7 oncall rotation for operation support of Big Data platforms and solutions
- Communicate, collaborate and coordinate on Big Data related activities to various level of stakeholders and senior management
140
Big Data Engineer / Developer Senior Resume Examples & Samples
- Relevant Experience or Degree in: Degree in a related field and/or the equivalent in training and experience
- Typically Minimum 6 Years Relevant Exp
- System design and development in a client server environment
- Computer Science, Mathematics, Information Systems or Engineering
- Experience in an analytical, research, or project management environment in the credit card or software industry; programming experience with .Net development, including C# and C++
- Skills / Knowledge - Having wide-ranging experience, uses professional concepts and company objectives to resolve complex issues in creative and effective ways. Some barriers to entry exist at this level (e.g., dept./peer review)
- Job Complexity - Works on complex issues where analysis of situations or data requires an in-depth evaluation of variable factors. Exercises judgment in selecting methods, techniques and evaluation criteria for obtaining results. Networks with key contacts outside own area of expertise
- Supervision - Determines methods and procedures on new assignments and may coordinate activities of other personnel (Team Lead)
- .Net - C/C++, SQL, Java
- Lotus Notes - Lotus Notes Designer, Lotus Notes Scripting
- Web Application Technology - Atlassian JIRA, Maven, Groovy scripts, JavaScript, jQuery, Subversion, CollabNet, VisualSVN
141
CFS QR Big Data Engineer Resume Examples & Samples
- Exposure to Global Custody or the Fund Services business or like areas is a plus
- Development and management of the ingestion of various data sources into a global Data Lake
- Collaborating with Quants to convert their models into strategic software whilst enabling the quants to adjust and adapt their models at will. Working within IT to align as closely as possible with the rest of the business, whilst still ensuring that the quants are able to innovate, research, and act with agility is essential
- The successful candidate will have experience of effectively modelling data within an HBase environment (see the HBase sketch after this list). They will also possess a good working knowledge of how to performance-tune solutions
- Define and manage best practice in configuration and management of the data lake
- Strong technical skills across the Hadoop stack (HDFS, HBase, Phoenix, Hive, Pig) and SQL
- Strong technical skills in Python and good working knowledge of Java
- Database knowledge should extend to PL/SQL, SQL and Transact-SQL; Oracle is a plus
- Experience handling data in various file types: flat files, XML, Parquet, data frames, etc
- Production support and application maintenance knowledge
- Experience in Web UI/ visualisation technologies such as JavaScript, React.js, HTML5 and Angular.js. Haskell for portraying large and complex data sets will be very useful
- Working knowledge of Qlikview or other BI tools
- Worked with Atlassian tools such as Jira, Confluence, Fisheye, Crucible and Stash
- Exposure to broader web technologies such as RESTful APIs
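To make the HBase data-modelling and Python expectations above concrete, here is a hedged sketch using the happybase client, which talks to HBase through its Thrift server; the host name, table name, column family, and composite row-key scheme are assumptions chosen only for illustration.

    import happybase

    # Connect to the HBase Thrift server (hostname is a placeholder)
    connection = happybase.Connection("hbase-thrift.example.com")
    table = connection.table("positions")  # hypothetical table with column family 'cf'

    # Row keys are often composite (e.g. <account>#<date>) so related rows sort together for range scans
    row_key = b"ACC123#2017-03-01"
    table.put(row_key, {b"cf:quantity": b"100", b"cf:instrument": b"XYZ"})

    # Read the row back
    print(table.row(row_key))

    # Scan all rows for one account: '#' sorts just before '$', so this bounds the prefix
    for key, data in table.scan(row_start=b"ACC123#", row_stop=b"ACC123$"):
        print(key, data)

    connection.close()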
142
Big Data Engineer Resume Examples & Samples
- Understand concepts of big data and parallel computing, and develop scalable solutions using new big data technologies
- Maintain and scale the data science cluster
- Build tooling to aid data scientists
- Work in an environment with a significant number of unknowns – both technically and functionally inherent in such a new venture
- BS or MS degree or equivalent experience relevant to functional area
- 5 years of software engineering or related experience
- 3 years’ experience with relational databases and/or NoSQL database technology and big data technologies such as Spark, Hbase, Kafka
- Experience with Ansible and Docker
- Experience with REST WebServices
- Good knowledge of Linux internals, system performance, and troubleshooting
- Exposure to data science and python a plus
143
Big Data Engineer Resume Examples & Samples
- As a member of the Core Data Services team, participate in development and optimization of data pipelines
- Stay on top of evolving technology (streaming, etc.) to suggest, prototype and implement improvements to the data architecture
- Assure data quality and consistency of produced datasets (see the data-quality sketch after this list)
- Hands-on experience building scalable data pipelines at multi-terabyte scale using Spark with Scala/Python
- Expert in performance tuning of processes in a Hadoop-based ecosystem
- Expert in Scala/Python/SQL; Java and C++ are a plus
- Experience with streaming and queuing technologies
- Commitment to best software engineering practices (unit testing, code reviews etc.) and agile process
- Passion for data quality
- Great communication skills – this position will be in the middle of discussions with the product owner, other engineers, QA and DevOps, so the ability to understand requirements and articulate solutions is critical
- Nice to have: Experience in media/advertising business. Experience with statistical analysis, data mining, machine learning
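As a small example of the data-quality focus above, the hedged sketch below runs basic checks (row count, per-column null counts, duplicate keys) over a hypothetical Parquet output of a pipeline; the path and the key column name are assumptions.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("dq-checks").getOrCreate()

    # Hypothetical output of an upstream pipeline
    df = spark.read.parquet("/data/curated/impressions/")

    total_rows = df.count()
    print(f"row count: {total_rows}")

    # Null counts per column
    null_counts = df.select(
        [F.sum(F.col(c).isNull().cast("int")).alias(c) for c in df.columns]
    )
    null_counts.show()

    # Duplicate check on the assumed primary key column 'impression_id'
    duplicates = total_rows - df.select("impression_id").distinct().count()
    print(f"duplicate keys: {duplicates}")

    spark.stop()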
144
Big Data Engineer Resume Examples & Samples
- Proficiency in Spark, HBase, Java MapReduce development and experience with Hadoop or other data processing technologies required
- Knowledge of Hadoop-related technologies such as Azkaban, Oozie, Hive and Pig is a plus
- 5+ years of programming experience, preferably in Java, Scala or C/C++
145
Big Data Engineer Resume Examples & Samples
- Develop ETL processes to populate a Hadoop data warehouse with large datasets from a variety of sources, and integrate Hadoop within an SQL Server data warehousing environment
- Create MapReduce programs in Java or Python, and leverage tools like Pig and Hive to transform and query large datasets (see the word-count sketch after this list)
- Assist with Hadoop administration to ensure the health and reliability of the cluster
- Monitor and troubleshoot performance issues on a Hadoop cluster
- Follow the design principles and best practices defined by the team for data warehousing techniques and architecture
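To illustrate the MapReduce item above, here is the classic word-count example written in Python for Hadoop Streaming, where the mapper and reducer are plain scripts that read stdin and emit tab-separated key/value pairs; the input/output paths and the submission command in the trailing comment are illustrative only.

    # mapper.py -- emit (word, 1) for every word read from stdin
    import sys

    for line in sys.stdin:
        for word in line.strip().split():
            print(f"{word}\t1")

    # reducer.py -- sum counts per word (Hadoop Streaming delivers keys in sorted order)
    import sys

    current_word, current_count = None, 0
    for line in sys.stdin:
        word, count = line.rstrip("\n").split("\t")
        if word == current_word:
            current_count += int(count)
        else:
            if current_word is not None:
                print(f"{current_word}\t{current_count}")
            current_word, current_count = word, int(count)
    if current_word is not None:
        print(f"{current_word}\t{current_count}")

    # Illustrative submission (jar location and paths are placeholders):
    # hadoop jar hadoop-streaming.jar -input /data/raw -output /data/wordcount \
    #     -mapper mapper.py -reducer reducer.py -file mapper.py -file reducer.py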
146
Big Data Engineer Resume Examples & Samples
- Experience building platforms and deploying cloud-based tools and solutions with technologies like AWS EMR, RDS, and Kinesis (see the EMR sketch after this list)
- Automate environment configuration and deployment with tools like Chef and Cloud Foundry
- 5+ years’ work experience
- 1+ years JVM-targeted development (Scala / Java)
- 1+ years’ experience with various tools and frameworks that enable capabilities within the big data ecosystem (Hadoop, Kafka, NIFI, Hive, YARN, HBase, NoSQL)
- 1+ years’ experience building or supporting AWS-based solutions
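For the AWS items above, here is a hedged boto3 sketch that launches a small EMR cluster with Spark and Hive installed; the region, EMR release label, instance types, log bucket, and IAM role names are assumptions that would differ per account.

    import boto3

    emr = boto3.client("emr", region_name="us-east-1")  # region is an assumption

    response = emr.run_job_flow(
        Name="example-spark-cluster",
        ReleaseLabel="emr-5.12.0",               # illustrative EMR release
        Applications=[{"Name": "Spark"}, {"Name": "Hive"}],
        LogUri="s3://my-emr-logs/",              # placeholder bucket
        Instances={
            "MasterInstanceType": "m4.large",
            "SlaveInstanceType": "m4.large",
            "InstanceCount": 3,
            "KeepJobFlowAliveWhenNoSteps": True,
        },
        JobFlowRole="EMR_EC2_DefaultRole",       # default EMR roles assumed to exist
        ServiceRole="EMR_DefaultRole",
        VisibleToAllUsers=True,
    )

    print("cluster id:", response["JobFlowId"])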
147
Big Data Engineer Resume Examples & Samples
- Build data pipelines using Oracle, Hadoop, Pig, Hive, HBase, Spark, and Salesforce API
- Identify incomplete data, improve quality of data, and integrate data from several data sources
- Design and develop tailored data structures in database and Hadoop
- Query Hadoop/Hive and Oracle EDW to provide data-sets for data scientists
- Program data movement, harmonization, integration and loading workloads in system-relevant programming languages like Java, Python, etc., and utilize big data tools like Talend, Informatica, etc
- Quickly create functioning ETL prototypes to address rapidly changing business needs (see the PySpark ETL sketch after this list)
- Revamp prototypes to create production-ready data flows
- Support Data Science research by designing, developing, and maintaining all parts of the Big Data pipeline for reporting, statistical and machine learning, and computational requirements
- Perform data profiling, complex sampling, statistical testing, and testing of reliability on data
- Endorse open source tools & techniques in development, management, and release process, utilizing in-house software effectively
- Harness operational excellence & continuous improvement with a can-do attitude
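As an example of the kind of quick ETL prototype described above, here is a hedged PySpark sketch that ingests a raw CSV extract, applies simple cleansing and harmonization, and writes a partitioned Parquet output to a curated zone; all paths, column names, and the date format are hypothetical.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("etl-prototype").getOrCreate()

    # Extract: raw CSV dropped by an upstream system (path is a placeholder)
    raw = spark.read.option("header", True).csv("/landing/sales/2017-06-01.csv")

    # Transform: trim identifiers, standardise dates, drop rows missing the business key
    clean = (raw
             .withColumn("customer_id", F.trim(F.col("customer_id")))
             .withColumn("sale_date", F.to_date(F.col("sale_date"), "MM/dd/yyyy"))
             .dropna(subset=["customer_id", "sale_date"]))

    # Load: write to the curated zone, partitioned by date for efficient downstream queries
    (clean.write
     .mode("overwrite")
     .partitionBy("sale_date")
     .parquet("/curated/sales/"))

    spark.stop()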
148
Big Data Engineer Resume Examples & Samples
- Engineering Solutions - Ability to plan, define, develop and launch small engineering systems in support of core organizational functions and business processes
- Requirements Engineering - Solid understanding of the process of determining that the product or process being designed is suitable for the purpose as defined by the stakeholders of the product or process
- Develops Systems And Processes - Identifies and implements effective processes and procedures for accomplishing work. Solid understanding of how to develop systems and process. Applies systems and processes to improve and complete work at a team/departmental level. Works to eliminate system and process inefficiencies and roadblocks. Demonstrates consistent use of structured work documentation (e.g. communication channels, work steps, procedures, checklists, or flow charts)
- Analyze Issues - Solid understanding of how to analyze issues. Demonstrates use of analysis skills to learn and analyze information in a timely way. Understands complex concepts and problems and identifies how they relate to key processes. Applies accurate logic in solving problems. Differentiates what is critical and what is important while not getting bogged down in details
- Communication - Written And Verbal - Is able to effectively and clearly communicate in both written and verbal means
- 1-3 years experience with Big Data open source tools or equivalent College project work
- Position is accountable for development and maintenance of a Hadoop "Big Data" distributed computing cluster in a cloud environment
- Candidates are expected to be proficient in this environment and its tools to effectively and efficiently process, store and make data available to analysts and other consumers
- Candidates must have a strong desire to grow expertise in Big Data tools and methods, and stay up to date on emerging software and technologies
- Document/Define and Develop for Machine Data
149
Big Data Engineer Resume Examples & Samples
- Design, develop, maintain, and test big data solutions
- Build large-scale data processing systems using cloud computing technologies
- Build complex big data applications with a focus on collecting, parsing, managing, and analyzing large sets of data to turn information into insights
- Self-starter, able to learn new technologies and systems on your own
- Experience with Impala, Athena, Elasticsearch, DynamoDB, and/or comparable technologies
150
Big Data Engineer Resume Examples & Samples
- Drive architecture of data services built on top of Penske’s big data platform and other data sources
- Design and implement scalable Data-as-a-Service (DaaS) solutions for various application needs. This includes conceptualization, storyboarding, documentation of use cases, platform selection, information architecture, service design, development, testing, and deployment of the proposed solution
- Analyze multiple sources of structured and unstructured data to propose and design data service solutions for scalability, high availability and fault tolerance
- Develop conceptual, logical and physical design for various data types and large volumes
- Clearly articulate the pros and cons of various technologies and architectural options
- Implement security, encryption best practices for Data services
- Architect, design and implement high performance large volume data integration processes and other back-end services
- Work closely with internal customers, vendors and partners at a technical and user level, to design and produce solutions
- Develop expertise in the machine data coming out of Penske’s fleet of trucks, onboard devices and other IoT sub-systems
- Other projects as assigned by the supervisor
- 5-7 years of experience as a technology leader designing and developing data services
- 2 years of experience in designing and developing high-volume mission critical data integration solutions in big data environment
- 2 years of hands-on technical experience in working with big data architectures, including Hadoop, Map/Reduce or other big data frameworks
- Expertise in programming languages like Java and Python
- Experience with implementing data services in the cloud. Experience with one of the large cloud-computing infrastructure solutions like Amazon Web Services
- Deep expertise in developing and supporting data services using Spring Framework and RESTful API
- Broad understanding and experience of real-time analytics, NoSQL technologies (e.g. GemFire, HBase, Cassandra, MongoDB), data modelling, data management, and analytical tools (e.g. SAS, R)
- Fluency / expertise in the following are preferred
- Pivotal Big Data Suite – Greenplum, Gemfire, Spring XD and Rabbit MQ
- Environments: Amazon Web Services, Cloud Foundry
- Big Data Technologies – Hadoop, Kafka, ZooKeeper, Hbase, Hive etc
- Languages: Python, Ruby, Ruby on Rails, Groovy and shell scripts
- Middleware Platforms: ESB, API Management, and WebSphere
- Methodologies: Agile - Kanban, Test Driven Development
- Tools: Git, Maven, Continuous Integration, SVN, Ant, Eclipse, IntelliJ IDEA and JIRA
- Architectures/Frameworks: Angular JS, Bootstrap, jQuery, jQuery Mobile, Reactive Web Design, Micro-Service Architecture, RESTful Web Services, JUnit, Swagger, OAuth, Single Sign On, SAML, Spring Framework
- Bachelor's degree in Computer Science/Engineering required, higher degrees preferred
- Regular, predictable, full attendance is an essential function of the job
- Willingness to travel as necessary, work the required schedule, work at the specific location required, complete Penske employment application, submit to a background investigation (to include past employment, education, and criminal history) and drug screening are required
151
Big Data Engineer Resume Examples & Samples
- Minimum 2-3 years of experience in data engineering
- Minimum 1-2 years of Java/Scala, MapReduce, Sqoop, Pig, and/or HiveQL experience
- Minimum 1-2 years implementing data engineering solutions in large-scale, distributed environments
- Minimum 1-2 years working with traditional ETL tools and streaming tools such as Kafka, Storm, Spark Streaming, or Informatica
- Minimum 1-2 years working with a Hadoop distribution such as Cloudera, Hortonworks, or MapR
- Ability to meet travel requirements, when applicable
- Eligibility for reliability clearance
- Math or Engineering Background
- Experience with Greenplum, Exadata, Cassandra, Spark, HBase, graphDBs, key-value stores, or NoSQL systems
- Experience using search tools such as Elasticsearch or Solr
- Experience using advanced analytic tools such as Mahout, Scikit-learn, MLLib, or other related toolkits
- Experience with authentication security protocols (Kerberos, Knox or Ranger)
- Data Visualization (Tableau, Qlikview, Matplotlib)
152
Lead Big Data Engineer Resume Examples & Samples
- Experience building software systems according to SOLID principles and following design patterns
- Experience with Big Data ecosystem development, i.e. Spark (Scala), MapReduce, Solr, Pig, Hive, Kafka, and HBase
- Experience with Business Intelligence tools such as QlikView, Tableau, and Cognos
- Experience with Services Oriented Architecture and RESTful web services development
- Experience with both relational and NoSQL data technologies
- Experience in the insurance domain
- MS in Computer Science or related field with 8+ years practical software development experience, preference for full-stack experience
- Experience with Big Data ecosystem development, i.e. Spark (Scala), MapReduce, Solr, Pig, Hive, Flume, Kafka, and HBase
- Experience with Agile development and practices (daily stand ups, Kanban board, retrospectives, etc.)
- Experience in high volume, distributed, event-driven architecture
- Experience with frameworks like Spring, Spring Boot, and Netflix OSS
- Knowledge of graph databases like Titan or Neo4j
- Experience with tools like Docker and Bamboo
- Experience with Domain-Driven Design
- Experience with Continuous Integration / Continuous Delivery
153
Big Data Engineer Resume Examples & Samples
- Excellent communication skills and the ability to work well in a team
- Highly analytical, troubleshooting and problem-solving skills
- Familiarity with common statistical methods and tools
- Familiarity with machine learning methodologies
154
Big Data Engineer Resume Examples & Samples
- Perform architecture design, data modeling, and implementation of CVS Big Data platform and analytic applications
- Translates complex functional and technical requirements into detailed architecture, design, and development
- Work on multiple projects as a technical lead driving user story analysis and elaboration, design and development of software applications, testing, and builds automation tools
- Hands-on experience with “big data” platforms and tools including data ingestion (batch & real time), transformation and delivery in Hadoop ecosystem (such as HIVE, Python, R)
- Individual contributions on strategic initiatives and business critical initiatives
- Proficient in designing efficient and robust Hadoop solutions to improve performance and end-user experience
- Experience in evolving/managing technologies/tools in a rapidly changing environment to support business needs and capabilities
- Experience in Hadoop ecosystem implementation/administration, install software patches & upgrades and configuration
- Experience in conducting performance tuning of Hadoop clusters
- Monitor and manage Hadoop cluster job performance, capacity planning, and security
- Able to perform detailed analysis of business problems and technical environments and use this in designing the solution
- Define and maintain data architecture, with a focus on creating strategy, researching emerging technology, and applying technology to enable business solutions
- Define compute (storage & CPU) estimation formulas for ELT and data consumption workloads from reporting tools and ad-hoc users; a sizing sketch follows this list
- Analyze latest Big Data Analytic technologies and their innovative applications in both business intelligence analysis and new service offerings, adopt and implement these insights and best practices
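The compute-estimation bullet above is the kind of sizing exercise that is easy to make concrete. The following is a minimal Python sketch of such a formula, assuming hypothetical inputs (daily row volume, average row size, HDFS replication factor, and compression ratio); none of these figures come from the sample itself.

    # Hypothetical sizing helper for an ELT workload; every constant here is an assumption.
    def estimate_storage_tb(rows_per_day: int,
                            avg_row_bytes: int,
                            retention_days: int,
                            replication_factor: int = 3,
                            compression_ratio: float = 0.4) -> float:
        """Rough HDFS storage estimate in terabytes."""
        raw_bytes = rows_per_day * avg_row_bytes * retention_days
        stored_bytes = raw_bytes * compression_ratio * replication_factor
        return stored_bytes / 1024 ** 4

    if __name__ == "__main__":
        # Example: 200M rows/day at ~500 bytes/row, retained for two years.
        print(round(estimate_storage_tb(200_000_000, 500, 730), 1), "TB")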
155
Think Big-data Engineer Resume Examples & Samples
- Work with consultant teams on specific customer deliverables as and when required
- Designing and implementing data lakes
- Monitoring performance and advising on any necessary configuration and infrastructure changes
- Debugging and resolving Hadoop (YARN/MapReduce/Spark, etc.) issues
- Management of Hadoop clusters, with all included services, using Apache Ambari, Cloudera Manager, or the MapR Control System
- Experience with Big Data ML toolkits, such as Mahout, SparkML, or H2O
- Experience with any of the following Hadoop distributions: Cloudera/MapR/Hortonworks
- Training/Certification on any Hadoop distribution will be a plus
156
Big Data Engineer Resume Examples & Samples
- Responsible for technical ecosystem (software, data, interfaces, integration) and strategic roadmaps
- Responsible for designing and developing modern, cross-browser compatible user interfaces
- Researching and evaluation of new tools and technologies to solve business problems
- Translates business concepts to technical implementations to drive alignment and decision making
- Work on a geographically dispersed team, embracing Agile and DevOps strategies for themselves and others while driving adoption to enable greater technology and business value
- Effective and efficient utilization of programming tools and techniques
- Mentors others and continually develops themselves
- Familiarity with web services (JAX-RS, JAX-WS, REST, JSON, XML, HTML5, CSS, JavaScript)
- Experience developing with Node.JS
- Open to new ideas and technologies with a strong desire to learn
- Experience with Agile development methodologies and tools to iterate quickly on product changes, developing user stories and working through backlog (XP, Continuous Integration and JIRA a plus)
- Ability to engage subject matter experts and translate business goals into actionable solutions
- Ability to work effectively with business and technical teams
- Ability to identify and drive aligned technical direction with all stakeholders
- Ability to meet deadlines, goals and objectives
157
Big Data Engineer Resume Examples & Samples
- Working in a cross-functional team – alongside talented Engineers and Data Scientists
- Building scalable and high-performant code
- Mentoring less experienced colleagues within the team
- Implementing ETL processes, including cohort building and ETL routine customisation
- Monitoring cluster (Spark/Hadoop) performance
- Working in an Agile Environment
- Refactoring and moving our current libraries and scripts to Scala/Java
- Enforcing coding standards and best practices
- Working in a geographically dispersed team
- Working in an environment with a significant number of unknowns – both technically and functionally
158
R&D Big Data Engineer Resume Examples & Samples
- Very flexible working schedule
- Ticket restaurants and a big canteen with daily menu
- Free parking and sports facilities at the Site
- Private health and life insurance
- Childcare provisions
- Training and Development
159
Big Data Engineer Resume Examples & Samples
- Be able to work under direction and handle several competing priorities simultaneously
- Have excellent communication (both verbal and written), organizational and time-management skills
- Has a Master's Degree in Computer Science, Information Systems/Technology, Software Engineering, or Analytics
- Collaborate with end users and ecosystem partners to deploy Big Data/Analytics solutions in early adopter and production environments
- Create custom software components (e.g., specialized UDFs) and analytics applications
- Extract, transform and load data from various sources with minimal oversight
- Bachelor's Degree in Business Administration or Computer, Natural or Social Science
- 5+ years of technical development experience including 2+ years in a combination of relevant Enterprise data warehouse/Big Data/Analytics area
- Understand and be able to explain the appropriateness, merits and tradeoff of different distributed systems for data capture, processing and analysis
- Extensive knowledge in different programming or scripting languages like Java, Linux, C++, PHP, Ruby, Python and/or R
- Expert knowledge in different (NoSQL or RDBMS) databases such as Oracle, MongoDB, or Cassandra
- Experience in working with ETL tools such as Informatica, Talend and/or Pentaho
160
Big Data Engineer Resume Examples & Samples
- The successful candidate will be ‘hands-on’, creating and maintaining an Apache-stack data platform in AWS that connects to high-volume datasets
- The person in this position will work with and integrate technologies including Hadoop services and infrastructure, NoSQL and relational data stores, SQL and NoSQL query, and open source software for data mining and machine learning
- Manage a cluster of compute nodes, and provide the connectivity to allow analytics code to run seamlessly across a Mesos compute cluster. The successful candidate will be experienced in the Apache big data stack and be responsible for the following
- Producing and maintaining requirements, system architecture, and design documents, in both Word documents and iPython notebooks
- Working in AWS, including server monitoring and management
- Work on a small data science team having diverse software, big data, visualization and analytics skills
- Engaging in new technology development to facilitate innovative and leading edge approaches
- Optimizing NoSQL storage for advanced analytics
- Interfacing with customers and sub-contractors
- Taking an active ‘hands-on’ role with all aspects of the program
- Data Engineering/Data Management/Data Science certifications and/or training
- Experience with advanced IT technologies: Big Data engineering and software interfaces
- Hands on experience in the areas of Big Data design and implementation for data intensive problems (high Volume-Velocity-Variety scenarios) using Hadoop-based infrastructure and services (such as MapReduce, YARN, Mesos, NoSQL stores, Apache Spark, Accumulo)
- Willingness to focus on results and not just technology
- Excellent presentation, written, and verbal communication skills at the engineering level
- Excellent analytical and problem-resolution skills, with attention to detail
- The ability and willingness to travel is required (25-50%)
- Experience in the full data lifecycle areas of: data modeling, ETL, data warehousing, reporting, data exploration and visualization, analytics and data provisioning
- Fourteen years or more of related experience (twelve years with Master’s Degree, or nine years with a PhD)
- Experience desired in the areas of: data models, data standards and compliance
- Working knowledge of the full Systems Engineering life cycle (Requirements, Analysis, Design, Implementation, and Testing) as well as Analytics life cycle (Collect, Curate, Analyze, Act)
- Knowledge of IT terminology, methods, principles, concepts, and theories
- U.S. Citizenship preferred
161
Big Data Engineer Resume Examples & Samples
- Our vision is to consistently offer a world-class marketing effectiveness proposition on a global scale. At its core the purpose of this role is to
- Design, build, optimize, launch and support new and existing data models and ETL processes in production
- Work with internal stakeholders to understand business requirements, work with cross-functional data and products teams and build efficient and scalable data solutions
- Work across multiple teams in high visibility roles and own the solution end-to-end
- Key Accountabilities
- Be an expert in cloud and lead the design and development of big data ETL process
- Utilise recent but proven cloud data-processing platforms to keep evolving the infrastructure
- Set up the big data platform in a way that allows different analytics solutions to be built on top of it
- Be involved in cross-cloud architecture design
- Transform large quantities of data into client insights & actionable recommendations
- Maintain active communications with key contacts working on projects, managing expectations as you go along
- Personal Specification
- Development experience (Java and Python preferred)
- Hands-on experience in SQL and NoSQL
- Hands-on experience in the data warehouse space, custom ETL
- Hands-on experience in LAMP stack development
- Any experience of Amazon Web Services (AWS) would be an advantage
- Knowledge of the media, direct and digital industry would be an advantage
162
Junior Big Data Engineer Resume Examples & Samples
- Continuously design, develop, and test data-driven solutions
- Partner with global engineering, product and operations teams to further incorporate collective innovations
- Follow updates in different FreeWheel backend components and contribute to the long-term roadmap for FreeWheel's data strategy
- Improve performance, availability and scalability of our backend systems
163
Big Data Engineer Resume Examples & Samples
- Manage and monitor large-scale ETL processes for financial account data
- Build and enhance the ETL codebase for added efficiency and capacity
- Work with the Data Extraction and Data Science engineers on normalization and analytics processes
- Monitor key infrastructure components such as databases, SFTP servers, and other parts of the stack
- Perform data transformations using Hive and Denodo to optimize the data retrieval process and performance (see the sketch after this list)
- Experience with ETL (Syncsort DMX-h preferred) and/or other “Big Data” processes
- Experience coding in Python, R (ideally in the context of data processing or data science)
- Strong SQL experience, ideally with MSSQL or PostgreSQL and batch loading processes
- 5+ years of engineering experience in a professional environment, but skill fit is a higher priority to us than just work experience
- B.S. or M.S. in Computer Science, or equivalent work experience
- Excellent communication and collaboration skills. Able to work across multiple teams and discuss technical concepts to business development, operations, and other engineers
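As a rough illustration of the Hive-based transformation work mentioned above, the following PySpark sketch aggregates a raw table into a summary table; the table and column names (raw_accounts, account_summary, balance) are hypothetical and not taken from the sample.

    # Minimal PySpark sketch of a Hive-style transformation; table and column names are hypothetical.
    from pyspark.sql import SparkSession

    spark = (SparkSession.builder
             .appName("account-etl")
             .enableHiveSupport()
             .getOrCreate())

    # Aggregate raw account records into a summary table registered in the Hive metastore.
    summary = spark.sql("""
        SELECT account_id,
               SUM(balance) AS total_balance,
               COUNT(*)     AS txn_count
        FROM raw_accounts
        GROUP BY account_id
    """)
    summary.write.mode("overwrite").saveAsTable("account_summary")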
164
OCT Big Data Engineer Resume Examples & Samples
- Understanding business needs and strategy to develop data science solutions
- Collaborating with other data engineers and business process experts to access existing data in data warehouse and big data environments
- Creating intuitive user interface for interactive data visualization to explain insights from data
- Preparing and delivering powerful presentations with rich data visualizations and meaningful business conclusions
- Traveling and participating in various internal forums for strategy building and to build solutions in collaboration with various manufacturing sites
- B.S. degree or M.S. degree with 2 years’ experience in Computer Engineering, Industrial Engineering, or any other discipline with extensive programming or machine learning work
- Minimum 2 years of experience working in big data and data science projects and teams
- Experience with building analytical web applications and data visualization technologies (Django, Javascript, Bootstrap, D3, etc.) is a plus
- Good grasp of data science concepts with emphasis on machine learning techniques is a plus
- Proficiency with collaborative source code management and documentation tools. (GIT, JIRA, Confluence, etc.)
- Strong communication skills (written, verbal and presentation)
- Willing to do international travel
165
Big Data Engineer Resume Examples & Samples
- Play a central role in the technical architecture, development and delivery of features that would be used by Wireless and Wireline Network engineers
- The ideal candidate will have a passion for creating high-quality user experiences working closely with data scientists, data architects, Network engineers, RF planners and IT teams
- You should have solid experience building dynamic, responsive web pages using JavaScript, CSS3, and HTML5, with libraries such as jQuery and React
- The ideal candidate will have experience working with both Java and Node.js in high-volume, production environments
- Build efficient and reusable front-end systems and abstractions
- Find and address performance issues
- Participate in design and code reviews
- Identify and communicate front-end best practices
166
Big Data Engineer Resume Examples & Samples
- 5+ total years of experience in Big Data; Experience with Apache Spark using python or scala is preferred
- Strong data analysis skills - ability to identify and analyze complex patterns
- Deep and thorough understanding of relational database design concepts
- Mastery of SQL and DataFrame processing using Apache Spark
- Deep Experience with Apache Spark/MapReduce/Hadoop/Oracle/Teradata
- Experience with NoSQL (MarkLogic, Cassandra, Elastic or MongoDB)
- Deep understanding of SQL optimization and performance tuning in Big data
- Extensive programming experience in Python or Scala
167
Big Data Engineer Resume Examples & Samples
- 5+ years of experience with data warehouse technical architectures
- Experience working in IoT is a must, ideally Industrial IoT
- ETL/ELT, analytic tools and data structures
- Experience with relational and star schema data modeling concepts
- Familiarity with big data technologies, like Hadoop and AWS
- Familiarity with advanced analytic tools, such as Anaconda and Spark
- Strong programming skills in Java, Python, Ruby or similar
- Familiarity with reporting tools like OBIEE or Tableau
168
Big Data Engineer Resume Examples & Samples
- Assist with creating system designs and functional specifications for new projects
- Collect and analyze user requirements, create technical designs, and write technical design documentation of proposed solutions
- Develop data processing pipelines, including batch load, micro-batch, live connection, and near-real-time stream analytics (see the micro-batch sketch after this list)
- Assist with the development of data computing platform architectures that are tuned for low latency and high availability
- Demonstrate concepts by developing code and piloting new applications
- Perform proof-of-concept projects
- Participate on the SGS development team to build-out, demonstrate and commission new computing platforms. Collaborate with data architects, data scientists, other data engineers, QA/QC, data governance and project management specialists in a workgroup environment
- Investigate technical issues and provide design solutions for software applications to meet changing user requirements
- Research new technologies and strive to continuously update and improve our analytics and data management systems
- Follow internal standards for source code control and documentation
- Promote team building and innovation
- MS in Computer Science, Mathematics or a related field, plus 2 years of relevant experience preferred; or BS in Computer Science, Mathematics or a related field, plus 4 years of relevant experience
- Minimum 3 years programming experience with Java and Scala
- Experience with Hadoop, including HDFS, Yarn, Hive, HBase, Oozie strongly preferred
- Experience with Spark and Spark data frames a plus
- Experience with Python and/or R a plus
- Experience with relational database management systems, SQL preferred
- Familiarity with software source control tools such as Git, Mercurial and Subversion
- Experience with Microsoft Office products
- Technical experience with systems networking, databases, Client-Server, and multi-tier applications development desired
- Large / enterprise application development experience a plus
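To make the batch/micro-batch/streaming distinction in the pipeline bullet above concrete, here is a minimal PySpark Structured Streaming sketch that processes newly arriving files in 30-second micro-batches; the input path, schema, and trigger interval are assumptions for illustration only.

    # Minimal Structured Streaming micro-batch sketch; path, schema, and trigger are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

    spark = SparkSession.builder.appName("microbatch-demo").getOrCreate()

    schema = StructType([
        StructField("sensor_id", StringType()),
        StructField("value", DoubleType()),
        StructField("event_time", TimestampType()),
    ])

    # Treat newly arriving JSON files as an unbounded stream.
    events = spark.readStream.schema(schema).json("/data/incoming/")

    # Running average per sensor, refreshed every 30 seconds.
    query = (events.groupBy("sensor_id").avg("value")
             .writeStream
             .outputMode("complete")
             .format("console")
             .trigger(processingTime="30 seconds")
             .start())

    query.awaitTermination()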
169
Big Data Engineer Resume Examples & Samples
- At least 3 years of experience in distributed systems design and development using Python/Java/C++/Perl/UNIX scripting
- At least 1 year of experience in Spark/Scala
- At least 1 year of experience in Streaming (Spark preferable) with Spark MLlib
- At least 1 year of experience in one or more of the following platforms: Hadoop (such as Hortonworks, Cloudera, AWS/EMR)
- At least 5 years of experience in data modeling in non-relational databases, such as Cassandra, HBase, MongoDB, etc
- At least 2 years of experience in Big Data architectural concepts
- At least 2 years of experience in one or more of the following technologies: Hadoop, Spark, Streams, etc
- At least 5 years of experience in demonstrating excellent written and oral communication skills
- At least 3 years of experience in programming on the following platforms: Hortonworks, Cloudera, AWS/EMR
170
Big Data Engineer Resume Examples & Samples
- Work as part of a team to design and develop code, scripts, and data pipelines that leverage structured and unstructured data integrated from multiple sources
- Develop and implement the technical design and ensure the end result fulfils the customer’s requirements
- Develop and implement solutions for disparate source data ingestion, transformation, and database loading (see the ingestion sketch after this list)
- Develop and implement solutions for data quality
- Develop and implement solutions to support “Data as a Service (DaaS)” tools and third party applications
- Recommend and establish security policies and procedures for the Hadoop environment
- Develop and implement various strategic initiatives
- Contribute to the development of Architecture Policies, Standards and Governance for the Hadoop and Big Data environment
- Lead the data architecture design and review processes, including planning and monitoring efforts, reviewing deliverables, and communicating to management
- Look to leverage reusable code modules to solve problems across the team, including Data Preparation and Transformation and Data export and synchronization
- Design and develop automated test cases that verify solution feasibility and interoperability, to include performance assessments
- Act as a liaison with Infrastructure, security, application development and testing team
- Help drive cross team design / development via technical leadership / mentoring
- Good understanding of data models in a manufacturing environment including Financials, Supply Chain, Sales Operations, Quality, Manufacturing, Service, and Logistics
- Hands-on experience with data integration technologies and methodologies (middleware, API driven, ETL tools driven, streaming, etc.) in order to bring in disparate data sources (SaaS, SAP, Oracle, SMS, etc.) into Hadoop
- Scripting with Python, Bash, and Perl
- Substantial understanding of reporting and analytics tools
- Be very comfortable with Agile methodologies in order to be able to arrive at difficult engineering decisions quickly
- Experience with creating solutions or solution concepts and defending these with technology councils / architect groups
- 8+ years’ experience in software development with minimum 3 years Java experience
- 8+ years’ experience in leading large scale enterprise data warehouse solutions
- 3+ years’ experience in a wide array of tools in the Big Data domain, including HDFS, Hadoop, Hive, Talend, Impala, Sqoop, Kafka, Hue, Spark, ZooKeeper, and Cassandra
- 5+ years’ DBA and/or Data Modeling experience
- 2+ years’ expertise in setting up, configuration and management of data security (Cloudera Sentry a plus)
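The disparate-source ingestion bullets above can be sketched in a few lines of PySpark. The snippet below is illustrative only; the JDBC URL, credentials, paths, and table names are all hypothetical.

    # Illustrative ingestion of two disparate sources into Hive; all connection details are hypothetical.
    from pyspark.sql import SparkSession

    spark = (SparkSession.builder
             .appName("disparate-source-ingest")
             .enableHiveSupport()
             .getOrCreate())

    # Source 1: a relational extract pulled over JDBC (e.g. an Oracle schema).
    orders = (spark.read.format("jdbc")
              .option("url", "jdbc:oracle:thin:@//dbhost:1521/ORCL")  # hypothetical endpoint
              .option("dbtable", "sales.orders")
              .option("user", "etl_user")
              .option("password", "***")
              .load())

    # Source 2: flat files landed by an upstream SaaS export.
    shipments = spark.read.option("header", True).csv("/landing/shipments/")

    # Standardize and load both into the Hadoop environment as Hive tables.
    orders.write.mode("overwrite").saveAsTable("staging.orders")
    shipments.write.mode("overwrite").saveAsTable("staging.shipments")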
171
Principal Big Data Engineer Resume Examples & Samples
- As a Principal Big Data Engineer, you will be an integral member of our data ingestion and processing platform team responsible for architecture, design and development
- Having the dynamic ability to adapt to conventional big-data frameworks and tools with the use-cases required by the project
- Ability to communicate with research and development teams and data scientists, finding bottlenecks and resolving them
- 3+ years of experience in design and implementation in an environment with hundreds of terabytes of data
- 4+ years of experience with large data processing tools such as: Hadoop, HBase, Elastic Search,
- 3+ years of experience with Java
- Can-do attitude on problem solving, quality and ability to execute
172
Lead Big Data Engineer Resume Examples & Samples
- Lead engineering team to design, develop, and test data-driven solutions
- Work with different teams to design tech solution
- Drive the optimization, testing and tooling to improve data quality
173
Big Data Engineer Resume Examples & Samples
- Perform hands-on architecture, design, and development of systems
- Serve as a senior member of ProKarma’s Delivery Team that designs and develops business intelligence applications
- Own all technical aspects of software development for assigned applications
- Split time between writing code and testing, in support of product/platform release sprints
- Collaborates with technical product managers contributing to blueprints, and assisting with annual planning of feature sets
- Liaises with technical product owner to help prioritize items in backlog for ongoing sprints
- Develops and documents technical and functional specifications and analyzes software and system processing flows
- Bachelor’s degree in Engineering or Computer Science or Master’s degree in Computer Applications
- 5+ years of total experience required
- Strong experience with Java, SQL, and Scripting languages like Shell, Perl, Python, etc
- Experience in Big Data technologies such as MapReduce, Hadoop, Hive, Pig, Sqoop, HBase, Flume, YARN, and JMeter is required
- Should have a good understanding of TD performance tuning concepts
- Experience with Web Services design and implementation using REST and SOAP
- Experience with multi-tiered systems, algorithms and relational databases
- Experience with Java, J2EE, JavaScript, jQuery, Servlets, JSP, JDBC, XML, Mongo, Spring, Struts, Jenkins, Tomcat, JBoss/WebSphere, and automated testing
- Experience with Agile rapid application development methods
174
Lead Big Data Engineer Resume Examples & Samples
- Advance the cloud architecture for data stores; work with the Client Cloud engineering team on automation; help operationalize cloud usage for databases and for the Hadoop platform
- Analyze vendor suggestions/recommendations for applicability to Client's environment and design implementation details
- Bachelor's degree; Preferably in Computer Science or Information Systems
- Ten or more years of overall IT/DBMS/Data Store experience
- Three or more years of experience in big data, data caching, data federation and data virtualization management, including experience in leveraging Hadoop
- Familiarity with “IaaS” and “DBaaS” service-oriented concepts preferred
175
Big Data Engineer Resume Examples & Samples
- 5+ years in infrastructure design across a broad range of technologies (middleware, database, web, load balancers, firewalls, etc)
- 5+ years in Linux - Must be capable of installing Linux, understanding RAID options, diagnosing network, I/O, memory, and CPU bottlenecks
- 7+ years in large scale systems design and engineering (supporting 10k+ users and 1MM+ daily transactions with extremely high availability/resiliency)
- 2+ years in Solr/Lucene
- 2+ years in Hadoop ecosystem including Map-Reduce, Hive, and Pig
- 2+ years in NoSQL Big Data product and information architecture including products such as Cassandra and Mongo (or other similar key-value stores)
- 3+ years in at least one of Python/Bash/Perl/Ruby scripting environments
- 3+ years in Java development and related frameworks (Spring, Struts, SOAP, XML, REST, etc)
- 3+ years in Data Modeling & Data Migration/ETL (Extract-Transform-Load) functions for large data stores (Relational and/or Unstructured data)
- 1+ years in BI tools and reporting software (Pentaho, Cognos, etc)
- Deep understanding of Relational vs NoSQL distributed database architectures (for example - explain fundamental differences between Oracle RAC and MongoDB)
- Ability to explain CAP theorem and its applicability to different problems
- Previous role that included operational/production support of a critical environment (customer impact if down)
- Contributors to open-source projects will be given high consideration (show us your work in GitHub)
- Cassandra, MongoDB, or Hadoop (in order of preference)
176
Lead Big Data Engineer Resume Examples & Samples
- Builds end-to-end data processes from sourcing to loading of data modeled tables. This can include: locating needed source data, creating data extraction processes, data profiling, creating tables and files for storing data, defining and building data cleansing and imputation routines, mapping multiple sources to a common format, transforming data using programming and business rule frameworks, validating data changes using SQL, and creating and loading tables and files
- Designs application solutions for simple to medium complex data processing requirements
- Completes production deployment tasks including code promotion and documentation
- Supports and troubleshoots production processes when errors or failures occur
- Lead data analyst in a business department using SQL Server or Access, with the end-to-end responsibilities described above
- Preferred: Data or software engineer in an analytics or data warehouse team with Unix programming skills
- Hadoop fundamentals and architecture; usage of Hadoop products like Atlas, Ambari, etc. to perform job duties
- Unix commands and scripting
- SQL programming in Hive and Spark
- Metadata-driven code development for reusable code patterns (code functions are generated from metadata, configuration files, etc.); see the sketch after this list
- Bachelor's Degree or Two Year Technical Program with a Specialization In Programming
- B.S. preferred in Computer Science, Information Systems, or other related field
- Analytics experience implementing dimensional model concepts, analytical master data management processes, and data prep work for data scientists
- Experience with creating tables and loading data into relational database platforms (Oracle, etc.)
- Experience using or providing useful data to dashboard and reporting tools
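The metadata-driven bullet above describes generating cleansing logic from configuration rather than hand-writing it per column. A minimal PySpark sketch of that idea follows, assuming a hypothetical rule mapping and input path; in practice the mapping would come from a metadata table or configuration file.

    # Minimal sketch of metadata-driven column cleansing; the rule mapping and paths are made up.
    from pyspark.sql import SparkSession, functions as F

    # In practice this mapping is read from a configuration file or metadata table.
    COLUMN_RULES = {
        "customer_name": "trim",
        "signup_date":   "to_date",
        "revenue":       "cast_double",
    }

    RULE_FUNCS = {
        "trim":        lambda c: F.trim(F.col(c)),
        "to_date":     lambda c: F.to_date(F.col(c)),
        "cast_double": lambda c: F.col(c).cast("double"),
    }

    def apply_rules(df, rules):
        """Build cleansing logic from metadata instead of hand-written per-column code."""
        for column, rule in rules.items():
            df = df.withColumn(column, RULE_FUNCS[rule](column))
        return df

    spark = SparkSession.builder.appName("metadata-driven-prep").getOrCreate()
    raw = spark.read.option("header", True).csv("/landing/customers/")  # hypothetical path
    apply_rules(raw, COLUMN_RULES).write.mode("overwrite").parquet("/curated/customers/")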
177
Big Data Engineer Resume Examples & Samples
- Lead, design, develop, document, and test big data solutions
- Deliver solutions using an Agile development model
- Create quality deliverables to communicate technical solutions to appropriate audiences
- Understand issues, problem solving and design/architect solutions
- Install and integrate various big data applications
- Configure and troubleshoot issues in big data framework
- Build and collaborate with business and technical teams to deliver software
- Learn continuously, leveraging training resources and self-directed training, sharing knowledge and skills with others
- Provide mentoring and leadership to more junior resources
- Passion for technology and willingness to learn is required
- Have the ability to work in a fast-paced and dynamic work environment and be able to produce efficient and robust solutions
- High energy, confidence, and agility to drive a team
- Candid and direct communication
- A creative thinker who can bring in new ideas and innovations to the company
- Bachelors in information technology, computer science, application development, programming, or related degree and/or equivalent work experience
- 6+ years of Java development experience in large scale enterprise development
- 2+ years of Scala and Python programming
- 2+ years of development experience in the field of big data in a petabyte-scale environment (e.g. Hadoop, Kafka, Couchbase, Cassandra)
- 2+ years of Hadoop, Map-Reduce, HDFS and Hive
- 2+ years of Spark, Spark Streaming and Spark SQL
- Strong in real-time analytics using stream processing frameworks such as Spark / Storm (mandatory); see the sketch after this list
- Expert knowledge of Big Data querying tools, such as Spark, Pig, Hive, and Impala
- Advanced knowledge in Data Structure, OOP/OOD, SQL/NoSQL Database
- Strong in multithreaded development and advanced performance tuning
- Knowledge of how to assess the performance of data solutions, how to diagnose performance problems, and tools used to monitor and tune performance
- Excellent communication skills with both Technical and Business audience
- Travel or hospitality industry experience
- Experience in the following: C, C++, Perl, or PHP
- Experience in working with Kafka messaging system
- Knowledge of Sqoop, Flume, or NiFi
- Experience in graphical interface design and development
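As a sketch of the real-time analytics requirement above, the snippet below reads a Kafka topic with Spark Structured Streaming and computes a windowed count; the broker address and topic name are hypothetical, and the spark-sql-kafka connector package is assumed to be available on the cluster.

    # Minimal Kafka + Structured Streaming sketch; broker, topic, and window size are hypothetical.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("realtime-analytics").getOrCreate()

    # Read a Kafka topic as an unbounded DataFrame (requires the spark-sql-kafka package).
    events = (spark.readStream
              .format("kafka")
              .option("kafka.bootstrap.servers", "broker1:9092")  # hypothetical broker
              .option("subscribe", "booking-events")              # hypothetical topic
              .load()
              .selectExpr("CAST(value AS STRING) AS payload", "timestamp"))

    # Count events per one-minute window as a simple real-time metric.
    counts = (events
              .withWatermark("timestamp", "5 minutes")
              .groupBy(F.window("timestamp", "1 minute"))
              .count())

    (counts.writeStream
           .outputMode("update")
           .format("console")
           .start()
           .awaitTermination())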
178
Big Data Engineer Resume Examples & Samples
- Designing and developing software applications, testing, and building automation tools
- Designing efficient and robust Hadoop solutions for performance improvement and end-user experiences
- Working in a Hadoop ecosystem implementation/administration, installing software patches along with system upgrades and configuration
- Conducting performance tuning of Hadoop clusters while monitoring and managing Hadoop cluster job performance, capacity forecasting, and security
- Defining compute (storage & CPU) estimation formulas for ELT and data consumption workloads from reporting tools and ad-hoc users
- Analyzing Big Data Analytic technologies and applications in both business intelligence analysis and new service offerings, adopting and implementing these insights and standard methodologies