Database Engineer
iHeartMedia
San Antonio, TX, United States
Position Summary:
iHeartMedia is rapidly evolving its technology platform toward a distributed data and microservices-based architecture. We are actively seeking highly skilled and motivated data engineering professionals to join our product development and delivery teams – located in New York, NY and San Antonio, TX – to deliver highly scalable, open-source based products and platforms on the Cloud.
The Senior Data Engineer will be a highly impactful individual contributor who will play an integral part of the technology delivery team in designing and developing data products and platforms – both on premise in our internal data center and on the Cloud [Cloud vendor TBD].
This entails (a) developing data extraction, transformation, and load routines on a combination of RDBMS (MS SQL Server) and big data environments; (b) interfacing with and providing relevant, appropriate data to the services layer; and (c) enabling enterprise-wide data science, business intelligence, and reporting capabilities.
Responsibilities
• Collaborate with product managers and architects to create the solution roadmap, requirements, and design.
• Design and develop data acquisition, transformations, and data integration schemas for large scale media consumption and advertising analytics.
• Utilize commercial and open-source software to interface big data and relational solutions.
• Design and implement solutions for metadata, data quality, and privacy management.
• Collaborate with subject-matter experts to design and enable ad hoc data analysis and a robust data consumption platform.
• Support the analytics team on data presentation and reporting.
• Design, develop, and deploy repeatable processes to enable end-to-end automation – with emphasis on continuous integration and deployment (CI/CD).
• Collaborate with architects and engineers to understand the technology solution roadmap and technical requirements, with a focus on business outcomes.
• Contribute toward the long-term vision for IaaS/PaaS/SaaS options across Cloud vendors (AWS, Google, Azure).
Qualifications
• Minimum 8 years of hands-on development experience with commercial data warehouse technologies (Microsoft, Oracle, Netezza, Teradata, etc.).
• Minimum 2 years of hands-on experience with Hadoop and the associated Apache open-source ecosystem (Hive, Pig, MapReduce, HBase, Sqoop, Spark, etc.).
• 5 or more years of hands-on experience coding SQL (various flavors) and object-oriented programming languages such as Python and Java – experience in Scala is a big plus.
• Working knowledge of NoSQL data stores – columnar stores such as Vertica, Cassandra, and ParAccel (Redshift), and document stores such as MongoDB and CouchDB.
• Deep proficiency in data management principles – traditional RDBMSs (Oracle, MS SQL Server, etc.), MPP appliances (Netezza, Teradata, etc.), and open-source DBMSs (MySQL, PostgreSQL, etc.).
• Deep expertise in data modeling techniques
• Expertise in Cloud provisioning, development and deployment (AWS preferred)
• Experience developing packages in a continuous integration and deployment (CI/CD) environment is a big plus.
• Working knowledge of the services layer – Java, Groovy/Grails, RESTful APIs, JSON, Maven, Jenkins, Git, Subversion, JIRA, Eclipse, etc.
• Ability to wear multiple hats across the software development life cycle – requirements, design, code development, QA, testing, and deployment; experience working in an Agile/Scrum methodology is a big plus.
• Excellent problem-solving skills.
• Passionate about the usability and presentation of complex, high-volume analytics.
• Excellent oral and written communication skills.
Education
Bachelor's degree required; an advanced degree in Computer Science preferred.
Certification
None Required