Hadoop, Storm, Spark Developer
Yahoo!
Lockport, NY, United States
Job Details
A Little About Us
Hadoop is the de facto operating system for the worldwide cloud computing industry. Yahoo! is the birthplace of Hadoop and has the largest Hadoop install in the world. Our server count has grown from a few hundred to over 40,000 in the past three years. We have over 400,000 cores that process nearly 300PB of data spread over 25 clusters. We are now working on the next generation cluster design with over 10,000 nodes, 100,000 cores, and 160PB of data in a single cluster.
A Lot About You
We are looking for experienced and motivated software engineers to help us build the highly scalable and robust next-generation Hadoop stack that such a cluster requires. The qualified engineer would work on one or more components of the stack, including Pig, HCatalog, Oozie, Hive, HBase, and overall Hadoop performance.
If you have a strong distributed systems background, love to solve complex and challenging problems, can work independently, and want to participate in some of the most exciting open source projects, we want to hear from you! If you want to learn about Hadoop and gain a deep understanding of cloud computing, this is the job for you.
Your Day
• Help Yahoo! design the next generation cloud
• Understand all aspects of the Hadoop stack and learn specific components in fine detail
• Be a leader in open source (Apache Software Foundation) projects
• Design massively distributed technology and develop leading-edge cloud computing software
• Work closely with service engineering, operations, and Hadoop users to craft solutions that work for Yahoo! and lead the open source community
You Must Have
• BS/MS in Computer Science (or equivalent)
• Strong in Java or C++
• Deep understanding of Algorithms, Data Structures, and Performance Optimization Techniques
• 5+ years of professional experience
• Working knowledge of SQL
• Experience with large distributed data and systems
• Experience in the Unix environment