eBay Shanghai is hiring Big Data Engineers!



  • eBay Shanghai is hiring Big Data Engineers. The eBay Data Warehouse organization is moving away from traditional ETL and shifting fully to open-source technologies. Applicants should have extensive experience with Hadoop, Spark, Kafka, Storm, and similar tools. Compensation is generous, at manager level. If interested, please send your resume to jinshi@ebay.com. The JD follows.

    Data Services & Solutions (DSS):

    DSS designs, builds, and maintains services and solutions for processing, synthesizing, governing, and exposing our most critical asset – our data. DSS works in partnership with analytics, product, platform, and business organizations across eBay.
    Our deep data expertise, state-of-the-art data management processes, and easy-to-use self-service tools put a wealth of trusted information at the fingertips of eBay’s business users everywhere. DSS puts the “Big” in Big Data. Our Enterprise Data Platform operates at a world-leading scale: on a daily basis, DSS employees around the world leverage a variety of technologies to manage data at a scale unmatched in the industry.

    Role & Responsibilities – Big Data Engineer

    Build large-scale data processing systems; the big data engineer is an expert in data warehousing solutions and can work with the latest (NoSQL) database technologies.
    Implement complex big data projects focused on collecting, parsing, managing, analyzing, and visualizing large data sets to turn information into insights across multiple platforms.
    Embrace the challenge of dealing with petabytes or even exabytes of data on a daily basis; understand how to apply technologies to solve big data problems and develop innovative big data solutions.
    Build data processing systems with Hadoop and Spark using Java, Python, or Scala; this should be second nature to the big data engineer (see the sketch after this list).
    Have sufficient software engineering experience, be able to architect highly scalable distributed systems using various open-source tools, understand how algorithms work, and have experience building high-performance algorithms.
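
For a flavor of the day-to-day work, here is a minimal sketch of such a Spark batch job in Scala, assuming a hypothetical JSON event feed on HDFS (the paths and the eventType column are illustrative, not taken from the posting):

```scala
import org.apache.spark.sql.SparkSession

object EventCounts {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("EventCounts")
      .getOrCreate()

    // Hypothetical input: one JSON object per line, stored on HDFS.
    val events = spark.read.json("hdfs:///data/events/current")

    // Aggregate: number of events per type (eventType is an assumed field).
    val counts = events.groupBy("eventType").count()

    // Write the result back to HDFS as Parquet for downstream consumers.
    counts.write.mode("overwrite").parquet("hdfs:///data/event_counts/current")

    spark.stop()
  }
}
```

The same DataFrame API shape holds from a laptop to a large cluster; only the cluster resources and data partitioning change.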

    Qualifications

    Enjoy being challenged and solving complex problems on a daily basis
    Be proficient in designing efficient and robust ETL workflows
    Have experience building stream-processing systems with frameworks such as Storm or Spark Streaming (see the sketch after this list)
    Have experience with NoSQL databases such as HBase, Cassandra, and MongoDB
    Have good knowledge of big data querying tools such as Hive and Impala
    Be proficient with Hadoop, MapReduce, and HDFS
    Be able to work with cloud computing environments
    Be able to work in teams and collaborate with others to clarify requirements
    Be able to assist in documenting requirements and to resolve conflicts or ambiguities
    Be able to tune Hadoop solutions to improve performance and end-user experience
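
As a rough illustration of the stream-processing qualification above, here is a minimal Spark Streaming sketch in Scala, assuming a toy socket source with placeholder host/port (a production job at this scale would more likely consume from Kafka):

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object StreamingWordCounts {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("StreamingWordCounts")
    // Micro-batch interval of 10 seconds.
    val ssc = new StreamingContext(conf, Seconds(10))

    // Toy source for illustration; host and port are placeholders.
    val lines = ssc.socketTextStream("localhost", 9999)

    // Classic per-batch word count over the incoming stream.
    val counts = lines
      .flatMap(_.split("\\s+"))
      .map(word => (word, 1))
      .reduceByKey(_ + _)

    counts.print() // prints a sample of each batch's counts to the driver log

    ssc.start()
    ssc.awaitTermination()
  }
}
```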

