  • Location: Warsaw, Masovia, Poland
    Career level: Skilled professionals
    Employment type, work type: Full time
    Publication date, ID no.: , 345735

    Your tasks

    • Analyze data and design, code, test, debug, automate, document, and maintain data solutions
    • Support building, improving, and maintaining a cloud-native data lake platform
    • Integrate machine learning and operations research solutions into the DB Schenker system landscape
    • Support and interact with data scientists, operations research specialists, and business consultants in all their data-related activities

    Requirements

    • Experience working with distributed computing tools, especially Spark on Databricks, and streaming technologies such as Kafka and Spark Structured Streaming (see the sketch after this list)
    • Knowledge of cloud platforms (ideally Azure, alternatively AWS or GCP) and their associated data and machine learning services
    • Fluency in at least one programming language such as Python or Scala
    • Experience with relational databases (Oracle, PostgreSQL) and good knowledge of SQL
    • Fluency in POSIX systems (e.g., Linux, macOS) and the command line
    • Experience with orchestration / data pipelining tools such as Argo, Azure Data Factory, or Airflow
    • Experience in delivering software and with the software development life cycle: source code repositories (Git) with versioning, branching, and peer review; continuous integration (e.g., GitLab CI, Azure DevOps); deployment and release (e.g., artifact building and repositories); and maintenance
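
    As a rough illustration of the Spark, Kafka, and Databricks stack named above, a minimal PySpark Structured Streaming sketch might look like the following. The broker address, topic name, event schema, and storage paths are illustrative assumptions, not references to any actual DB Schenker system.

```python
# Hypothetical sketch: consume a Kafka topic with Spark Structured Streaming
# and append the parsed events to a Delta table. All names and paths are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("shipment-events-stream").getOrCreate()

# Assumed schema of the JSON payload on the topic.
event_schema = StructType([
    StructField("shipment_id", StringType()),
    StructField("status", StringType()),
    StructField("weight_kg", DoubleType()),
    StructField("event_time", TimestampType()),
])

# Read the raw byte stream from Kafka (requires the spark-sql-kafka package).
raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "shipment-events")            # placeholder topic
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers the payload as bytes; cast and parse the JSON value into columns.
events = (
    raw.selectExpr("CAST(value AS STRING) AS json")
    .select(F.from_json("json", event_schema).alias("e"))
    .select("e.*")
)

# Append the parsed events to a Delta table, tracking progress via a checkpoint.
query = (
    events.writeStream
    .format("delta")
    .option("checkpointLocation", "/mnt/datalake/_checkpoints/shipment_events")
    .outputMode("append")
    .start("/mnt/datalake/bronze/shipment_events")
)
query.awaitTermination()
```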

    Nice-to-have knowledge:

    • Experience with Machine Learning Operations processes (data versioning, model versioning, ...) and tools (MLflow, Azure Machine Learning, SageMaker); see the sketch after this list
    • Familiarity with Cloud Data Warehouse and/or Data Lakehouse Approaches (Databricks, Azure Synapse, Snowflake)
    • Knowledge of front-end analysis tools (e.g. MS PowerBI, Tableau)
    • Good knowledge of a data analysis framework such as pandas or Spark DataFrames
    • Experience working with advanced Data Acquisition (Change Data Capture, Streams, APIs)
    • Experience in Metadata Management and Data Cataloging
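
    For the MLOps item above, here is a small sketch of how a training run could be tracked with MLflow so that parameters, metrics, and model versions stay reproducible. The experiment name, parameters, and toy dataset are assumptions for illustration only.

```python
# Hypothetical sketch: log a trained model, its parameters, and a metric to MLflow
# so each run becomes a versioned, reproducible artifact. Names are placeholders.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

# Toy regression data standing in for a real training set.
X, y = make_regression(n_samples=500, n_features=10, random_state=42)

mlflow.set_experiment("eta-prediction-demo")  # placeholder experiment name

with mlflow.start_run():
    params = {"n_estimators": 100, "max_depth": 8}
    model = RandomForestRegressor(**params, random_state=42).fit(X, y)

    # Record the parameters, a training metric, and the fitted model itself.
    mlflow.log_params(params)
    mlflow.log_metric("train_r2", model.score(X, y))
    mlflow.sklearn.log_model(model, artifact_path="model")
```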

    Our offer

    The Global Data & AI department is on a mission to turn DB Schenker into a data-driven company. Our Data Engineering team focuses on designing advanced analytics solutions for business use cases in logistics, using Machine Learning and AI techniques, Data Visualization tools, and Big Data technologies in cooperation with our Software Engineering team, internal business units, and global IT. We are looking for a talented Data Engineer who can contribute to our projects with solid data intuition, hands-on problem-solving skills, an engineering mindset, and an eagerness to learn about our logistics business data.

    Contact

    DB Schenker is acting as an Employment Agency in relation to this vacancy.
