Lead Data Engineer (all genders)
Beiersdorf AG
Consumer Goods, Retail
Hamburg
- Employment type: Full-time
- €75,000 – €98,000 (estimated by XING)
- On-site
- Be one of the first applicants
About this job
Your Tasks
As a Lead Data Engineer, you will conceptualize, design, implement, and support data pipelines and databases within our Azure/Databricks Data & Analytics platform. You will always find the balance between individual requirements and stable solutions. You are open to new technologies and see them as an opportunity to enable even deeper insights into data. You will work closely with our project leads, reporting consultants, data scientists, and cloud engineers on global projects in mixed project teams. It is important to us that everyone can contribute their experiences and concerns to discussions.
Your main tasks are:
- Design and develop data pipelines within Azure and Databricks tech stack
- Code advanced data processing solutions within Azure and Databricks tech stack
- Design and implement relational and NoSQL database solutions within Azure and Databricks
- Manage development and code using Azure DevOps
- Manage our Data Lakehouse, which stores structured, semi-structured, and unstructured data
- Act as primary technical liaison to the Architecture Team, ensuring alignment with standards and guidelines
- Establish and maintain a productive feedback loop between architectural design and implementation
- Technically mentor mid-level Data Engineers in the team (no people management)
Your Profile
- Bachelor's or Master's degree in a related field
- Several years of experience in Data Engineering or Software Development in cloud-native environments
- Deep operational expertise across the Azure ecosystem (Azure Data Factory, Databricks, Synapse, ADLS Gen2, Azure SQL)
- Strong knowledge of Azure serverless and integration services (Functions, Cosmos DB, Event Hubs/Service Bus)
- Solid experience implementing observability practices (logging, metrics, distributed tracing) and secure secrets management
- Expert-level proficiency in Python with strong PySpark experience
- Proficiency in at least one compiled/typed language (Scala, Java, or Go)
- Ability to write clean, modular, and performance-optimized code
- Demonstrated ability to scout technologies and evaluate emerging trends for practical business value
- Multi-cloud exposure (AWS/GCP) and experience with streaming architectures (Kafka, Spark Structured Streaming)
- Strong understanding of data governance concepts and CI/CD automation principles
Additional Information
If you have any questions, please contact our recruiter Carina Huesmann.
