DevOps and Data Engineer (f/m/d)
Your tasks and responsibilities:
- Bridge the gap between data scientists, bioinformaticians and IT experts to develop and deploy practical solutions for clinical applications and biomedical research projects
- Contribute to the data life-cycle of a state-of-the-art drug screening platform and both standardized and exploratory biomarker research projects
- Design and deploy SQL databases and object stores on on-premise and cloud infrastructure
- Develop, standardize and execute data processing pipelines (ETL) for biomedical data
- Design and implement basic web applications for data access and visualization, for in-house use as well as for scientific and industry partners
- Deploy and maintain Docker containers for various data-science-related storage and computational services
- Set up CI/CD pipelines to turn existing web app prototypes into tested, easy-to-deploy MVPs
- Work together with laboratory scientists
- Work together with the IT team to automate company processes
Expected skills and abilities:
- You have a PhD in computer science or a related field, or an MSc with at least 5 years of relevant working experience
- You feel at home when working with the Linux command line
- You are experienced with Docker (experience with container orchestration tools is a plus)
- You have hands-on experience with structured databases (user requirements, design, implementation, maintenance, and setting up ETL pipelines)
- You have experience with collaborative working tools (git, ticketing systems)
- You can navigate and configure cloud and on-premise compute resources (e.g. AWS, GCP)
- You have hands-on experience with cloud and on-premise object stores (e.g. AWS S3, MinIO)
- You have experience with Python or an equivalent language for data visualization and web development
Expected soft-skills:
- Make-it-work mentality: you are skilled at finding practical yet sustainable solutions
- Documentation mentality: you understand the value of writing concise and effective documentation for colleagues
- Good listening, communication, and interpersonal skills for cooperating with non-technical staff
- Very good command of English, written and spoken
Optional and beneficial skills include experience with:
- GraphQL
- Kubernetes
- Biomedical data (e.g., pathology images, flow cytometry, NGS, metabolomics, microbiome)
- Image analysis and computer vision (including machine learning and conventional methods)
- R programming language
- Working in an ISO certified environment
- German language skills (a plus, but not required to thrive in this position)
What we offer:
- Flexible working hours with the possibility of home office
- Family-friendly environment
- Jobticket
- You will have the opportunity to work in an internationally connected company, within a young, dynamic, and international team
- Your work on improving and maintaining our data pipelines will directly impact patient well-being, both in the short term through drug screening approaches and in the long term through state-of-the-art biomarker research
- Opportunity to learn new tools and shape your own work experience by using the technology of your choice
- We offer an unlimited contract with a minimum annual salary of €58,000 gross, based on 40 hours/week. A higher salary is possible, depending on qualifications and work experience.