Postdoc, research in LLM engineering i.e.
Swiss Federal Institute of Technology in Lausanne (EPFL)
Pharmaceuticals, medical technology
Lausanne
- Employment type: Full-time
- CHF 88,000 – 107,500 (estimated by XING)
- On-site
About this job
Work location | Lausanne, Lake Geneva region, Switzerland
Category | Health, Research Management
Function | Senior Scientist / Postdoc
Published |
EPFL, the Swiss Federal Institute of Technology in Lausanne, is one of the most dynamic university campuses in Europe and ranks among the top 20 universities worldwide. EPFL employs more than 6,500 people supporting the institution's three main missions: education, research and innovation. The EPFL campus offers an exceptional working environment at the heart of a community of more than 18,500 people, including over 14,000 students and 4,000 researchers from more than 120 different countries.
Mission
We're hiring a technically exceptional and impact-focused Postdoctoral Researcher in LLM engineering to lead research and development within the Meditron initiative, a suite of evolving open-source medical LLMs and multimodal foundation models.
You will contribute to model training, data curation, safety alignment, and global health deployments. You’ll join a fast-moving, interdisciplinary team of researchers, engineers, and students advancing trustworthy AI for medicine and humanitarian response.
Main duties and responsibilities
- Design scalable LLM pipelines using FSDP, DeepSpeed, and HuggingFace Accelerate
- Lead model development (e.g., LLaMA, Mistral, Phi, Gemma) using LoRA, FlashAttention-2, and MoE (a minimal sketch follows this list)
- Contribute to safety alignment (RLHF, DPO, red-teaming, rejection sampling, calibration)
- Support data pipeline audits (tokenizer design, deduplication, privacy, synthetic supervision)
- Benchmark across general and medical tasks (lm-eval-harness, HELM, guideline adherence)
- Publish and present research at top venues (e.g., NeurIPS, ICLR, ML4H)
- Collaborate closely with clinicians and humanitarian partners to ensure safety and usability
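For orientation, below is a minimal sketch of the kind of LoRA-based model development referred to above, assuming HuggingFace Transformers and PEFT. The base checkpoint, target modules, and hyperparameters are illustrative placeholders, not settings prescribed by the posting.

# Minimal LoRA fine-tuning setup (illustrative; not project code).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model = "mistralai/Mistral-7B-v0.1"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# Attach low-rank adapters to the attention projections; rank/alpha values are illustrative.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable

# From here, training would typically run under Accelerate/FSDP or DeepSpeed
# to shard larger checkpoints across GPUs.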
Profile
- Proven experience training 1B-parameter models with distributed infrastructure
- Expertise in PyTorch, HuggingFace Transformers, DeepSpeed, and FSDP
- Deep understanding of transformer internals and optimization strategies
- Strong Python engineering skills (tests, containers, CI/CD, reproducibility)
- Clear scientific communication and publication experience
Preferred
- Familiarity with clinical LLMs or decision support systems
- Experience with safety-critical evaluation (e.g., hallucination detection, benchmark leakage)
- Contributions to open-source projects
- Passion for equity-centered deployment and global health
Our Stack
PyTorch, HuggingFace, DeepSpeed, FSDP, WANDB, Hydra, MLFlow, WebDataset, Slurm, Docker, lm-eval-harness, OpenCompass, MedQA, SwissAlps (CSCS)
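To show how the evaluation pieces of this stack fit together, here is a hedged sketch using the lm-eval-harness Python API. The checkpoint and task identifiers are assumptions for illustration; exact task names should be checked against the installed harness version.

# Benchmarking a HuggingFace checkpoint with lm-eval-harness (illustrative).
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=mistralai/Mistral-7B-v0.1",  # placeholder checkpoint
    tasks=["medqa_4options", "pubmedqa"],  # assumed task names
    num_fewshot=0,
    batch_size=8,
)
print(results["results"])  # per-task metrics, e.g. accuracy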
We offer
At LiGHT, we combine scientific rigor with purpose-driven action. We value creativity, humility, collaboration, and principled research that leads to tangible health impact. Our team culture embraces diverse backgrounds and sustained commitment to excellence, equity, and integrity.
Information
Only applications submitted through the online platform will be considered. Please supply:
- A brief cover letter (PDF, up to 2 pages).
And, in a single PDF:
- A CV with a publication list.
- A research statement (up to 3 pages).
- Contact details for 3 referees.
For any further information, please contact: [XXX]
Contract Start Date:
Activity Rate: 100%