AI Tech Lead
- Type of employment: Full-time
- €52,500 – €76,000 (estimated by XING)
- On-site
About this job
Job Summary
We are seeking a highly skilled and forward-thinking AI Tech Lead to architect, build, and lead the implementation of cutting-edge AI solutions using state-of-the-art platforms including Palantir Foundry, Anthropic (Claude), Google Gemini, and OpenAI (GPT-4/5). You will lead technical initiatives, guide cross-functional teams, and drive the strategic adoption of LLMs and generative AI within enterprise environments. Familiarity with AWS Cloud is essential for infrastructure deployment and scalability.
Key Responsibilities
Lead the design and development of enterprise AI solutions using LLMs from OpenAI, Anthropic, and Google Gemini, integrated into scalable cloud environments.
Leverage Palantir Foundry to model, manage, and operationalize data for AI-driven decision-making.
Develop and fine-tune LLM pipelines for summarization, RAG (retrieval-augmented generation), classification, agents, and copilots.
Collaborate with stakeholders to translate business problems into robust, AI-powered technical solutions.
Drive the evaluation and benchmarking of foundation models across platforms (Claude, Gemini, GPT).
Architect and deploy scalable solutions on AWS Cloud, leveraging services like SageMaker, Lambda, S3, ECS, and Bedrock.
Provide technical leadership and mentoring to engineers and data scientists.
Establish best practices for MLOps, model monitoring, prompt engineering, and generative AI governance.
Contribute to strategic AI roadmaps, vendor evaluations, and technical partnerships with leading LLM providers.
Required Qualifications
Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or a related field.
6+ years of experience in AI/ML with 2+ years in a lead or principal engineer role.
Hands-on experience working with LLMs from OpenAI, Anthropic (Claude), and Google Gemini.
Solid understanding of Palantir Foundry workflows, ontology management, and operationalizing machine learning within Foundry.
Proficiency in Python, API integration, and AI frameworks (LangChain, LlamaIndex, Transformers, etc.).
Strong experience deploying AI applications on AWS and familiarity with services like Bedrock, SageMaker, or ECS.
Experience with prompt engineering, model fine-tuning, vector databases (e.g., FAISS, Pinecone), and retrieval systems.
Demonstrated success in leading AI projects from concept to production in enterprise settings.