Senior Data Scientist/ML Engineer/AI Engineer (f/m/x)
apsa personnel concepts gmbh
Personnel services and consulting
Vienna
- Employment type: Full-time
- €66,500 – €74,500 (estimated by XING)
- On-site
About this job
Vienna
Annual salary
Industry: Artificial Intelligence & ML
Location: Vienna (hybrid)
Duration: Long-term
Language: English
These are your responsibilities:
- End-to-end responsibility for ML, AI, and agent-based AI use cases: from problem definition to data preparation, modeling, orchestration, evaluation, and production deployment.
- Design of agent architectures (e.g., planner–executor, multi-agent collaboration) with features like memory, reflection, and tool use, taking into account robustness, transparency, and controllability.
- Implementation of the Model Context Protocol (MCP) for standardized and secure tool integration as well as capability detection across internal and external services.
- Orchestration of LLMs and tools using agent frameworks (e.g., LangChain, LlamaIndex, Semantic Kernel, OpenAI Assistants), including function/tool calls, fallbacks, and guardrails.
- Development and optimization of RAG pipelines (Retrieval-Augmented Generation): data ingestion, chunking, embedding, vector storage, retrieval strategies, caching, and evaluation in terms of precision, recall, and latency.
- Establishment of robust LLMOps/MLOps practices: experiment tracking, prompt and data versioning, CI/CD, model and artifact management, monitoring, and incident response.
- Promotion of reliability, security, and compliance: protection against prompt injection, content filtering, policy enforcement, red-teaming, and measurable quality controls.
- Conducting rigorous offline/online evaluations: backtesting, time-series cross-validation, A/B and shadow deployments, canary releases, drift and impact monitoring.
- Optimization of performance and cost: latency, throughput, rate limits, batching, streaming, caching, and development of monitoring dashboards for usage and cost.
- Close collaboration with product owners, engineers, and business stakeholders; implementation of scalable solutions on Databricks (AWS) and integration into banking systems.
- Creation of clear documentation, templates, and playbooks; mentoring of colleagues and contribution to best practices in the community.
You bring:
- 3+ years of professional experience in developing and operating software solutions.
- You are a team player who enjoys collaborative work but is also capable of solving problems independently.
- 2+ years of hands-on experience with cloud computing on AWS, especially with Amazon Linux and AWS Lambda.
- 2+ years of experience working with various database technologies and ideally with data lakes (e.g., AWS S3).
- 2+ years of experience in a product-driven environment.
- Programming skills in Python (ideally with Django) and experience in building RESTful APIs.
- Proven experience in setting up processes for data transformation, data structuring, metadata management, and workload management.
- Basic knowledge of ETL tools.
- Familiarity with agile working methods.
Soft skills:
- Initiative, curiosity, a sense of responsibility, creative ideas, and self-confidence.
- A structured approach to work and strong problem-solving skills.
- Fluent English spoken and written; German or another CEE language is a plus, but not required.
Job No.: 3379 | KAM: stefan.hilgner@apsa.at | REC: anja.aberer@apsa.at
Anja Aberer
