AI Research Intern - Foundation Models & Multimodal Intelligence
Apple Inc
Computer Hardware
Zürich
- Employment type: Full-time
- On-site
About this job
Summary
Role Number: 200622966-4170
Imagine building the next generation of AI-powered experiences at Apple. We are advancing the state of the art in foundation models, applying them across language, vision, and multimodal understanding to power features used by millions of people worldwide.
Description
As part of the Multimodal Intelligence Team (MINT), which has a track record of delivering innovations from the Apple Foundation Model to real-world applications such as Visual Intelligence, you will tackle the practical challenges of scaling, optimizing, and building large models, as well as integrating such models and agents into Apple products. You’ll collaborate with world-class engineers and scientists to push the boundaries of foundation models and agentic systems while delivering real-world impact.
Responsibilities
- You will work on advancing the post-training capabilities of multimodal foundation models for agentic applications. This includes researching and developing methods to improve how these models understand, reason about, and interact with complex environments through techniques like supervised fine-tuning and reinforcement learning from various reward signals.
- You will design evaluation frameworks to assess agent performance on realistic tasks and experiment with training strategies that enhance the model's ability to perceive multimodal inputs, understand intent, and execute complex autonomous behaviors. A key part of your role will be staying current with emerging research in foundation model agents, identifying techniques that could advance the state-of-the-art in autonomous systems.
- Your work will involve large-scale multimodal datasets combining visual, textual, and interaction data to push the boundaries of agent capabilities. A primary goal of this internship is to produce novel research suitable for publication at a top-tier conference. You will collaborate with researchers and engineers passionate about creating more capable autonomous systems, contributing to cutting-edge research in this rapidly evolving field.
Minimum Qualifications
- Currently pursuing a PhD degree or equivalent experience in Machine Learning, Computer Vision, Natural Language Processing, Data Science, Statistics or related areas.
- Experience with large language models or vision-language models and their application in agentic systems.
- Proficient programming skills in Python and experience with at least one modern deep learning framework (PyTorch, JAX, or TensorFlow).
Preferred Qualifications
- Demonstrated publication record in relevant conferences (e.g., NeurIPS, ICML, ICLR, CVPR).
- Experience with foundation models (language, vision-language, or multimodal).
- Experience with post-training (SFT or RL) to optimize large models for agentic systems.
- Available for a 6-12 month internship.