Data Catalog Implementation Specialist (m/w/d)
Michael Page
Mechanical engineering, plant engineering
Düsseldorf
- Employment type: Freelance (self-employed)
- Hybrid
About this job
Intro
- Challenging project
- Near-term start
Company profile
Start: ASAP
Project duration: 3+ months
Workload: 5 days per week
Location: remote (95%), Düsseldorf
Industry: Sales
Project language: English (German nice to have)
Responsibilities
We are seeking a hands-on Data Catalog Expert to lead a critical pilot project focused on democratizing data access for our Data Science team.
The engagement involves setting up an initial data catalog instance and integrating it with our Databricks environment to register key source datasets.
The objective is to demonstrate the value of a centralized metadata repository by transforming raw table lists into a searchable, context-rich asset library that accelerates model development and analytics.
Candidate profile
Key Responsibilities:
- Pilot Configuration:
- Deploy and configure the initial instance of the data catalog (e.g., Atlan or DataHub) to support a Proof of Concept (PoC) scope. Work with the DataHub open-source framework within our ecosystem.
- Needs Assessment:
- Consult on how effectively the data catalog meets organizational requirements and identify gaps.
- Databricks Integration:
- Establish secure connectivity between the data catalog and Databricks Unity Catalog/Delta Lake to automate ingestion of schemas, tables, and views.
- Metadata Enrichment:
- Execute automated metadata extraction and implement tagging strategies to classify sensitive data (PII) and add business context (descriptions, owners).
- User Onboarding:
- Design a streamlined workflow enabling Data Scientists to search, query, and request access to datasets directly through the catalog interface.
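The tagging strategy mentioned under Metadata Enrichment could be sketched as a simple rule-based column classifier. The patterns and tag names below are illustrative assumptions, not DataHub or Atlan defaults:

```python
import re

# Hypothetical tagging policy: map column-name patterns to PII tags.
# Pattern list and tag vocabulary are invented for illustration.
PII_PATTERNS = {
    r"(^|_)(email|e_mail)($|_)": "pii.email",
    r"(^|_)(phone|mobile)($|_)": "pii.phone",
    r"(^|_)(ssn|tax_id)($|_)": "pii.national_id",
    r"(^|_)(first|last)_?name($|_)": "pii.name",
}

def classify_columns(columns):
    """Return {column: [tags]} for columns matching any PII pattern."""
    tags = {}
    for col in columns:
        hits = [tag for pat, tag in PII_PATTERNS.items()
                if re.search(pat, col.lower())]
        if hits:
            tags[col] = hits
    return tags
```

In practice such rules would feed the catalog's own classification engine; this sketch only shows the shape of an automated policy.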
Compensation package
Required Technical Skills:
- Data Cataloging:
- Proven experience configuring modern data governance platforms, specifically Atlan and/or DataHub.
- DataHub Open Source Framework:
- Hands-on knowledge of the DataHub open-source catalog framework.
- Data Platforms:
- Strong proficiency with Databricks (Lakehouse architecture, Delta Lake, Unity Catalog) and Spark SQL.
- Metadata Management:
- Expertise in metadata ingestion frameworks, API-based integration (REST/GraphQL), and automated classification policies.
- Scripting:
- Proficiency in Python or SQL for custom connector configuration and metadata manipulation.
- Governance:
- Familiarity with data stewardship principles, including ownership assignments, glossary creation, and certification workflows.
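As a rough illustration of the Python-based metadata manipulation listed above, the sketch below builds a DataHub-style dataset URN and a simplified registration payload. The URN follows DataHub's documented convention; the payload shape is an approximation of a metadata change proposal, not the exact REST schema:

```python
def make_dataset_urn(table, platform="databricks", env="PROD"):
    # Follows DataHub's dataset URN convention:
    # urn:li:dataset:(urn:li:dataPlatform:<platform>,<name>,<env>)
    return f"urn:li:dataset:(urn:li:dataPlatform:{platform},{table},{env})"

def dataset_properties_proposal(table, description, owner):
    # Simplified payload for registering description/owner on a dataset;
    # field names mirror DataHub concepts but are not the exact API schema.
    return {
        "entityType": "dataset",
        "entityUrn": make_dataset_urn(table),
        "aspectName": "datasetProperties",
        "aspect": {
            "description": description,
            "customProperties": {"owner": owner},
        },
    }
```

A real integration would emit such proposals through the catalog's REST or Python SDK rather than hand-built dictionaries.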
Pilot Deliverables:
- Functional connection between Databricks and the data catalog (Atlan/DataHub).
- Registration of high-priority source datasets with complete metadata (descriptions/tags).
- Demonstrable "Search-to-Query" workflow for the Data Science team.
- Final recommendation report on catalog scalability and long-term architecture.
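The "Search-to-Query" deliverable can be pictured with a minimal in-memory sketch; the sample tables and the query template are hypothetical:

```python
# Toy catalog: a keyword search over registered datasets that returns
# a ready-to-run Spark SQL statement for the selected table.
CATALOG = [
    {"table": "sales.orders",    "description": "Customer orders with totals"},
    {"table": "sales.customers", "description": "Customer master data"},
]

def search(keyword):
    """Match keyword against table names and descriptions."""
    kw = keyword.lower()
    return [e for e in CATALOG
            if kw in e["table"].lower() or kw in e["description"].lower()]

def to_query(entry, limit=100):
    """Turn a catalog hit into a starter Spark SQL query."""
    return f"SELECT * FROM {entry['table']} LIMIT {limit}"
```

In the actual pilot this flow would run through the catalog UI with access requests in between; the sketch only shows the search-then-query handoff.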