
IT Data Engineer - R&D IT

NXP
Posted 25 Mar 2026 (2 days ago)
Data Science Corporate Senior
SQL Python AWS Databricks ETL CI/CD
AI Summary

As a Data Engineer you will develop advanced ETL pipelines and build a cloud-based R&D data lake using SQL, Python, and AWS, improving the efficiency of product introductions. This role offers the opportunity to create innovative solutions that directly impact business decisions and safeguard data quality in a dynamic, collaborative environment.

Job Description

Are you ready to join the future of innovation at NXP? As a Data Engineer you will directly impact the cost efficiency and speed of NXP’s New Product Introductions: you will be responsible for the quality of the data and business insights that enable sound R&D business decisions. Together with your data engineering team and the R&D data scientists, you shape the future of the R&D data analytics platform. Your solutions delight the R&D business with a top-notch, self-service, fully automated, cloud-based data platform.

What you will do as a Data Engineer at NXP

As part of the data engineering team, you will help shape and evolve the R&D data analytics platform. You will work closely with colleagues to explore new ideas, innovate on existing capabilities, and drive the platform forward through collaboration and technical excellence.

Your key responsibilities
• Cross-Functional Collaboration & Enablement: Partner with project managers, resource managers, IT teams, data scientists, and analysts to gather requirements, define project scope, and deliver reliable, model-ready datasets, removing data-pipeline friction and ensuring strong alignment with business objectives.
• Design, build, and maintain ETL pipelines that ingest and transform data into a cloud-based R&D data lake.
• Develop reusable data models and libraries that streamline common analytics and ETL use cases.
• Implement and uphold data quality through validation, monitoring, and anomaly-detection mechanisms.
• Ensure security and compliance by embedding RBAC, lineage, auditing, and regulatory standards into pipelines.
• Optimize ETL pipelines for performance, scalability, and cost efficiency.
• Automate deployments and operations using CI/CD, Infrastructure as Code, and ETL jobs.
• Monitor and troubleshoot production pipelines, resolving data or platform issues and improving reliability over time.
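To give a flavor of the data-quality responsibility above, here is a minimal sketch of a validation check an ETL pipeline might run before loading data. The z-score approach, the threshold, and the sample data are illustrative assumptions only, not part of the role description:

```python
from statistics import mean, stdev

def detect_anomalies(values, z_threshold=3.0):
    """Flag values whose z-score exceeds the threshold.

    A simple validation/monitoring check of the kind a pipeline
    might apply to a numeric column before loading it into the
    data lake. The default threshold of 3.0 is an assumption.
    """
    if len(values) < 2:
        return []  # not enough data to estimate spread
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []  # all values identical: nothing stands out
    return [v for v in values if abs(v - mu) / sigma > z_threshold]

# Hypothetical yield measurements with one obvious outlier
yields = [97.8, 98.1, 98.0, 98.3, 97.9, 98.2, 98.0,
          98.1, 97.7, 98.4, 98.0, 98.2, 12.5]
print(detect_anomalies(yields))
```

In a production pipeline such a check would typically be wired into the monitoring layer so that flagged records are quarantined or alerted on rather than silently loaded.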
What you bring

You can describe yourself as follows:

Education & Experience
• Education: Master’s degree (or equivalent practical experience) in Data Engineering, Software Engineering, Computer Science, or a related technical field.
• Experience: 7+ years of professional data engineering experience working with Big Data within an enterprise IT infrastructure environment.
• AWS Data Lake Experience: Hands-on experience with AWS-native data lake services, including AWS Glue, S3, Athena, and Lake Formation, covering cataloging, governance, ETL orchestration, and secure data access management.
• Databricks Expertise: Extensive hands-on experience designing and implementing ETL pipelines in Databricks, including proven experience migrating existing cloud-based data lakes and ETL workflows to the Databricks platform.
• Delta Lake Expertise: Strong experience working with Delta Lake, including designing Delta tables, optimizing performance, and managing schema evolution and versioned data.

Technical Skills
• Cloud & Automation: Strong experience with cloud-native engineering (Azure/AWS), Infrastructure-as-Code, CI/CD, and DevOps/MLOps best practices. Hands-on experience with AWS CDK, Ansible, and GitLab CI is a big plus.
• Programming: Advanced proficiency in Python and SQL, with the ability to build robust, maintainable, and reusable code frameworks.
• Data Quality & Governance: Strong command of observability, lineage, data quality frameworks, metadata management, and secure data access patterns.
• Performance Optimization: Expertise in cluster/job tuning, job orchestration, storage optimization, and cost management in large data lake environments.

Professional Attributes
• Customer & Stakeholder Focus: Strong communicator who can translate technical concepts into business value and collaborate effectively across data science, architecture, and the wider R&D organization.
• Strategic Problem-Solving: Comfortable owning technical challenges and designing long-term, scalable solutions.
• Team Mindset: A natural collaborator who contributes to an open, supportive working culture.
• Agile Ways of Working: Familiarity with Agile and Scrum methodologies, including iterative delivery, sprint planning, backlog refinement, and cross-functional collaboration.

More information about NXP in India...