Data Engineer – Consumer Goods – LATAM
Day rate: £150–£300
Duration: 1–3 months
Start: ASAP
My new client in the consumer goods sector is embarking on an exciting project focused on analysing marketing data.
The project integrates data from several sources, such as Adverity, campaign briefs, and marketing reports.
We aim to build a robust data infrastructure that will enable weekly analysis of campaign performance, audience segmentation, and ROI calculation.
The pilot is designed to be scalable, with plans to extend it to other brands and integrate more data.
They are looking for a Data Engineer to design, implement, and maintain the data infrastructure for this innovative marketing analytics project.
The ideal candidate will have strong skills in data integration, warehousing, and processing, with the ability to work on a standalone system that will be the foundation for future expansion.
Primary Responsibilities
Data Source Integration
Set up and maintain connectors for various data sources, with a primary focus on Adverity integration
Develop and optimize data extraction and ingestion pipelines (a minimal pipeline sketch follows this list)
Implement data transformation and cleaning processes for marketing data
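To give a flavour of the ingestion work involved, here is a minimal sketch of a weekly Airflow DAG that pulls an export from an Adverity-style API and hands it to a warehouse staging load. The endpoint, token, task names, and staging table are illustrative placeholders rather than the client's actual configuration, and the real connector setup would follow whatever stack is agreed on the project.

```python
from datetime import datetime

import requests
from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_adverity_export(**context):
    # Placeholder endpoint and token; a real deployment would use an Airflow
    # connection or a secrets backend rather than hard-coded values.
    response = requests.get(
        "https://example.adverity.com/api/exports/latest",
        headers={"Authorization": "Bearer <token>"},
        timeout=60,
    )
    response.raise_for_status()
    context["ti"].xcom_push(key="raw_export", value=response.json())


def load_to_staging(**context):
    rows = context["ti"].xcom_pull(task_ids="extract", key="raw_export")
    # A real load would write to a warehouse staging table via its client
    # library; the target platform is still open, so this only reports.
    print(f"Would load {len(rows)} records into staging.marketing_raw")


with DAG(
    dag_id="adverity_weekly_ingest",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@weekly",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_adverity_export)
    load = PythonOperator(task_id="load", python_callable=load_to_staging)
    extract >> load
```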
Data Warehousing
Design and implement a scalable data warehouse schema suitable for marketing analytics
Set up efficient ETL/ELT processes for weekly data loading (see the warehouse sketch after this list)
Develop data partitioning and indexing strategies for optimal query performance
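As a sketch of how the weekly load and partitioning strategy might look, the snippet below defines an illustrative campaign fact table and an idempotent weekly MERGE, run through any DB-API cursor. The schema, column names, and Snowflake-flavoured syntax (clustering keys, NUMBER types) are assumptions for illustration and would be adapted to whichever warehouse is chosen.

```python
# Illustrative star-schema DDL and weekly upsert for campaign facts.
FACT_DDL = """
CREATE TABLE IF NOT EXISTS analytics.fact_campaign_performance (
    campaign_id   VARCHAR,
    brand_id      VARCHAR,
    report_week   DATE,
    impressions   NUMBER,
    clicks        NUMBER,
    spend         NUMBER(12, 2),
    conversions   NUMBER,
    loaded_at     TIMESTAMP_NTZ DEFAULT CURRENT_TIMESTAMP()
)
-- Clustering on week and brand keeps weekly, per-brand queries well pruned
CLUSTER BY (report_week, brand_id)
"""

WEEKLY_MERGE = """
MERGE INTO analytics.fact_campaign_performance AS tgt
USING staging.marketing_raw AS src
    ON  tgt.campaign_id = src.campaign_id
    AND tgt.report_week = src.report_week
WHEN MATCHED THEN UPDATE SET
    impressions = src.impressions,
    clicks      = src.clicks,
    spend       = src.spend,
    conversions = src.conversions
WHEN NOT MATCHED THEN INSERT
    (campaign_id, brand_id, report_week, impressions, clicks, spend, conversions)
VALUES
    (src.campaign_id, src.brand_id, src.report_week,
     src.impressions, src.clicks, src.spend, src.conversions)
"""


def run_weekly_load(cursor) -> None:
    """Create the fact table if needed, then run the idempotent weekly merge."""
    cursor.execute(FACT_DDL)
    cursor.execute(WEEKLY_MERGE)
```

Because the merge keys on campaign and week, re-running the weekly load is safe, which matters when upstream marketing data arrives late or is restated.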
Data Quality and Governance
Implement comprehensive data quality checks and validation rules (example checks follow this list)
Establish data lineage tracking systems
Develop and enforce data governance policies in line with UK regulations
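As an illustration of the kind of validation rules meant here, the sketch below runs a few plain-pandas checks on a weekly extract; in production these would more likely be expressed in a framework such as Great Expectations or Deequ (both listed under Technologies). Column names and thresholds are assumptions, not an agreed specification.

```python
import pandas as pd


def validate_weekly_extract(df: pd.DataFrame) -> list[str]:
    """Return a list of human-readable quality failures for a weekly extract."""
    failures = []

    # Key completeness: every row must identify a campaign.
    if df["campaign_id"].isna().any():
        failures.append("campaign_id contains nulls")

    # Uniqueness: one row per campaign per reporting week.
    if df.duplicated(subset=["campaign_id", "report_week"]).any():
        failures.append("duplicate campaign/week rows found")

    # Plausibility: spend can never be negative.
    if (df["spend"] < 0).any():
        failures.append("negative spend values found")

    # Freshness/range: reporting weeks must fall within a sensible window.
    in_range = df["report_week"].between(pd.Timestamp("2020-01-01"), pd.Timestamp.today())
    if not in_range.all():
        failures.append("report_week outside the expected range")

    return failures
```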
Analytics Support
Collaborate with data analysts to understand and support their data needs
Optimize data models for campaign performance analysis, audience segmentation, and ROI calculations (see the ROI sketch after this list)
Develop and maintain data pipelines for generating weekly insights
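For a concrete (and deliberately simplified) example of the ROI modelling the analysts will need, the sketch below aggregates an assumed fact table to weekly ROI per brand and audience segment. The column names (brand_id, audience_segment, spend, revenue) are placeholders for whatever the final model exposes.

```python
import pandas as pd


def weekly_roi(facts: pd.DataFrame) -> pd.DataFrame:
    """Aggregate campaign facts to weekly ROI per brand and audience segment."""
    grouped = (
        facts.groupby(["brand_id", "audience_segment", "report_week"], as_index=False)
        .agg(spend=("spend", "sum"), revenue=("revenue", "sum"))
    )
    # ROI as net return per unit of spend; zero-spend rows become NaN
    # rather than dividing by zero.
    safe_spend = grouped["spend"].where(grouped["spend"] > 0)
    grouped["roi"] = (grouped["revenue"] - grouped["spend"]) / safe_spend
    return grouped
```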
System Architecture
Design and implement a modular, scalable architecture that can expand to other brands and countries (see the configuration sketch after this list)
Ensure the system can handle increasing data volumes and complexity over time
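One way to keep the pilot modular is to drive every brand and market from configuration rather than bespoke code; the sketch below is an assumed pattern, not a prescribed design, and the brand, datastream, and schema names are placeholders.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class BrandConfig:
    """Per-brand, per-market settings; scaling out means adding entries, not new pipeline code."""
    brand_id: str
    country: str
    adverity_datastream: str
    target_schema: str


# The pilot brand today; future brands and countries become new entries here.
BRANDS = [
    BrandConfig("pilot_brand", "GB", "datastream_pilot", "analytics_pilot"),
]


def weekly_pipeline_ids(brands: list[BrandConfig]) -> list[str]:
    """Derive one weekly ingestion pipeline identifier per configured brand."""
    return [f"{b.brand_id}_{b.country.lower()}_weekly_ingest" for b in brands]
```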
Qualifications
Strong programming skills in Python
Extensive experience with SQL and data warehousing concepts
Proficiency in designing and implementing ETL/ELT processes
Preferred Skills
Experience with cloud platforms (AWS, GCP, or Azure)
Knowledge of data governance and compliance requirements
Basic understanding of DevOps practices and tools
Fluency in spoken and written English
Technologies
While we're open to various technology solutions, experience with some of the following is beneficial:
Data Integration: Apache Airflow, Talend, or similar ETL tools
Data Warehousing: Snowflake, Amazon Redshift, or similar
Data Quality: Great Expectations, Deequ, or similar
Data Processing: Apache Spark, dbt, or similar
Version Control: Git
We encourage candidates to bring their expertise and suggest optimal solutions for our needs.
Nice-to-Have DevOps Skills
While not required, familiarity with the following DevOps practices would be beneficial:
Infrastructure as Code (e.g., Terraform, CloudFormation)
Containerization (e.g., Docker)
CI/CD pipelines
Monitoring and logging systems