Data Engineer – Consumer Goods – LATAM

Day rate: £150 - £300
Duration: 1 – 3 months
Start: ASAP

My new client in the consumer goods sector is embarking on an exciting project focused on analysing marketing data. The project integrates data from various sources, such as Adverity, campaign briefs, and marketing reports, with the aim of building a robust data infrastructure that enables weekly analysis of campaign performance, audience segmentation, and ROI calculation. The pilot is designed to be scalable, with plans to extend it to other brands and to integrate additional data sources.

They are looking for a Data Engineer to design, implement, and maintain the data infrastructure for this marketing analytics project. The ideal candidate will have strong skills in data integration, warehousing, and processing, and will be comfortable building a standalone system that will serve as the foundation for future expansion.

Primary Responsibilities

Data Source Integration
- Set up and maintain connectors for various data sources, with a primary focus on the Adverity integration
- Develop and optimise data extraction and ingestion pipelines
- Implement data transformation and cleaning processes for marketing data

Data Warehousing
- Design and implement a scalable data warehouse schema suitable for marketing analytics
- Set up efficient ETL/ELT processes for weekly data loading
- Develop data partitioning and indexing strategies for optimal query performance

Data Quality and Governance
- Implement comprehensive data quality checks and validation rules
- Establish data lineage tracking
- Develop and enforce data governance policies in line with UK regulations

Analytics Support
- Collaborate with data analysts to understand and support their data needs
- Optimise data models for campaign performance analysis, audience segmentation, and ROI calculations
- Develop and maintain the data pipelines that generate weekly insights

System Architecture
- Design and implement a modular, scalable architecture that can expand to other brands and countries
- Ensure the system can handle increasing data volumes and complexity over time

Qualifications
- Strong programming skills in Python
- Extensive experience with SQL and data warehousing concepts
- Proficiency in designing and implementing ETL/ELT processes

Preferred Skills
- Experience with cloud platforms (AWS, GCP, or Azure)
- Knowledge of data governance and compliance requirements
- Basic understanding of DevOps practices and tools
- Fluent spoken and written English

Technologies
We are open to various technology solutions, but experience with some of the following is beneficial:
- Data Integration: Apache Airflow, Talend, or similar ETL tools
- Data Warehousing: Snowflake, Amazon Redshift, or similar
- Data Quality: Great Expectations, Deequ, or similar
- Data Processing: Apache Spark, dbt, or similar
- Version Control: Git

We encourage candidates to bring their expertise and suggest the solutions they consider best suited to our needs. Two short, purely illustrative code sketches of the kind of work involved appear at the end of this posting.

Nice-to-Have DevOps Skills
While not required, familiarity with the following DevOps practices would be beneficial:
- Infrastructure as Code (e.g., Terraform, CloudFormation)
- Containerisation (e.g., Docker)
- CI/CD pipelines
- Monitoring and logging systems
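Illustrative Examples

For illustration only: a minimal sketch of the kind of weekly orchestration this role would own, assuming Apache Airflow (2.4+) as the scheduler. Every name below (the DAG id, the task names, and the extract/validate/load bodies) is a hypothetical placeholder, not the client's actual stack.

```python
# A minimal weekly pipeline sketch, assuming Apache Airflow 2.4+.
# All ids and task bodies are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_adverity(**context):
    """Pull the latest campaign data from Adverity (placeholder)."""
    # A real build would call the Adverity API or a managed export,
    # keyed to the run date so weekly loads stay idempotent.
    ...


def validate_batch(**context):
    """Run quality checks before anything lands in the warehouse (placeholder)."""
    ...


def load_warehouse(**context):
    """Load the cleaned batch into the warehouse (placeholder)."""
    ...


with DAG(
    dag_id="weekly_marketing_ingest",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@weekly",                # weekly cadence, per the project brief
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_adverity", python_callable=extract_adverity)
    validate = PythonOperator(task_id="validate_batch", python_callable=validate_batch)
    load = PythonOperator(task_id="load_warehouse", python_callable=load_warehouse)

    extract >> validate >> load  # extract, then validate, then load
```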
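And a library-agnostic sketch of the kind of data quality rules the role covers. In practice these would likely be encoded in Great Expectations, Deequ, or similar; the column names and rules here are invented for illustration.

```python
# Invented column names and rules; illustrative only.
import pandas as pd


def validate_campaign_batch(df: pd.DataFrame) -> list[str]:
    """Return human-readable failures; an empty list means the batch passes."""
    failures: list[str] = []

    # Columns the downstream ROI calculation depends on
    required = {"campaign_id", "date", "spend", "impressions", "clicks"}
    missing = required - set(df.columns)
    if missing:
        failures.append(f"missing columns: {sorted(missing)}")
        return failures  # no point in row-level checks without the schema

    if df["campaign_id"].isna().any():
        failures.append("null campaign_id values")
    if (df["spend"] < 0).any():
        failures.append("negative spend values")
    # Clicks can never exceed impressions on the same row
    if (df["clicks"] > df["impressions"]).any():
        failures.append("rows with clicks > impressions")

    return failures


# Usage: fail the load rather than ship a bad batch to analysts.
# problems = validate_campaign_batch(weekly_df)
# if problems:
#     raise ValueError(f"batch failed quality checks: {problems}")
```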