Data Engineer - (REMOTE) Brazil

About Remobi:
We are building the world's greatest community of remote technologists! Organizations that understand the value of remote working will reap the rewards: it gives team members a healthier work-life balance and gives companies access to the brightest minds in the world. Our clients tap into our community to build new teams or extend existing ones, all made up of remote, distributed software engineering experts - best-in-class and rapidly deployed without compromising on quality. Join the Remobi community to gain access to meaningful, innovative freelance projects and play a key role in shaping how companies operate.

Job Summary:
As a Data Engineer, you will be responsible for the design, development, and maintenance of our data systems. You will work closely with cross-functional teams to ensure the efficient collection, storage, and processing of data.

The Team:
The Data Platform team develops and maintains centralized data infrastructure, supports ETL and ML operations, and leverages the power of data to create actionable insights that help Imprint grow profitably. The ideal candidate will have a strong background in data maintenance, S3 data lakes, DBT, Terraform, SQL, Python, Snowflake, the Fivetran ETL tool, data modeling, Jira, and AWS.

Key Responsibilities:
- Design, develop, and maintain scalable data pipelines and ETL processes using Fivetran and other ETL tools.
- Implement and manage data models using DBT to ensure data accuracy and consistency.
- Write efficient, maintainable SQL to extract, transform, and load (ETL) data from various sources.
- Develop and maintain data infrastructure on AWS, including S3, Redshift, and other relevant services.
- Collaborate with data analysts, data scientists, and other stakeholders to understand data requirements and deliver solutions.
- Perform data quality checks and ensure data integrity across different platforms.
- Monitor and optimize the performance of data systems and pipelines.
- Utilize Python for data manipulation, automation, and integration tasks.
- Track and manage project tasks and progress using Jira.
- Stay up to date with emerging data engineering technologies and best practices.

Qualifications:
- Proven experience as a Data Engineer or in a similar role.
- Experience or familiarity with development tools such as Airflow (Python), Snowflake (SQL), and GitHub (CI/CD).
- Experience with DBT (Data Build Tool) for data transformation and modeling.
- Hands-on experience with Fivetran or other ETL tools.
- Proficiency in Python for data-related tasks.
- Experience with AWS services such as S3, Redshift, Lambda, and Glue.
- Solid understanding of data modeling concepts and techniques.
- Excellent problem-solving skills and attention to detail.
- Strong communication skills and the ability to work collaboratively in a team environment.

Preferred Qualifications:
- Experience with Terraform is a big plus.
- Experience building CI/CD for data infrastructure.
- Experience with real-time data processing frameworks such as Apache Kafka or Spark.
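To give a flavor of the data-quality work mentioned in the responsibilities, here is a minimal, hypothetical Python sketch of the kind of validation a pipeline step might run before loading a batch downstream. The function name, fields, and rules are illustrative assumptions, not taken from any actual Remobi or client codebase:

```python
from dataclasses import dataclass, field


@dataclass
class QualityResult:
    """Outcome of a batch-level data-quality check."""
    passed: bool
    errors: list = field(default_factory=list)


def check_rows(rows, required_fields, unique_key):
    """Run two basic checks on extracted rows:
    required fields must be non-null, and the key column must be unique."""
    errors = []
    seen_keys = set()
    for i, row in enumerate(rows):
        for name in required_fields:
            if row.get(name) is None:
                errors.append(f"row {i}: missing required field '{name}'")
        key = row.get(unique_key)
        if key in seen_keys:
            errors.append(f"row {i}: duplicate {unique_key} '{key}'")
        seen_keys.add(key)
    return QualityResult(passed=not errors, errors=errors)


# Example: validate a small batch; a real pipeline would fail the load
# (or route bad rows to a quarantine table) when result.passed is False.
rows = [
    {"id": 1, "email": "a@example.com"},
    {"id": 2, "email": None},
    {"id": 1, "email": "c@example.com"},
]
result = check_rows(rows, required_fields=["id", "email"], unique_key="id")
```

In practice such checks are often expressed declaratively (e.g. as DBT tests on models), but the idea is the same: verify integrity at each stage boundary rather than only at the end.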