Data Engineer - (REMOTE) Brazil
About Remobi:
We are building the world's greatest community of remote technologists!
Today, organizations that understand the value of remote working will reap the rewards.
It doesn't just give team members a healthier work-life balance; it gives organizations the opportunity to access the brightest minds in the world.
Our clients access our community to build or extend their existing teams with remote, distributed software engineering experts - the best in class.
Join our Remobi community to have access to meaningful, innovative freelance projects and play a key role in shaping how companies operate.
Job Summary:
As a Data Engineer, you will be responsible for the design, development, and maintenance of our data systems.
You will work closely with cross-functional teams to ensure the efficient collection, storage, and processing of data.
Key Responsibilities:
Design, develop, and maintain scalable data pipelines and ETL processes using Fivetran and other ETL tools.
Implement and manage data models using dbt to ensure data accuracy and consistency.
Write efficient, maintainable SQL code to extract, transform, and load (ETL) data from various sources.
Develop and maintain data infrastructure on AWS, including S3, Redshift, and other relevant services.
Collaborate with data analysts, data scientists, and other stakeholders to understand data requirements and deliver solutions.
Perform data quality checks and ensure data integrity across different platforms.
Monitor and optimize the performance of data systems and pipelines.
Utilize Python for data manipulation, automation, and integration tasks.
Track and manage project tasks and progress using Jira.
Stay up-to-date with emerging data engineering technologies and best practices.
Qualifications:
Proven experience as a Data Engineer or in a similar role.
Strong proficiency in SQL and experience with relational databases.
Experience with dbt (data build tool) for data transformation and modeling.
Hands-on experience with Fivetran or other ETL tools.
Proficiency in Python for data-related tasks.
Experience with AWS services such as S3, Redshift, Lambda, and Glue.
Solid understanding of data modeling concepts and techniques.
Familiarity with Jira for project management and issue tracking.
Excellent problem-solving skills and attention to detail.
Strong communication skills and the ability to work collaboratively in a team environment.
Preferred Qualifications:
Knowledge of other programming languages such as Java or Scala.
Familiarity with data visualization tools like Tableau or Power BI.
Experience with real-time data processing frameworks like Apache Kafka or Spark.