Role: Consumer Data Steward
Highlights: SAP a plus; looking for experience with POS aggregator data (IRI, Nielsen, Circana, Kantar)
Location: REMOTE in LATAM (Brazil, Chile, El Salvador, Peru, Costa Rica)
Shift/working hours: 8AM-5PM EST
Special notes: The most highly desired skillset is experience in the consumer domain, specifically with consumer data from Nielsen, Kantar, etc. This team will focus on 2nd- and 3rd-party sales data.
Required Skills:
- 4+ years of experience in Data Governance, Data Management, or Data Stewardship
- General understanding of data governance concepts and processes
- Strong data management background: understands data, how to ingest it, proper data use/consumption, data quality, and stewardship
- Intermediate SQL knowledge and experience (ability to write SQL queries)
- Experience working with datasets from POS aggregators such as IRI, Nielsen, Circana, Kantar, etc.
- Advanced data analytical skills, including identification, interpretation, and documentation of data patterns, standards, and quality rules
- Strong communication skills and Excel proficiency, including experience profiling data with Excel
- Experience researching, validating, profiling, and cataloging datasets
- Ability to own sub-tasks of a project with little supervision
- Excellent written and oral communication

Desired Skills:
- SAP domain experience
- Experience with EDF, Axon, Collibra

Daily Activities/Responsibilities:
A Fortune 100 client is looking to add a Data Steward Advisor to their team. This group is focused on 2nd- and 3rd-party data, mainly coming from Nielsen, Circana, Kantar, etc. As a data steward, you will work with multiple team members and levels of leadership to create a scorecard for existing data. To achieve this, resources on this team will work directly alongside internal programs to understand why the requested 2nd- and 3rd-party data is needed and how it will be used, so that Data Stewards are equipped to conduct data profiling to identify any gaps in accuracy or quality. Data Stewards will then ingest the accurate data into the foundation and build data quality rules on top.
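As a rough illustration of the SQL-based profiling work described above (a minimal sketch only; the table and column names are hypothetical and not taken from the posting), a data quality check over POS-style sales data might look like:

```python
import sqlite3

# Hypothetical 3rd-party POS sales table; schema and values are illustrative only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE pos_sales (
    retailer     TEXT,
    upc          TEXT,
    week_ending  TEXT,
    units_sold   INTEGER,
    dollar_sales REAL
);
INSERT INTO pos_sales VALUES
    ('RetailerA', '012345678905', '2024-06-01', 120, 359.88),
    ('RetailerA', NULL,           '2024-06-01', 40,  99.60),
    ('RetailerB', '012345678905', '2024-06-01', -5,  -14.95);
""")

# Profile per retailer: total rows, rows missing a UPC, and rows with
# negative unit counts -- the kinds of accuracy/quality gaps a steward
# would flag before ingesting data into the foundation.
query = """
SELECT retailer,
       COUNT(*)                                         AS rows_total,
       SUM(CASE WHEN upc IS NULL   THEN 1 ELSE 0 END)   AS missing_upc,
       SUM(CASE WHEN units_sold < 0 THEN 1 ELSE 0 END)  AS negative_units
FROM pos_sales
GROUP BY retailer
ORDER BY retailer;
"""
for row in conn.execute(query):
    print(row)
# ('RetailerA', 2, 1, 0)
# ('RetailerB', 1, 0, 1)
```

In practice the findings from queries like this would feed the data scorecard, with each rule (completeness, validity, etc.) tracked per dataset.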