Job Description

Responsibilities:
•Gather business logic, SQL queries, and data sources from various business units (BUs).
•Develop solutions for data validation, data cleansing, and data accuracy improvement.
•Support the Data Engineering team in preparing and integrating data into the Data Core, ensuring high-quality data availability.
•Identify opportunities for automation and process improvements in data quality management.
•Develop and maintain robust data pipelines to ingest, process, and transform customer data from various sources (web, mobile, CRM, third-party data) into formats suitable for the Customer Data Platform solution.
•Conduct meetings with users to understand their data requirements, and design databases based on that understanding and the requirements, with performance in mind.
•Implement identity resolution strategies to unify and deduplicate customer profiles (a minimal sketch follows this list).
•Develop big data solutions for batch processing and near real-time streaming.
•Own the end-to-end ETL/ELT process framework, from data sources through to the Customer Data Platform.
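To illustrate the identity-resolution responsibility above, the following is a minimal PySpark sketch rather than a prescribed implementation. The column names (email, phone, updated_at) and storage paths are assumptions for the example: records are normalised on their match keys, and only the most recently updated record per key is kept.

```python
# Minimal identity-resolution sketch (illustrative only; column names and
# paths are assumptions, not part of the actual platform).
from pyspark.sql import SparkSession, functions as F, Window

spark = SparkSession.builder.appName("identity-resolution-sketch").getOrCreate()

raw = spark.read.parquet("s3://example-bucket/raw/customers/")  # hypothetical source

# Normalise match keys so trivially different values (case, punctuation) collide.
normalised = (
    raw.withColumn("email_norm", F.lower(F.trim(F.col("email"))))
       .withColumn("phone_norm", F.regexp_replace(F.col("phone"), r"[^0-9]", ""))
)

# Keep the most recently updated record per (email, phone) match key.
w = Window.partitionBy("email_norm", "phone_norm").orderBy(F.col("updated_at").desc())
golden = (
    normalised.withColumn("rn", F.row_number().over(w))
              .filter(F.col("rn") == 1)
              .drop("rn")
)

golden.write.mode("overwrite").parquet("s3://example-bucket/unified/customers/")  # hypothetical target
```

In practice, deterministic keys like these are often combined with probabilistic or graph-based matching; the sketch only shows the exact-match case.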
Qualifications:
•2+ years of experience in Data Engineering, Data Quality, or Data Management.
•Strong expertise in SQL, data reconciliation, and validation techniques.
•Experience with data governance frameworks, policies, and regulatory compliance.
•Data Pipeline Development: Strong experience building and maintaining data pipelines using tools like Apache Airflow, Apache Kafka, or similar (see the minimal Airflow sketch at the end of this posting).
•Data pipeline design patterns: real-time end-to-end and non-real-time.
•Database Technologies: Proficiency in SQL and NoSQL databases (e.g., MSSQL, PostgreSQL, MongoDB, Oracle, ClickHouse).
•Experience in Python is a must.
•Experience with PySpark or Spark SQL for distributed data processing is a plus.
•Experience with data cleansing and data quality best practices is a plus.
•Knowledge of machine/statistical learning and data mining is a plus.

Location
Bangkok
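For context on the orchestration tooling named in the qualifications, below is a minimal Apache Airflow DAG sketch. The DAG id, schedule, and the extract/validate/load callables are placeholders chosen for illustration and are not part of this role's actual stack.

```python
# Minimal Apache Airflow DAG sketch (illustrative; the task bodies, DAG id,
# and schedule below are assumptions, not a prescribed design).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**_):
    # Placeholder: pull customer records from a source system (web, mobile, CRM).
    pass


def validate(**_):
    # Placeholder: run data-quality checks (nulls, duplicates, value ranges).
    pass


def load(**_):
    # Placeholder: write validated records into the Data Core / CDP.
    pass


with DAG(
    dag_id="customer_data_quality_pipeline",  # hypothetical DAG id
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_validate = PythonOperator(task_id="validate", python_callable=validate)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_validate >> t_load
```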