Job Description:
Pay Range: $60/hr - $65/hr
Responsibilities:
- Engage with business stakeholders to translate business requirements into technical data engineering tasks and project deliverables.
- Design and implement data pipelines using Azure Data Factory, Databricks, and Azure Synapse to process, transform, and load data from various sources into the Azure ecosystem.
- Create data models to support data integration, analytics, and reporting needs.
- Develop and manage ETL pipelines using Azure Data Factory and SQL to extract, transform, and load data into data lakes and data warehouses.
- Automate data processing workflows using Databricks, Azure Functions, or Logic Apps for real-time and batch data integration.
- Monitor and optimize pipeline performance to ensure efficiency and minimize downtime.
- Implement and manage cloud data solutions, ensuring high availability, disaster recovery, and compliance within the Azure environment.
- Integrate data from multiple sources such as on-premises databases, cloud-based applications, APIs, and third-party data sources into Azure services.
- Develop data transformation workflows using Azure Databricks, Apache Spark, and SQL for advanced analytics and reporting.
- Ensure compliance with data privacy laws (e.g., GDPR, CCPA).
- Implement CI/CD pipelines using Azure DevOps or GitHub Actions to automate the deployment and testing of data engineering solutions.
- Continuously monitor and refine workflows for automated data processing and integration using triggers, version control, and automated testing.
- Create detailed technical documentation for data architectures, pipelines, processes, and operational workflows.
- Lead the development team to deliver on time and help team members get unblocked from any issues or challenges.
- Perform PR reviews and validate changes for accuracy before check-in.
Skills:
- PostgreSQL, MySQL, SQL Server, Azure Data Factory, Azure Synapse, Logic Apps, Fabric, ADLS (Azure Data Lake Storage), Azure Databricks, Azure Cosmos DB