Job Location: New York, NY, USA
Contribute to a team responsible for creating and maintaining large-scale Big Data systems and data environments used to ingest and process large data sets and provide actionable recommendations through data warehousing and business intelligence. Responsibilities include:

- Design, develop, test, and maintain AdTech software that extracts, transforms, and loads large volumes of data.
- Develop software systems in an Agile development environment and on big data platforms, including Spark, Hadoop, and AWS EMR, using SQL, Presto, Java, and Python.
- Use Snowflake and Databricks for ETL; Datadog and Grafana for monitoring; and Terraform and Jenkins for CI/CD.
- Develop microservices API systems to support overall product development.
- Store relational data in MySQL to support API and data processing applications.
- Debug functional and performance issues on big data platforms and software modules.
- Execute data processing software concurrently using multithreading.
- Write code and scripts to extract MRM ad logs from FreeWheel Big Data platforms, and load MRM ad logs and campaign data.
- Perform audience targeting; ingest audience, identity, and segment data.
- Create dashboards and monitors on Datadog to ensure 24x7 availability of critical software deployments.
- Build new software products and web frontend frameworks.
- Analyze product specifications, write technical specs, create monitoring dashboards, develop test suites, design workflows, and set up database schemas and tables.
- Interface with global engineering, operations, services, and business operations teams to execute proofs of concept and incorporate new requirements.
- Improve system performance and ensure availability and scalability of services.
- Provide production support for data processing systems running on AWS cloud and Snowflake.
- Troubleshoot data processing problems on distributed systems.

Position is eligible to work remotely one or more days per week, per company policy.
REQUIREMENTS: Bachelor's degree, or foreign equivalent, in Computer Science, Engineering, or a related technical field, and two (2) years of experience developing software in an Agile development environment using SQL, Presto, Java, and Python on big data platforms including Spark, Hadoop, or AWS EMR; using Snowflake or Databricks for ETL; using Datadog or Grafana for monitoring; using Terraform or Jenkins for CI/CD; storing relational data in MySQL to support data processing applications; and debugging functional and performance issues on big data platforms and software modules.

BENEFITS: $101,566.00 - $195,000.00 per year

PLEASE SEND RESUMES TO JOB_CANDIDATES@COMCAST.COM AND REFERENCE JOB ID 6202 WHEN APPLYING FOR THIS POSITION.