
Analytics Engineer

Join Parachute

Job Location: Austin, TX, USA

Posted on: 2024-12-14T06:32:46Z

Job Description:

Position Title: Analytics Engineer

Location: Austin, TX (Onsite)

Direct Hire

Parachute is not able to provide sponsorship now or in the future.

About Parachute:

Powered by technology and compassionate design, Parachute has reimagined the plasma donation experience into one that is easier and friendlier. Using a simple app, our members can book donations and track earnings from the palm of their hand. This tech-forward approach lets us offer each member a personalized, best-in-class experience that's consistent at each and every visit.

Our vision is to bring an elevated plasma donation experience, grounded in convenience, to markets with smaller populations. This model allows us to positively impact the industry supply chain and help patients gain access to the medication they need.

We have grown from 2 to 28 operations in less than three years and plan to continue our rapid expansion. We are looking for people who share our passion for helping others and are invigorated by the speed at which our startup moves. Come join us as we help the world gain access to more plasma, one donation at a time.

Purpose:

The Analytics Engineer will play a pivotal role in developing and maintaining Parachute's data transformation layer, ensuring data is modeled for easy accessibility and high-impact decision-making. Collaborating closely with stakeholders across Product and Operations teams, this role will turn raw data into clean, reliable datasets that enable self-serve analytics and actionable insights.

Primary Responsibilities:

  • Build and maintain robust, modular data warehouse models using dbt, adhering to best practices in analytics engineering (a minimal sketch follows this list).
  • Collaborate with Product and Operations teams to define business logic and model datasets to meet reporting and analytical needs.
  • Migrate existing data pipelines from Python to dbt, ensuring scalability, reusability, and future-proofing.
  • Own and optimize the transformation layer in the data warehouse to ensure clean, accurate, and well-documented datasets.
  • Design and implement efficient ELT processes to transform raw data into refined, analysis-ready datasets, and support ETL workflows when required.
  • Work closely with stakeholders to identify key metrics, ensure their accuracy, and document them in a semantic layer.
  • Develop interactive dashboards and reports in BI platforms such as Power BI, Tableau, Sigma, and Sisense to enable data-driven decision-making; assist stakeholders with their self-service development; and perform administrative tasks for the BI platforms, such as user management and configuration.
  • Deploy and monitor data pipelines to ensure consistent delivery of data to downstream tools like Power BI or custom dashboards.
  • Validate data warehouse models through thorough testing to ensure accuracy, reliability, and alignment with business needs.
  • Manage deployment processes for data transformations in dev, test, and production environments, ensuring a smooth release cycle.
  • Address and resolve data quality issues reported by stakeholders or detected through monitoring tools.
  • Champion a culture of data governance and documentation, promoting consistency and reliability in data use across the organization.
  • Create clear and concise documentation for all data warehouse models and transformations to improve data literacy across the organization.
  • Evaluate and implement new tools or processes to enhance the data transformation and delivery pipeline.
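
For context on the modular dbt modeling described in the first bullet above, here is a minimal sketch of a dbt mart model. All model and column names (stg_donations, stg_members, fct_donations, payout_amount) are hypothetical, chosen only to illustrate the pattern:

```sql
-- models/marts/fct_donations.sql  (hypothetical model and column names)
-- A minimal dbt mart model: joins two staging models into an
-- analysis-ready fact table with one row per donation.

with donations as (
    select * from {{ ref('stg_donations') }}
),

members as (
    select * from {{ ref('stg_members') }}
)

select
    donations.donation_id,
    donations.member_id,
    members.home_center_id,
    donations.donated_at,
    donations.payout_amount
from donations
left join members
    on donations.member_id = members.member_id
```

Using {{ ref() }} rather than hard-coded table names is what lets dbt build its dependency graph and deploy the same model unchanged across dev, test, and production environments.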

Knowledge, Skills, & Abilities:

  • Advanced proficiency in SQL, with experience designing and optimizing data warehouse models.
  • Deep knowledge of data warehouse modeling best practices and analytics engineering principles (e.g., star schema design, dimensional modeling).
  • Experience with dbt for building and managing data transformations.
  • Familiarity with cloud-based data platforms such as Snowflake (preferred), BigQuery, Redshift, or Databricks.
  • Strong understanding of ELT/ETL processes and how they fit into the analytics engineering workflow.
  • Experience with cloud-based ETL tools such as Azure Data Factory, AWS Glue, or Google Cloud Dataflow.
  • Experience with data visualization tools like Power BI or Tableau to support downstream reporting.
  • Experience implementing data testing frameworks (e.g., dbt tests, Great Expectations); a minimal dbt test example follows this list.
  • Experience with version control and CI/CD tooling, such as GitHub and GitHub Actions, for managing deployment pipelines.
  • Familiarity with semantic layer design for centralized metrics and consistent reporting.
  • Ability to collaborate with cross-functional teams to understand business requirements and translate them into scalable data solutions.
  • Comfortable with agile or lean development practices.
  • Proactive, adaptable, and demonstrates initiative in solving complex problems and driving projects forward.
  • Nice to have: Experience with orchestration tools (e.g., Airflow, Dagster), integration tools like Airbyte and Pipedream, and collaboration platforms such as Airtable.
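
As a concrete example of the dbt testing mentioned above, a singular dbt test is simply a SQL file in the project's tests/ directory; the test passes when the query returns zero rows. The file, model, and column names here are again hypothetical:

```sql
-- tests/assert_payout_amount_non_negative.sql  (hypothetical names)
-- Singular dbt test: dbt reports a failure if this query returns
-- any rows, i.e. if any donation has a negative payout amount.

select
    donation_id,
    payout_amount
from {{ ref('fct_donations') }}
where payout_amount < 0
```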

Education and Experience:

  • Bachelor's degree in Computer Science, Information Systems, or a related technical field is required.
  • A minimum of 3 years' experience in analytics engineering, data engineering, or a similar role focused on data transformation and modeling.