Techstra is seeking an experienced Databricks Engineer with a minimum of 4-7 years of experience (e.g., data warehouse design, business intelligence reporting, Customer Information Systems, Meter Data Management Systems, and/or Outage Management Systems) or, in lieu of experience, 6-9 years of an equivalent combination of education and work experience.
From the day you join, you will hit the ground running, surrounded by amazing people. Here is a look at some of the expectations and responsibilities for this position:
Responsibilities:
- Design, build, and manage scalable ETL/ELT/Data pipelines using Databricks
- Integrate Databricks with cloud services
- Ingest structured and unstructured data from multiple sources (e.g., Kafka, REST APIs, databases, cloud storage).
- Write optimized Spark SQL or PySpark code for batch and streaming data transformations.
- Implement data cleansing, validation, and enrichment processes.
- Optimize jobs for performance, memory management, and cost-efficiency
- Create technical design documents
- Maintain the solution code base
- Develop technical and procedural documentation, as required
Experience & Qualifications:
- Bachelor's degree in computer science or a related discipline and 4-7 years of experience (e.g., data warehouse design, business intelligence reporting, Customer Information Systems, Meter Data Management Systems, and/or Outage Management Systems) or, in lieu of experience, 6-9 years of an equivalent combination of education and work experience
- Strong knowledge of business practices, processes, data and applications
- Strong problem-solving and analytical ability
- Excellent communication skills (written and verbal)
- Possess expertise in cloud technologies
- Databricks certification preferred
- Proficient in the Azure cloud platform and its applications
- Strong experience developing Databricks solutions, including Databricks notebooks and coding
- Experience with Databricks technologies, including Databricks warehouse expertise
- Experience deploying and configuring Databricks from the ground up
- Experience with ODI, IDMC, ADF, or Synapse, or familiarity with Oracle GoldenGate, Cosmos DB, or Hyperscale DB
- Proficient with big data technologies such as Spark, Python, and PySpark
- Experience cleaning, preprocessing, and transforming data into readable formats for analysis
- Experience writing technical design specifications for data integrations and data transformations with structured and unstructured data.
- Experience with Scrum development methodology
- Experience writing business cases and capturing, analyzing, and translating business needs into functional IT requirements
- Experience developing data integrations and data transformations with structured and unstructured data.
- Application re-engineering, data modeling, and performance tuning; architecture and design for application disaster recovery and high availability
- Familiarity with HP ALM, Microsoft Azure DevOps, Customer Information Systems (for example: CIMs, CC&B, SAP CRM&B), AMI Meter Data Management Systems (for example: Oracle Utilities MDM, Itron IEE), Geographic Information Systems (for example: GE Smallworld), Outage Management Systems, Itron/SSN UtilityIQ, Xylem/Sensus AMI, and OSIsoft PI
Location: This position is hybrid, with three days in office and two days remote, located in Oakbrook Terrace, IL.