Databricks Data Engineer Lead
We are seeking a Lead Databricks Engineer to drive the design, development, and optimization of data pipelines and analytics solutions on the Databricks Lakehouse platform. This role is ideal for a hands-on technical leader who is passionate about big data technologies, cloud computing, and enabling business insights through scalable data architectures.

Job Description
Requirements:
- Bachelor’s or Master’s degree in Computer Science, Information Technology, Engineering, or a related field;
- 5+ years of experience in Data Engineering or Big Data roles;
- 2+ years of hands-on experience with Databricks, Spark (PySpark or Scala), and Delta Lake;
- Strong knowledge of cloud platforms (AWS, Azure, or GCP) and modern data architectures (Lakehouse, Data Mesh, etc.);
- Proficiency in SQL, Python, and distributed data processing;
- Experience with CI/CD, version control (Git), and DataOps practices in a data environment;
- Deep understanding of data governance, cataloging, and security concepts;
- Experience leading a team or project as a team/tech lead.
Nice to Have:
- Databricks certification (Professional or Architect level);
- Experience with machine learning pipelines and MLOps in Databricks;
- Exposure to streaming technologies (Kafka, Spark Structured Streaming);
- Knowledge of dbt, Airflow, or similar transformation and orchestration tools.
Other skills:
- Excellent written and verbal communication skills in English;
- Ability to work in a global, multicultural, and multinational company;
- Ability to lead conversations with both technical and business representatives;
- Proven ability to work both independently and as part of an international project team.
Responsibilities:
- Lead end-to-end development of data pipelines, ETL/ELT processes, and batch/streaming solutions using Databricks and Apache Spark;
- Design and implement Lakehouse architectures that align with business and technical requirements;
- Collaborate with data scientists, analysts, and engineers to deliver high-performance data products and ML features;
- Define and enforce coding standards, best practices, and performance tuning strategies across Databricks notebooks and jobs;
- Optimize data models in Delta Lake and implement data governance standards using Unity Catalog;
- Manage integration of data sources across cloud platforms (e.g., AWS, Azure, GCP) using native and third-party connectors;
- Lead and contribute to technical reviews and architecture sessions, and mentor less experienced engineers;
- Automate infrastructure deployment with tools such as Terraform, the Databricks CLI, or similar;
- Ensure data platform solutions are secure, compliant, and scalable across global business units.
What We Offer:
- Monthly salary in USD;
- 100% remote work opportunity;
- 10 business days of paid vacation per year (available after 6 months);
- Up to 10 national holidays (US or based on country of residence);
- 5 personal days off (available after 3 months);
- Covered travel expenses when applicable;
- Employee referral program;
- Paid certification programs;
- Individualized personal development plan (PDP);
- Access to an online language learning platform.
Apply now