Data Engineer
Job description
We're looking for a hands-on Data Engineer to join a growing data team and support the development of a robust, scalable cloud data platform that underpins analytics, reporting and operational decision-making across the business.
This role sits within the central data function and will play an important part in building and improving the organisation's cloud data infrastructure, delivering reliable pipelines, well-structured data models and automated data workflows.
The environment makes use of modern cloud data technologies such as Databricks, Azure Synapse and Microsoft Fabric, alongside strong SQL-based data modelling and warehouse development.
This role will suit someone who enjoys working across the full data engineering lifecycle: building pipelines, developing warehouse models, improving data quality and helping teams access trusted, well-structured data.
What you'll be doing
- Design, build and maintain robust data pipelines to enable reliable data delivery for analytics, reporting and operational use.
- Develop and optimise data warehouse models aligned to business requirements and analytical workloads.
- Write and optimise SQL for transformation, integration and performance.
- Work with cloud data processing technologies such as Databricks, Synapse or Microsoft Fabric.
- Support Python-based data processing and automation within data pipelines.
- Implement data validation, monitoring and governance to maintain data quality and reliability.
- Contribute to containerised and cloud-based data workloads where appropriate.
- Support CI/CD and DevOps practices that enable controlled, automated deployment of data pipelines.
- Work closely with business stakeholders and analytics teams to ensure data is accessible, reliable and fit for purpose.
Requirements
- Strong hands-on data engineering experience.
- Advanced SQL skills.
- Solid experience in data modelling and data warehousing.
- Experience building and maintaining data pipelines in cloud environments.
- Experience working with cloud data processing platforms such as Databricks, Synapse or Microsoft Fabric.
- Experience using Python for data processing, automation or packaging.
- Familiarity with containerisation and orchestration approaches (e.g. Docker or Kubernetes).
- Experience working with DevOps/CI/CD practices in a data engineering environment.
- Good understanding of data quality, governance and structured data design.
- Comfortable working with stakeholders and translating business requirements into scalable data solutions.
The opportunity
This is an opportunity to join a growing data function where you'll play a key role in shaping and delivering the organisation's cloud data platform, working closely with technical teams and business stakeholders to build scalable, reliable data solutions.