Data Integration Engineer

Our client is seeking a Data Integration Engineer to join their Guernsey-based team on a permanent, full-time basis. This role offers the opportunity to work in a highly experienced, independent fiduciary and fund administration business specialising in the venture capital sector. You will be responsible for developing and maintaining data pipelines and system integrations within a Microsoft-based ecosystem, ensuring secure, scalable and well-governed data flow across platforms. The role involves working closely with stakeholders to translate complex requirements into efficient and robust technical solutions.

Job Duties:

  • Designing and maintaining end-to-end data pipelines in Microsoft Fabric, following a Medallion architecture
  • Building and supporting system integrations including external APIs, Azure Key Vault, SQL and Lakehouse systems
  • Extending internal libraries with clean, testable Python code and ensuring strong documentation and CI/CD practices
  • Optimising the performance, cost, and security of data pipelines using tools such as Spark pools, Delta format, and data partitioning
  • Contributing to data modelling standards and governance documentation
  • Troubleshooting issues, performing root-cause analysis, and implementing proactive controls and monitoring
  • Documenting solutions clearly and conducting knowledge-sharing sessions with analysts, developers, and stakeholders
  • Collaborating with business units to translate requirements into incremental and testable deliverables

Job Requirements:

  • 3–5 years’ experience in data engineering or software development, ideally within financial services or another regulated industry
  • Strong Python skills including Pandas, PySpark, unit testing, and building reusable libraries
  • Experience with Microsoft Fabric or the Azure data stack (e.g. Azure Synapse, Data Factory, Databricks, ADLS Gen2)
  • Solid understanding of ETL/ELT processes, Medallion or Data Vault architectures, Delta format, and schema evolution
  • Proficiency in SQL (T-SQL, Spark SQL) and in optimising queries on large datasets
  • Familiarity with DevOps practices including Git, CI/CD pipelines (GitHub Actions or Azure DevOps)
  • Experience integrating REST/Graph APIs and event-driven services, with knowledge of OAuth 2.0 and managed identities
  • Working knowledge of data quality frameworks and monitoring tools
  • Degree or equivalent experience in Computer Science, Data Engineering, Mathematics or related field; cloud or data certifications are beneficial

What You’ll Love:

You will be joining a forward-thinking firm that blends deep technical expertise with a collaborative and inclusive culture. With flexible working, comprehensive benefits, and exceptional career development opportunities, you will enjoy the advantages of working for a smaller, independent company that prioritises innovation, wellbeing, and professional growth.

Interested? Register today, confidentially, with one of our friendly and dedicated recruitment specialists.