About us
Our mission is to transform how private markets use business intelligence. The Lantern vision is for GPs to use data to drive better and faster decision-making, through deeper insights built on centralised, validated data.
To support our ambitious plans, we are looking for an exceptional Data Engineer to join us as we innovate private capital software, reshape its landscape, and bring our products to market.
Our Data Stack
- Python - which powers our EL (extract and load) services, alongside a modern data stack (e.g. Fivetran, Databricks)
- Snowflake and PostgreSQL - to run our databases
- dbt Core - which hosts our extensive library of data models
- Azure - which hosts our broader infrastructure, including Databricks
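
To give a flavour of the kind of pipeline work involved, here is a minimal extract-and-load sketch in Python, the language that powers our EL services. The API endpoint, staging table, and environment variable below are hypothetical placeholders for illustration, not our actual services:

```python
# Illustrative only: a minimal extract-and-load step. The endpoint, table,
# and DATABASE_URL environment variable are hypothetical placeholders.
import os

import psycopg2
import requests
from psycopg2.extras import execute_values

API_URL = "https://api.example.com/v1/funds"  # hypothetical source API


def extract(url: str) -> list[dict]:
    """Pull raw JSON records from a source API."""
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    return response.json()


def load(records: list[dict]) -> None:
    """Bulk-insert raw records into a Postgres staging table."""
    with psycopg2.connect(os.environ["DATABASE_URL"]) as conn:
        with conn.cursor() as cur:
            execute_values(
                cur,
                "INSERT INTO staging.funds (fund_id, name, aum) VALUES %s",
                [(r["id"], r["name"], r["aum"]) for r in records],
            )


if __name__ == "__main__":
    load(extract(API_URL))
```

In a stack like ours, dbt Core would then model on top of staging tables loaded this way.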
About you
As a Data Engineer, you will join a team with ambitious plans to power Private Market data intelligence and insights. Reporting to the Head of Data and working with our Data Engineering, Data Science, Product, and Development squads, you will become a key member of the team on our growth journey.
Key Responsibilities
- Create and maintain data pipelines
- Assemble large, complex data sets to meet product and business requirements
- Identify, design, and implement internal process improvements, e.g.:
  - automating manual processes
  - optimising data delivery
  - re-designing infrastructure for greater scalability
- Construct the pipelines necessary for efficient extraction, transformation, and loading of data from diverse sources
- Build analytics models or tools that utilise our data pipelines to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics
- Work with stakeholders to assist with data-related infrastructure needs and issues
- Design pipelines with a security-first mindset to protect end-user data
- Work with data and analytics experts to strive for greater functionality in our data systems
Requirements
On-the-job Experience
- 4+ years of experience in a Data Engineering or Analytics Engineering role
- Experience:
  - Manipulating, processing, modelling, and extracting value from large, disconnected datasets
  - Working with relational databases and authoring complex queries
  - Designing and modelling data warehouses
  - Building extract processes from APIs, SFTP, and databases
  - Using project management and documentation software (JIRA and Confluence preferred)
Technical Capabilities
- Expert proficiency in Python, with a strong focus on code quality, unit testing, and E2E testing
- Proficiency in object-oriented development practices
- Experience building and maintaining dbt models
- Advanced SQL knowledge and experience working with a variety of SQL databases (preferably Snowflake and Postgres)
- Experience with version control software (preferably Git and GitHub)
- Familiarity with cloud data services (preferably Azure)
Nice to Have
- Experience within financial services, preferably private equity
- Experience with analytics engineering best practices
- Familiarity with Snowflake performance and modelling patterns
- Experience with data governance, metadata, or lineage tooling
- Experience with orchestration tools such as Dagster or Prefect
- Experience with Snowflake cost optimisation
Character
- Humble – aware of your strengths and open about your learning areas.
- Growth-minded – committed to improving your skills and raising the bar as a team.
- Detail-oriented – accuracy matters, especially with financial data.