As a Senior Data Engineer, you will design and build data pipelines, ensure data quality, and improve pipeline reliability while collaborating with cross-functional teams.
At Kpler, we are dedicated to helping our clients navigate complex markets with ease. By simplifying global trade information and providing valuable insights, we empower organisations to make informed decisions in commodities, energy, and maritime sectors.
Since our founding in 2014, we have focused on delivering top-tier intelligence through user-friendly platforms. Our team of over 700 experts from 35+ countries works tirelessly to transform intricate data into actionable strategies, ensuring our clients stay ahead in a dynamic market landscape. Join us to leverage cutting-edge innovation for impactful results and experience unparalleled support on your journey to success.
You will join a data-intensive product team responsible for building and operating core data pipelines and backend systems that model and deliver insights on global refinery operations and economics. The team works at the intersection of data engineering, backend engineering, and data science, transforming diverse industrial and market data into reliable, production-grade datasets used by customers and internal teams.
This role is a senior individual contributor position, with significant ownership over data architecture, pipeline reliability, and engineering standards. You will help shape how data is engineered, validated, and delivered across the Refineries product.
Role Description - What you will work on
- Design, build, and evolve batch and streaming data pipelines that power refinery modeling, analytics, and customer-facing products.
- Own complex data ingestion, transformation, validation, and delivery workflows across multiple data sources.
- Drive improvements in pipeline reliability, scalability, and observability, including retries, backfills, data quality checks, and monitoring.
- Lead schema design, versioning, and evolution strategies to support stable, long-lived data contracts.
- Build and maintain backend components and APIs used to serve data to downstream systems and applications.
- Partner closely with data scientists, product managers, and other engineers to translate domain requirements into robust technical solutions.
- Continuously improve existing systems as data volume, complexity, and product expectations grow.
What we expect from you
- Deliver high-quality, well-tested, and maintainable code, setting a strong example for engineering best practices.
- Own significant parts of the data platform end-to-end, from ingestion to production delivery.
- Make architectural contributions to data processing, storage, and delivery patterns.
- Contribute to and improve CI/CD pipelines, automation, and operational tooling.
- Instrument services and pipelines with metrics, logs, and alerts, and help define operational standards.
- Play an active role in incident response, root-cause analysis, and long-term system improvements.
- Review code, mentor other engineers, and help reinforce shared coding and architectural standards across the team.
Nice to have
- Exposure to Kafka, Spark, or streaming architectures.
- Experience with Kubernetes.
- Familiarity with event-driven or microservices architectures.
- Exposure to analytical datastores (e.g. Elasticsearch).
- Full-stack awareness (e.g. ability to read, review, and provide feedback on frontend or API-layer pull requests, without being a primary frontend contributor).
- Prior experience working on data products in energy, commodities, or industrial domains.
We are a dynamic company dedicated to nurturing connections and innovating solutions to tackle market challenges head-on. If you thrive on customer satisfaction and turning ideas into reality, then you’ve found your ideal destination. Are you ready to embark on this exciting journey with us?
We make things happen
We act decisively and with purpose, going the extra mile.
We build together
We foster relationships and develop creative solutions to address market challenges.
We are here to help
We are accessible and supportive to colleagues and clients with a friendly approach.
Our People Pledge
Don’t meet every single requirement? Research shows that women and people of color are less likely than others to apply if they feel they don’t match 100% of the job requirements. Don’t let the confidence gap stand in your way; we’d love to hear from you! We understand that experience comes in many different forms and are dedicated to adding new perspectives to the team.
Kpler is committed to providing a fair, inclusive and diverse work environment. We believe that different perspectives lead to better ideas, and better ideas allow us to better understand the needs and interests of our diverse, global community. We welcome people of different backgrounds, experiences, abilities and perspectives, and we are an equal opportunity employer.