Senior Data Engineer
Role Overview

We are seeking a Senior Data Engineer to design, build, and operate production-grade data pipelines that support our client's trading and analytics platforms. This is a hands-on senior engineering role focused on:

- Developing Python-based data pipelines using Dagster
- Extracting data from internal and external systems
- Landing and integrating data into our enterprise data platform
- Integrating batch and real-time data flows using Kafka
- Persisting and managing datasets within AWS environments
- Ensuring production stability, reliability, and engineering excellence

In addition to strong technical execution, this role requires seniority in shaping delivery. The successful candidate will help break down complex data initiatives into structured technical work, uphold engineering standards, and safeguard production quality.

Key Responsibilities

Data Pipeline Engineering

- Design, build, and maintain scalable Python-based ETL/ELT pipelines using Dagster.
- Extract data from operational systems, trading platforms, APIs, databases, and third-party sources.
- Transform and standardise datasets for integration into the enterprise data platform.
- Land curated, validated datasets into AWS-backed storage and downstream data systems.
- Develop ingestion processes for both batch and streaming sources.
- Integrate real-time data pipelines using Kafka where appropriate.
- Ensure strong data validation, lineage, traceability, and observability across pipelines.
- Optimise performance, scalability, and cost efficiency.

Enterprise Data Platform Contribution

- Align new datasets with enterprise data models and governance standards.
- Ensure schema consistency, data quality controls, and secure access patterns.
- Contribute to the scalability and evolution of the enterprise data platform through sound engineering practices.
- Partner with downstream analytics and development teams to enable reliable data consumption.
Production Ownership & Stability

- Own the operational reliability of data pipelines in production.
- Improve monitoring, alerting, logging, and operational visibility.
- Lead incident response for data pipeline failures and perform root cause analysis.
- Drive preventative improvements to reduce operational risk and increase resilience.
- Promote disciplined deployment and release practices.

Technical Delivery & Engineering Leadership

- Break down large data onboarding and integration initiatives into clear, executable technical tasks.
- Define technical scope, sequencing, and implementation strategies.
- Identify risks and propose mitigation plans early in the delivery cycle.
- Act as a senior code reviewer and gatekeeper for merges into production branches.
- Enforce best practices in testing, CI/CD, version control, and documentation.
- Mentor engineers and guide design decisions toward long-term stability and scalability.
- Help steer the team toward high-quality, maintainable solutions aligned with enterprise standards.

Required Technical Skills

- Strong professional experience building Python data pipelines in production environments.
- Hands-on experience with Dagster (or comparable orchestration frameworks).
- Experience extracting and integrating data from heterogeneous source systems.
- Experience working with Kafka for event-driven or streaming architectures.
- Solid experience operating within AWS environments (e.g., S3 and related services).
- Strong SQL skills and data modelling experience.
- Experience supporting and operating production systems.
- Familiarity with CI/CD pipelines and modern version control workflows.