
Reporting to the Head of Data Management, the Senior Data Operations Engineer is responsible for leadership and hands-on maintenance of the day-to-day operations for our Azure-based Enterprise Data Architecture. Responsibilities include the processing and movement of data from source systems through the Data Lake, Data Warehouse, and Marts, and ultimately to consuming systems. This role will build on existing tools and workflows, evolving the control framework, daily monitoring, and triage processes into a best-in-class Data Architecture Operations function.

The primary role will be to support the firm’s day-to-day data operations while working on projects to enhance the functionality and reliability of the overall data infrastructure.
This role will work cross-functionally, so the engineer must be process-minded and innately collaborative. Key stakeholders include other data team members, IT Infrastructure, Product Management, Project Management, and Data Stewards. As a member of the Data Analytics and Reporting Team (DART), the role will work alongside Database Developers.

This position is a mix of project-based and production support work, with an emphasis on building a robust, sustainable enterprise data environment. In particular, a key priority is ensuring the stability and scalability of the architecture and environment.

Responsibilities:

  • Collaborate with data engineers to optimize application logic and functionality in the Azure Data Lake Storage Gen2, Data Factory, Databricks, and SQL Database environments.
  • Maintain working knowledge of the firm’s data warehouse and mart schemas.
  • Represent the Data Team on the Change Control Board, assessing and communicating the impact of proposed change controls, and act as liaison between the Data Team and sponsors of change controls that could affect the Data Architecture or Data Team.
  • Work on cross-functional teams to represent the Data Team on corporate projects.
  • Work with Shared Services peers to troubleshoot issues and propose solutions.
  • Support compliance with data stewardship standards and data security procedures.
  • Apply proven communication and problem-solving skills to resolve support issues as they arise.

Qualifications:
The ideal candidate will have prior experience in a data engineering, cloud-based data architecture, or DevOps role. A Bachelor’s degree in Computer Science, Software Development, Database Management, or Information Systems, or equivalent experience, is required. Other qualifications include:

  • Demonstrated experience monitoring and optimizing data architectures
  • In-depth understanding of data management (e.g., permissions, security, and monitoring).
  • Experience with languages such as SQL and Python; Scala is a plus.
  • Knowledge of software development best practices.
  • Excellent analytical and organization skills.
  • Ability to work effectively both in a team and independently.
  • Strong written and verbal communication skills.

Preferred applicants will also have:

  • Expertise in database development projects and ETL processes.
  • Experience in an agile SDLC environment.
  • Experience planning and implementing QA, testing, and data warehousing.
  • Experience with Microsoft Azure Data Lake, Data Factory, and SQL Database products, or equivalent products from other cloud services providers (e.g., AWS Elastic MapReduce, AWS Data Pipeline, Amazon RDS).
  • Experience designing and developing architectures for efficient and scalable data transformations, inbound to the EDW and outbound to consuming data stores and systems, within Azure and the BP Enterprise Data Architecture.
  • Ability to provide subject matter expertise on projects and issues that encompass a wide range of internal and external systems (core banking, wealth, trust, data warehousing), components, and processes.