Data Engineer II

Location: Tempe, AZ

Make Next Happen Now. For over 30 years, The Company has helped innovative companies and their investors move bold ideas forward, fast. The Company provides targeted banking services to companies of all sizes in innovation centers around the world.

The Information Management team at The Company is responsible for delivering data solutions that support all lines of business across the organization. This includes providing data integration services for all batch data movement, managing and enhancing the data warehouse, data lake, and data marts, and supporting analytics and business intelligence customers.

Do you get excited when you see data? Are you constantly looking for value in data? If so, we are looking for you. As a Data Engineer, you will build, extend, and enhance our existing enterprise data warehouse. You will work closely with business teams and other application owners to understand the core functionality of banking, credit, risk, and finance applications and their associated data. You will build data pipelines, tools, and reports that empower analysts, product managers, and business executives.

Key Responsibilities:

  • Design and build ETL jobs to support The Company’s enterprise data warehouse.
  • Write extract-transform-load (ETL) jobs using standard tools such as Spark, Hadoop, and AWS Glue to calculate business metrics (a minimal sketch of such a job follows this list).
  • Partner with business teams to understand requirements, assess the impact on existing systems, and design and implement new data provisioning pipelines for the finance and external reporting domains.
  • Apply AWS cloud and big data technologies to design, implement, and build our enterprise data platform (EDP).
  • Design and develop data models for SQL/NoSQL database systems
  • Monitor and troubleshoot operational or data issues in the data pipelines
  • Drive architectural plans and implementation for future data storage, reporting, and analytic solutions
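To make the first two responsibilities concrete, here is a minimal PySpark sketch of the kind of ETL job described above. The source path, column names, and the metric itself are hypothetical illustrations, not an actual Company pipeline.

    # Minimal PySpark ETL sketch: compute a daily transaction-volume metric.
    # All paths, schemas, and column names below are hypothetical placeholders.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("daily_txn_metrics").getOrCreate()

    # Extract: read raw transactions from a (hypothetical) data lake location.
    txns = spark.read.parquet("s3://example-data-lake/raw/transactions/")

    # Transform: keep settled transactions and aggregate volume per day and product.
    daily_volume = (
        txns.filter(F.col("status") == "SETTLED")
            .withColumn("txn_date", F.to_date("settled_at"))
            .groupBy("txn_date", "product_line")
            .agg(
                F.count("*").alias("txn_count"),
                F.sum("amount_usd").alias("total_volume_usd"),
            )
    )

    # Load: write the metric table to the marts layer, partitioned by date.
    (daily_volume.write
        .mode("overwrite")
        .partitionBy("txn_date")
        .parquet("s3://example-data-lake/marts/daily_txn_volume/"))

    spark.stop()

Partitioning the output by date lets downstream reporting and BI queries prune partitions instead of scanning the full table.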


Basic Qualifications

  • Bachelor’s degree in Computer Science, Mathematics, Statistics, Finance, related technical field, or equivalent work experience
  • 5+ years of professional experience, including 5+ years of relevant work in analytics, data engineering, business intelligence, or a related field
  • 2+ years of experience implementing big data processing technologies (AWS, Azure, or GCP; Hadoop; Apache Spark; Python); an understanding of Redshift or Snowflake is a plus
  • Experience writing and optimizing SQL queries in a business environment with large-scale, complex datasets
  • Detailed knowledge of databases such as Oracle, DB2, and SQL Server; of data warehouse concepts and technical architecture; of infrastructure components; and of ETL, reporting, and analytics tools and environments
  • Hands-on experience with major ETL tools such as Informatica IICS, SAP BODS, and/or cloud-based ETL tools
  • Hands-on experience with scheduling tools such as Redwood, Control-M, or Tidal, plus a good understanding of and experience with reporting tools such as Tableau and BusinessObjects (BOXI)
  • Hands-on experience with cloud technologies (AWS, Google Cloud, or Azure), including data ingestion tools (both real-time and batch), CI/CD processes, cloud architecture, and big data implementations
  • AWS certification and working knowledge of Glue, Lambda, S3, Athena, and Redshift are a plus

Preferred Qualifications

  • Graduate degree in Computer Science, Mathematics, Statistics, Finance, or a related technical field
  • Strong ability to effectively communicate with both business and technical teams
  • Demonstrated experience delivering actionable insights for a consumer business
  • Coding proficiency in at least one modern programming language (Python, Ruby, Java, etc.)
  • Basic experience with cloud technologies
  • Experience in the banking domain is a plus

To apply, send a resume to [email protected].
