• Partner with analysts to build scalable systems that help unlock the value of data from a wide range of sources such as backend databases, event streams, and marketing platforms
    • Consult with our Product and Engineering teams on the creation of new data in the production environment
    • Create company-wide alignment through standardized metrics
    • Promote dimensional data models as a common language for communicating across the organization
  • Manage the complete data stack from ingestion through data consumption
    • Connect our teams and their workflows to centralized and secure databases
    • Build tools to increase transparency in reporting company-wide business outcomes
    • Define and promote data engineering best practices
    • Design scalable data solutions leveraging cloud data technologies, preferably in AWS
    • Help define data quality and data security frameworks to measure and monitor data quality across the enterprise
  • Excellent problem-solving and critical-thinking skills to meet complex data challenges and requirements in a fast-paced, rapidly changing environment

Along with the responsibilities and competencies specified above, we are looking for an individual who possesses a positive, action-oriented attitude and understands the importance of taking initiative within a team environment.

  • 8+ years of progressive professional experience preferred
  • Top-notch SQL, including statistical/window functions and complex data types
    • Expert in relational technology, data modeling, and dimensional modeling
    • Expert in at least two database engines, preferably MySQL, Snowflake, or Athena
    • Metadata-driven and database-centric design concepts
    • Database performance tuning
  • Data transformations
    • Expert in ETL and ETL tools, including Airflow/Prefect and dbt
    • ELT and schema-on-read concepts
    • Data ingestion tools, such as Kafka and Singer
    • At least one programming language, preferably Python
    • Unix/Linux scripting, such as bash
    • Experience consuming APIs, e.g., via curl
    • Experience achieving performance through parallelism
    • DAG-based workflow orchestration
  • Experience with cloud-based infrastructure, particularly AWS
    • Cloud storage, such as S3
    • Data storage formats, such as Parquet and ORC
    • Experience with external tables
    • Unstructured and semi-structured data types, such as JSON
  • Data Analytics
    • Experience with at least one visualization tool, preferably Looker, Tableau, or Periscope
  • Superb communication skills
  • BS/MS degree in Engineering, Mathematics, Physics, Computer Science, or equivalent experience

Bonus Points

  • Big data tools and engines, such as Glue, DMS, and Redshift
  • Enterprise architecture and enterprise data architecture (data modeling and enterprise dimensional modeling)
  • Project and change management skills, especially experience working on an Agile (Scrum, Kanban) team focused on sprint-by-sprint deliveries

Aviada ranked #1 in Best Place to Code

Best Place to Code® is operated by Software Guru; it identifies and awards companies that strive to offer the best working conditions to IT professionals.