Big Data Engineer
Key Responsibilities
- Design, create, and manage scalable ETL (extract, transform, load) systems and pipelines for various data sources.
- Manage, improve, and maintain existing data warehouse and data lake solutions.
- Optimize existing data quality and data governance processes to improve performance and stability.
- Build bespoke tools and algorithms for the data science and data analytics teams (and other data-driven teams across the business).
- Work closely with business intelligence teams and software developers to translate strategic objectives into data models.
- Work closely with the IT team to manage the business’s wider infrastructure.
- Explore next-generation data technologies to expand the organization’s capacity and maintain a competitive edge.
Skills and Qualifications
- A degree in computer science or a related field.
- 5+ years’ experience with big data technologies such as Apache Hadoop, Apache Spark, and Apache Kafka.
- 5+ years’ hands-on experience with Python and SQL.
- Critical thinking, excellent communication, teamwork, and problem-solving skills.
- Several years of experience in software development or data management.
- Strong technical background with knowledge of numerous programming languages, and a general love of writing code.
- Experience with relational database management systems such as PostgreSQL and MySQL.
- Understanding of batch and real-time data integration, data replication, data streaming, and data virtualization.