

Data Engineer

BioCatch

Tel Aviv, IL
  • Job Type: Full-Time
  • Function: Data Science
  • Industry: Cybersecurity
  • Post Date: 06/11/2024
  • Website: biocatch.com
  • Company Address: 132 Derech Menachem Begin, Tel Aviv, IL, 67443

About BioCatch

BioCatch is a digital identity company that delivers behavioral biometrics, analyzing human-device interactions to protect users and sensitive data. Founded in 2011 by experts in neuroscience research, machine learning, and cybersecurity, BioCatch is used by banks and other enterprises to reduce online fraud and protect against cyber threats without compromising the user experience.

Job Description

BioCatch is the leader in Behavioral Biometrics, a technology that leverages machine learning to analyze an online user’s physical and cognitive digital behavior to protect individuals online. BioCatch’s mission is to unlock the power of behavior and deliver actionable insights to create a digital world where identity, trust, and ease seamlessly coexist. Today, BioCatch counts over 25 of the top 100 global banks as customers, who use BioCatch solutions to fight fraud, drive digital transformation, and accelerate business growth. BioCatch’s Client Innovation Board, an industry-led initiative including American Express, Barclays, Citi Ventures, and National Australia Bank, helps BioCatch identify creative and cutting-edge ways to leverage the unique attributes of behavior for fraud prevention. With more than a decade of analyzing data, over 80 registered patents, and unparalleled experience, BioCatch continues to innovate to solve tomorrow’s problems. For more information, please visit www.biocatch.com.


Main responsibilities: 

  • Set the direction of our data architecture and determine the right tools for each job: we collaborate on the requirements, and then you call the shots on what gets built.
  • Manage end-to-end execution of high-performance, large-scale data-driven projects, including design, implementation, and ongoing maintenance.
  • Monitor and optimize our teams’ cloud costs.
  • Design and construct monitoring tools to ensure the efficiency and reliability of data processes.


Requirements

  • 3+ years of experience in data engineering and big data – Must
  • Experience working with different SQL databases (Snowflake, Impala, PostgreSQL) – Must
  • Experience with programming languages (Python, other OOP languages) – Must
  • Experience with data modeling, ETL development, and data warehousing – Must
  • Experience building both batch and streaming data pipelines using PySpark – Big Advantage
  • Experience with messaging systems (Kafka, RabbitMQ, etc.) – Big Advantage
  • Experience working with any of the major cloud providers (Azure, Google Cloud, AWS) – Big Advantage
  • Experience creating and maintaining microservices data processes – Big Advantage
  • Basic knowledge of DevOps concepts (Docker, Kubernetes, Terraform) – Advantage
  • Experience with design pattern concepts – Advantage


Our stack: Azure, GCP, Databricks, Snowflake, Airflow, RDBMS, Spark, Kafka, Kubernetes, microservices, Python, SQL


Your stack: Proven back-end software engineering skills, the ability to think for yourself and challenge common assumptions, a commitment to high-quality execution, and a collaborative mindset.
