Big Data Software Engineer

DoubleVerify

Software Engineering
Tel Aviv-Yafo, Israel
Posted on Feb 10, 2026

Who we are

DoubleVerify is an Israeli-founded big data analytics company (NYSE: DV). We track and analyze tens of billions of ads every day for the world's biggest brands.
We operate at massive scale: we handle over 100B events per day and over 1M requests per second at peak, process events in real time at millisecond latencies, and analyze over 2.5M video-years every day. We verify that ads are fraud-free, appear next to appropriate content, and reach people in the right geography, and we measure viewability and user engagement throughout the ad's lifecycle.

We are global, with HQ in NYC and R&D centers in Tel Aviv, New York, Finland, Berlin, Belgium, and San Diego. We work in a fast-paced environment with no shortage of challenges to solve. If you like working at huge scale and want to help us build products with a major impact on the industry and the web, then your place is with us.

What you'll do

You will join the Traffic Team, a core engineering team operating at the heart of the company's measurement system.

  • Build and maintain high-throughput streaming systems processing 100B+ events per day.
  • Tackle performance and optimization challenges that make interview questions actually relevant.
  • Design and implement real-time data processing pipelines using Kafka, Databricks/Spark, and distributed computing (see the sketch below for a flavor of this work).
  • Lead projects end-to-end: design, development, integration, deployment, and production support.
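
For a flavor of the kind of pipeline the team builds, here is a minimal Kafka Streams sketch in Scala. The topic names, event format, and validity check are illustrative assumptions for this posting, not our actual pipeline:

    import java.util.Properties
    import org.apache.kafka.streams.scala.ImplicitConversions._
    import org.apache.kafka.streams.scala.serialization.Serdes._
    import org.apache.kafka.streams.scala.StreamsBuilder
    import org.apache.kafka.streams.{KafkaStreams, StreamsConfig}

    // Minimal sketch: read raw ad events, drop invalid traffic, emit verified events.
    // Topic names and the validity check are hypothetical, for illustration only.
    object AdEventFilterSketch extends App {
      val builder = new StreamsBuilder()

      builder
        .stream[String, String]("ad-events")                         // raw impressions, keyed by ad id
        .filter((_, payload) => payload.contains("\"valid\":true"))  // crude validity check on the JSON payload
        .to("verified-ad-events")                                    // verified stream for downstream consumers

      val props = new Properties()
      props.put(StreamsConfig.APPLICATION_ID_CONFIG, "traffic-filter-sketch")
      props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")

      val streams = new KafkaStreams(builder.build(), props)
      streams.start()
      sys.addShutdownHook(streams.close())
    }

The real systems run at far higher throughput and with much richer validation, but the shape of the work (stateless and stateful stream transformations over Kafka topics) is the same.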

Who you are

  • 5+ years of software development experience with JVM-based languages (Scala, Java, Kotlin), including strong functional programming skills.
  • Strong grasp of Computer Science fundamentals: functional programming paradigms, object-oriented design, data structures, and concurrent/distributed systems.
  • Proven experience with high-scale, real-time streaming systems and big data processing.
  • Experience with and deep understanding of a wide array of technologies, including:
      • Stream processing: Kafka, Kafka Streams, or similar frameworks (Flink, Spark Streaming, Pulsar).
      • Concurrency frameworks: Akka, Pekko, or equivalent actor systems/reactive programming.
      • Data platforms: Databricks, Spark, Delta Lake, or similar lakehouse technologies.
      • Microservices & containerization: Docker, Kubernetes.
      • Modern databases: analytical databases (ClickHouse, Snowflake, BigQuery), NoSQL (Cassandra, MongoDB), and columnar stores.
      • Cloud infrastructure: GCP or AWS.
  • Hands-on experience developing with AI tools (Cursor, Claude Code, etc.).
  • Strong DevOps mindset: CI/CD pipelines (GitLab preferred), infrastructure as code, monitoring/alerting.
  • BSc in Computer Science or equivalent experience.
  • Excellent communication skills and ability to collaborate across teams.

Nice to have

  • Previous experience in ad-tech.
  • Experience with schema evolution and data serialization (Avro, Protobuf, Parquet).

#Hybrid#