PulsePoint is a leading healthcare ad technology company that uses real-world data in real time to optimize campaign performance and revolutionize health decision-making. Leveraging proprietary data sets and methodology, PulsePoint targets healthcare professionals and patients with an unprecedented level of accuracy—delivering unparalleled results to the clients we serve.
Sr. Data Engineer
The PulsePoint Data Engineering team plays a key role in our technology company, which is experiencing exponential growth. Our data pipeline processes over 80 billion impressions a day (>20 TB of data, 200 TB uncompressed). This data is used to generate reports, update budgets, and drive our optimization engines. We do all this against tight SLAs, providing stats and reports as close to real time as possible.
The working hours are from 9 AM to 6 PM EST (UTC-5).
What you'll be doing:
- Design, build, and maintain reliable, scalable, enterprise-level distributed transactional data processing systems to scale the existing business and support new business initiatives.
- Optimize jobs to utilize Kafka, Hadoop, Presto, Spark, and Kubernetes resources in the most efficient way.
- Monitor and provide transparency into data quality across systems (accuracy, consistency, completeness, etc.).
- Increase accessibility and effectiveness of data (work with analysts, data scientists, and developers to build/deploy tools and data sets that fit their use cases).
- Provide mentorship and guidance to junior team members.
Team Responsibilities
- Ingest, validate, and process internal & third-party data.
- Create, maintain, and monitor data flows in Python, Spark, Hive, SQL, and Presto for consistency, accuracy, and lag time.
- Maintain and enhance framework for jobs (primarily aggregate jobs in Spark and Hive).
- Create consumers for data in Kafka using Spark Streaming for near-real-time aggregation (see the sketch after this list).
- Evaluate new tools.
- Manage backups, retention, high availability, and capacity planning.
- Review and approve database DDL, Hive framework jobs, and Spark Streaming jobs to make sure they meet our standards.
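For illustration, here is a minimal sketch of the kind of near-real-time Kafka consumer described above, written with PySpark Structured Streaming. The broker address, topic name, and event schema are hypothetical placeholders, not details of PulsePoint's actual pipeline.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, LongType

# Hypothetical impression event schema; the real pipeline's fields will differ.
schema = StructType([
    StructField("campaign_id", StringType()),
    StructField("impressions", LongType()),
    StructField("event_time", StringType()),
])

spark = SparkSession.builder.appName("impression-aggregator").getOrCreate()

# Read a Kafka topic as a stream (broker and topic names are placeholders).
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "impressions")
    .load()
    .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
    .select("e.*")
    .withColumn("event_time", F.to_timestamp("event_time"))
)

# Aggregate impressions per campaign in 1-minute windows, tolerating late data.
agg = (
    events.withWatermark("event_time", "5 minutes")
    .groupBy(F.window("event_time", "1 minute"), "campaign_id")
    .agg(F.sum("impressions").alias("impressions"))
)

# Emit near-real-time aggregates; a production job would write to Hive/Iceberg tables.
query = (
    agg.writeStream.outputMode("update")
    .format("console")
    .option("truncate", "false")
    .start()
)
query.awaitTermination()
```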
Some Technologies We Use:
- Python, Spark Streaming, Kafka, Hive, Presto/Trino, Airflow/Luigi, Apache Iceberg, BigQuery.
Requirements
- 6+ years of data engineering experience.
- Fluency in Python and SQL.
- Strong recent Spark experience.
- Experience working in on-prem environments.
- Hadoop and Hive experience.
- Experience in Scala/Java is a plus (polyglot programmers preferred!).
- Availability to work East Coast U.S. hours (9 AM-6 PM EST); the role is fully remote.
- Notice period of 2 months at most.
- Knowledge of and exposure to cloud migration (AWS/GCP/Azure) is a plus.
What we offer
- Remote work. Relocation to the EU/UK/US is negotiable (depending on your current location and legal status).
- Flexible schedule, with 9 AM-6 PM US EST working hours.
- Salary: 8-12k USD/month; higher figures may be negotiated.
- US holiday schedule.
- 21 days of vacation.