Data Platform Engineer

Brightai

Software Engineering
Palo Alto, CA, USA
Posted on Oct 8, 2025

We are a high-growth company transforming how businesses operate by integrating AI, IoT, and cloud-native services into scalable, real-time platforms. As a Data Platform Engineer, you’ll play a critical role in building and maintaining the data infrastructure that powers our products, services, and insights.

You’ll join a multidisciplinary team focused on ingesting, processing, and managing massive streams of sensor and operational data across a wide array of devices—from drones and robots to industrial systems and smart environments.

Responsibilities

  • Design, build, and maintain scalable, reliable, and high-throughput data ingestion pipelines for structured and semi-structured data.
  • Implement robust and secure data lake and SQL-based storage architectures optimized for performance and cost.
  • Develop and maintain internal tools and frameworks for data ingestion using Python, Golang, and SQL.
  • Collaborate cross-functionally with Cloud, Edge, Product, and AI teams to define data contracts, schemas, and retention policies.
  • Use AWS cloud infrastructure (S3, Lambda, Glue, Kinesis, Athena, and RDS) together with Argo Workflows for orchestration to support end-to-end data workflows.
  • Employ Infrastructure-as-Code (IaC) practices using Terraform to manage data platform infrastructure.
  • Monitor data pipelines for quality, latency, and failures using tools such as CloudWatch, Sumo Logic, or Datadog.
  • Continuously optimize storage, partitioning, and query performance across large-scale datasets (see the sketch after this list).
  • Participate in architecture reviews and ensure the platform adheres to security, compliance, and best practice standards.
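
For illustration, here is a minimal sketch of one building block behind these responsibilities: writing a batch of sensor readings to an S3 data lake as date-partitioned Parquet with pyarrow. The bucket name, dataset path, and the write_sensor_batch function are hypothetical examples chosen for the sketch, not a description of our actual stack.

    import datetime as dt

    import pyarrow as pa
    import pyarrow.parquet as pq

    def write_sensor_batch(records: list[dict], bucket: str = "example-data-lake") -> None:
        """Write one batch of sensor readings as Hive-partitioned Parquet on S3."""
        day = dt.date.today().isoformat()
        table = pa.Table.from_pylist(records)
        # Partition column: Hive-style dt=YYYY-MM-DD paths let Athena prune
        # partitions, so scan cost tracks the dates actually queried.
        table = table.append_column("dt", pa.array([day] * table.num_rows))
        pq.write_to_dataset(
            table,
            root_path=f"s3://{bucket}/sensor_readings",  # hypothetical dataset path
            partition_cols=["dt"],
        )

Hive-style dt=YYYY-MM-DD paths are what allow Athena to prune partitions, which is one concrete way the storage and query optimization work above plays out.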

Skills and Qualifications

  • 5+ years of professional experience in software engineering or data engineering.
  • Strong programming skills in Python and Golang.
  • Deep understanding of SQL and modern data lake architectures (e.g., using Parquet, Iceberg, or Delta Lake).
  • Hands-on experience with AWS services including but not limited to: S3, Lambda, Glue, Kinesis, Athena, and RDS.
  • Proficiency with Terraform for automating infrastructure deployment and management.
  • Experience working with real-time or batch data ingestion at scale, and designing fault-tolerant ETL/ELT pipelines.
  • Familiarity with event-driven architectures and messaging systems like Kafka or Kinesis (a consumer sketch follows this list).
  • Strong debugging and optimization skills across cloud, network, and application layers.
  • Excellent collaboration, communication, and documentation skills.
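
To make the fault-tolerance and event-driven points concrete, here is a minimal sketch of an idempotent AWS Lambda consumer for a Kinesis stream. The DynamoDB table name and the transform_and_load helper are hypothetical; the pattern is a conditional write keyed on the Kinesis sequence number, so Lambda’s automatic retries cannot double-process a record.

    import base64
    import json

    import boto3

    dynamodb = boto3.resource("dynamodb")
    processed = dynamodb.Table("example-processed-records")  # hypothetical dedupe table

    def handler(event, context):
        """Lambda entry point for a Kinesis event source mapping."""
        for record in event["Records"]:
            seq = record["kinesis"]["sequenceNumber"]
            payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
            try:
                # Conditional write fails if this sequence number was already
                # stored, which makes retried batches safe to replay.
                processed.put_item(
                    Item={"seq": seq},
                    ConditionExpression="attribute_not_exists(seq)",
                )
            except processed.meta.client.exceptions.ConditionalCheckFailedException:
                continue  # duplicate delivery, already handled
            transform_and_load(payload)

    def transform_and_load(payload: dict) -> None:
        # Placeholder for the real transform/load step (hypothetical).
        print(payload)

The sequence number is one common dedupe key; a payload-level or event-time key is the usual alternative when producers themselves may resend data.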

Bonus Points

  • Experience working with time-series or IoT sensor data at industrial scale.
  • Familiarity with analytics tools and data warehouse integration (e.g., Redshift, Snowflake).
  • Exposure to gRPC and protobuf-based data contracts.
  • Experience supporting ML pipelines and feature stores.
  • Working knowledge of Kubernetes concepts.
  • Prior startup experience and/or comfort working in fast-paced, iterative environments.