Rebuilding search from PostgreSQL foundations to a fast, observable OpenSearch platform.
- Website
- dovetailapp.com
- Industry
- Customer Intelligence / SaaS Research Tools
- Location
- Sydney, Australia
- Focus
- OpenSearch performance, relevance, observability, and AI retrieval
Dovetail helps teams organise, search, and make sense of customer research, interviews, notes, and qualitative data at scale. As the platform grew, search became increasingly central to the product experience.
Search had originally been served by PostgreSQL, which worked well in the product's earlier stages. But as usage scaled and search became more important to both users and AI-assisted retrieval workflows, Dovetail needed a more capable and observable search platform.
The OpenSearch layer had grown alongside PostgreSQL, but performance issues had become significant and held back a full migration. At its worst, p99 search latency exceeded 30 seconds, with regular timeouts. Search results also needed improvement: recently updated content was not always surfaced strongly enough, favourites were not consistently promoted, and the team had limited visibility into slow queries or operational failures.
Search Pioneer worked with Dovetail to improve OpenSearch across performance, observability, relevance, infrastructure, and hybrid search, transforming it into a fast, reliable, and observable retrieval layer for Dovetail's chat interface and RAG workflows.
What We Did
Search Pioneer began by making the system measurable. A benchmarking framework was introduced to establish a performance baseline, supported by enriched slow query logs, shard error logging, Datadog index visibility, and tracing in the search client to understand where the bottlenecks were.
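As one illustration of the slow-query side of this work, OpenSearch supports index-level slow log thresholds that flag queries exceeding configurable latencies. The sketch below shows the general shape of those settings; the threshold values and index name are illustrative, not the configuration used in this engagement.

```python
# Index-level search slow log thresholds, as supported by OpenSearch.
# Values here are placeholders for illustration.
slowlog_settings = {
    "index.search.slowlog.threshold.query.warn": "2s",    # log query phase > 2s at WARN
    "index.search.slowlog.threshold.query.info": "1s",    # log query phase > 1s at INFO
    "index.search.slowlog.threshold.fetch.warn": "500ms", # log fetch phase > 500ms at WARN
}

# These would be applied with a PUT to /<index>/_settings, e.g. via the
# opensearch-py client:
#   client.indices.put_settings(index="notes", body=slowlog_settings)
```

Entries emitted by the slow log can then be shipped to a platform like Datadog and enriched with application context to pinpoint which query shapes are slow.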
We built a comprehensive Datadog observability layer, including dashboards, monitors, and structured logs, giving the team the visibility to investigate slow queries, track search health, and diagnose OpenSearch issues with confidence. Runbooks and contextual guidance embedded in monitor alerts helped turn alerts into action, delivering just-in-time knowledge and practical remediation steps when issues emerged.
With better observability in place, the work moved into relevance and ranking. The search layer was refactored, a new highlighter was introduced, boosting was made more structured across content types, and relevance signals were improved.
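Structured boosting of this kind can be expressed with OpenSearch's `function_score` query: a decay function favours recently updated content, while a filtered weight promotes favourites. The field names below (`title`, `body`, `updated_at`, `is_favourite`) are hypothetical placeholders, and the weights are for illustration only, not the production ranking.

```python
def build_ranked_query(text: str) -> dict:
    """Keyword query with structured boosts: a Gaussian recency decay on the
    update timestamp, plus a fixed weight for favourited documents.
    All field names and weights are illustrative."""
    return {
        "query": {
            "function_score": {
                "query": {
                    "multi_match": {
                        "query": text,
                        # Matches in the title count more than matches in the body.
                        "fields": ["title^3", "body"],
                    }
                },
                "functions": [
                    # Surface recently updated content: full score at "now",
                    # halving roughly every 30 days.
                    {"gauss": {"updated_at": {"origin": "now", "scale": "30d", "decay": 0.5}}},
                    # Consistently promote favourites with a multiplicative boost.
                    {"filter": {"term": {"is_favourite": True}}, "weight": 1.5},
                ],
                "score_mode": "multiply",
                "boost_mode": "multiply",
            }
        }
    }
```

Keeping each signal as a separate function, rather than folding everything into one boost expression, makes the ranking easier to reason about and tune per content type.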
Infrastructure was also improved. The OpenSearch clusters were migrated to NVMe instances, legacy sharding configuration was cleaned up, and later migrations enabled stronger support for sorting and aggregation.
Search Pioneer also prototyped hybrid search combining BM25 keyword scoring with semantic/neural scoring, laying the foundation for better AI-assisted retrieval.
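A hybrid setup along these lines can be sketched with OpenSearch's `hybrid` query, which runs a BM25 clause and a `neural` (embedding) clause side by side, with a search pipeline normalizing and combining the two score distributions. The field names, weights, and `model_id` below are placeholders, not the prototype's actual configuration.

```python
# Search pipeline body: min-max normalize BM25 and neural scores,
# then combine them with a weighted arithmetic mean.
hybrid_pipeline = {
    "phase_results_processors": [
        {
            "normalization-processor": {
                "normalization": {"technique": "min_max"},
                "combination": {
                    "technique": "arithmetic_mean",
                    # Illustrative weights: keyword score vs. semantic score.
                    "parameters": {"weights": [0.4, 0.6]},
                },
            }
        }
    ]
}

def build_hybrid_query(text: str, model_id: str, k: int = 50) -> dict:
    """Hybrid query pairing a BM25 match clause with a neural k-NN clause.
    Field names and model_id are hypothetical."""
    return {
        "query": {
            "hybrid": {
                "queries": [
                    # Lexical leg: classic BM25 keyword scoring.
                    {"match": {"body": {"query": text}}},
                    # Semantic leg: embed the query text with the given model
                    # and retrieve the k nearest vectors.
                    {"neural": {"body_embedding": {
                        "query_text": text,
                        "model_id": model_id,
                        "k": k,
                    }}},
                ]
            }
        }
    }
```

Score normalization matters here because raw BM25 and vector-similarity scores live on different scales; without it, one leg tends to dominate the blended ranking.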
Results
Dovetail's p99 search latency dropped from more than 30 seconds to approximately 500ms - a 98% reduction - eliminating the regular timeouts that had affected both users and downstream AI retrieval workflows. Average latencies are now ~120ms.
Search is now faster, more relevant, and significantly easier to debug. The team has stronger visibility into query performance, shard failures, slow searches, and search origins, while users benefit from better-ranked results, searchable author names, stronger recency signals, and more useful QuickSearch behaviour.
The engagement transformed search from an organically grown OpenSearch layer, originally evolved from PostgreSQL-backed search, into a fast, observable, and extensible retrieval platform capable of supporting Dovetail's next generation of AI chat, RAG, and agentic workflows.
"Search Pioneer didn't just make search faster - which is why we hired them. They made it measurable, reliable, and ready for the next generation of AI-powered product experiences at Dovetail."