Improving how you search for music in Roon

I’d like to fill you all in on what’s been happening with search in Roon, what we have done in our latest release to make things better, and a little bit of what we have planned for the future.

Before I get started, one piece of background: we feel strongly that it’s best for the product to present a single, clear answer to a search query that blends both library and streaming content without putting them into separate “silos” as some other products do. This allows Roon to give a single set of answers to a query without forcing the user to pick apart, disambiguate, or dig deeper based on where the results are coming from. You’re not searching your library OR searching TIDAL, you’re just searching and getting results. It’s a simpler and better experience.

Thus, Roon has to independently search your library, held within the Roon Core, and streaming content, held within cloud services, and then merge the results together. This merging problem is a tricky one. You don’t often see interleaved results from multiple search engines, and there’s a good reason for that. The fact that we hadn’t completely cracked it had left Roon’s search experience in an unacceptable state.
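
To make the shape of the problem concrete, here’s a minimal sketch of that flow in Python. Everything here (`SearchResult`, `search_library`, `search_cloud`) is a hypothetical stand-in for illustration, not Roon’s actual code:

```python
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass

@dataclass
class SearchResult:
    title: str
    source: str   # "library" or "cloud"
    score: float  # engine-specific relevance score

def search_library(query: str) -> list[SearchResult]:
    # Stub: in reality this queries the index held inside the Roon Core.
    return []

def search_cloud(query: str) -> list[SearchResult]:
    # Stub: in reality this calls the streaming service's cloud search.
    return []

def federated_search(query: str) -> list[SearchResult]:
    # Query both sources concurrently, then blend into one ranked list.
    with ThreadPoolExecutor(max_workers=2) as pool:
        lib = pool.submit(search_library, query)
        cloud = pool.submit(search_cloud, query)
    # The hard part: the two engines score on different scales, so naively
    # sorting the union by raw score produces bad rankings.
    return sorted(lib.result() + cloud.result(),
                  key=lambda r: r.score, reverse=True)
```

That naive sort at the end is exactly where things can go wrong, as described below.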

A bit over a year ago, we decided that not only did this have to be fixed, but that search was a “forever problem” – not something that we could fix once and forget about. It requires continual care and feeding, and dedicated staff who think about search and only search. So we hired a search specialist, and we got to work tackling search with fresh eyes.

We released the auto-complete feature earlier this year, and in building that, gained a detailed understanding of exactly how and why our existing search engine was getting things wrong. That allowed us to kick off the “big project”: an overhaul of Roon’s search infrastructure end to end.

We began by analyzing hundreds of complaints and reports from the Roon community to understand what the problems were. We used your feedback to build test cases and validate our work. Separately, we analyzed anonymized data from our servers to understand what real-world search queries looked like.

As we dug deeper, we figured out that one of the major problems is that the search engine used for the Roon library just worked too differently from the search engine used for streaming content. The two search engines computed and scored results according to different principles, each established during a different era of Roon’s product development.

The library algorithm generally returned results that were too noisy and numerous, and in a significant number of cases, noise from the library drowned out more accurate streaming results. This was especially painful for people with large libraries.

Another problem that we found is that queries for classical music just look different from queries for other content, and Roon’s search engine was behaving particularly badly with some of these queries.

We decided that in general, our approach to cloud-based search was sane (if in need of some tweaks), and the approach to library search was, quite simply, wrong.

Thus, the library search engine required a complete, ground-up rewrite. Since the most mature search technology is cloud-based and Roon’s library is not, we ended up building an embedded search engine that implements the same ideas as cloud-based engines like Elasticsearch, but in a way that lets it run inside the Roon Core.
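
For a sense of what “the same ideas” means, here’s a toy version of the core data structure: an inverted index with TF-IDF-style scoring, the idea at the heart of Lucene-based engines like Elasticsearch, boiled down to something that fits inside a local process. This is a teaching sketch, not our implementation:

```python
import math
from collections import Counter, defaultdict

class TinyIndex:
    def __init__(self):
        self.postings = defaultdict(dict)  # term -> {doc_id: term frequency}
        self.doc_count = 0

    def add(self, doc_id: str, text: str):
        self.doc_count += 1
        for term, freq in Counter(text.lower().split()).items():
            self.postings[term][doc_id] = freq

    def search(self, query: str, k: int = 10):
        scores = Counter()
        for term in query.lower().split():
            docs = self.postings.get(term, {})
            if not docs:
                continue
            # Rarer terms contribute more (inverse document frequency).
            idf = math.log(self.doc_count / len(docs))
            for doc_id, tf in docs.items():
                scores[doc_id] += tf * idf
        return scores.most_common(k)

index = TinyIndex()
index.add("a1", "Beethoven Symphony No 9 Choral")
index.add("a2", "Miles Davis Kind of Blue")
index.add("a3", "Beethoven Piano Sonatas")
print(index.search("beethoven symphony"))  # a1 ranks above a3
```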

We also built a model that can distinguish classical and non-classical search queries prior to performing a search, so that we can tweak various parts of the search process to produce more appropriate results for classical or non-classical queries. Alongside this, we updated the user interface to give more priority to composers and compositions when a classical search is detected, which should save classical users a bit of scrolling.
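
The details of the model aren’t important here, but to make the idea concrete: a query classifier can be as simple as a naive-Bayes-style model over query tokens, trained on labeled example queries. The sketch below (with made-up training data) illustrates the general technique, not the model we shipped:

```python
import math
from collections import Counter

LABELED_QUERIES = [
    ("beethoven symphony 9", "classical"),
    ("bach cello suites casals", "classical"),
    ("mahler das lied von der erde", "classical"),
    ("taylor swift anti hero", "other"),
    ("kind of blue miles davis", "other"),
    ("radiohead ok computer", "other"),
]

def train(examples):
    token_counts = {"classical": Counter(), "other": Counter()}
    label_counts = Counter()
    for text, label in examples:
        label_counts[label] += 1
        token_counts[label].update(text.lower().split())
    return token_counts, label_counts

def classify(query, token_counts, label_counts):
    vocab = set(token_counts["classical"]) | set(token_counts["other"])
    best_label, best_score = None, -math.inf
    for label, counts in token_counts.items():
        total = sum(counts.values())
        # Log prior plus per-token log likelihoods, with add-one smoothing.
        score = math.log(label_counts[label] / sum(label_counts.values()))
        for tok in query.lower().split():
            score += math.log((counts[tok] + 1) / (total + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

model = train(LABELED_QUERIES)
print(classify("beethoven piano concerto", *model))  # -> "classical"
```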

Then, we had to come up with a new approach to merging library and cloud results. This required a fair amount of consideration, but we ended up landing on a really neat (and as far as we know, novel) approach for producing consistent scores across search results that come from different search engines, and we’ve implemented it in Roon.
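
I’m not going to spell out the details of our approach in this post, but to show the kind of problem it solves, here’s the textbook baseline: min-max normalize each engine’s scores onto [0, 1] before ranking the union. To be clear, this baseline is not what Roon does; it’s the obvious first attempt that more careful calibration improves on (reusing the hypothetical `SearchResult` type from the sketch above):

```python
def normalize(results):
    """Map one engine's raw scores onto a shared [0, 1] scale."""
    if not results:
        return []
    scores = [r.score for r in results]
    lo, hi = min(scores), max(scores)
    span = (hi - lo) or 1.0  # guard against all-equal scores
    return [(r, (r.score - lo) / span) for r in results]

def merge(library_results, cloud_results):
    """Blend two result lists whose raw scores are not comparable."""
    pooled = normalize(library_results) + normalize(cloud_results)
    pooled.sort(key=lambda pair: pair[1], reverse=True)
    return [result for result, _ in pooled]
```

The baseline has a well-known flaw: every engine’s top hit lands at exactly 1.0 no matter how good it actually is, which gives a taste of why getting cross-engine scoring right took real consideration.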

Finally, we spent months testing this stuff amongst ourselves, then with progressively larger groups of users, until it was clear that people were feeling the improvement. During this process, we iterated on all parts of the system.

I’m confident that the major structural issues with Roon’s search engine have been addressed. I’m also sure that for the foreseeable future, people will sometimes run into searches that they don’t feel are working right. Search is a “forever problem”, right?

Now that the bulk of the work is done, we will be able to iterate with the Roon community more rapidly as feedback comes in, and we intend to continue improving search indefinitely.