Helping IDAGIO add 1100% more opera to its catalogue every week.

Role Product Designer
Company IDAGIO
Team Music Ingestion Unit

Problem Statement

IDAGIO is the world’s best classical music streaming service, and relies on an exhaustive, metadata-complete catalogue of opera to maintain its core user base. Due to complexities with the manual input of this metadata, the average opera recording was taking a minimum of 5 hours to add to the catalogue. How could internal tooling be improved in order to speed this process up?

IDAGIO's Discover screen

Roles and responsibilities

I would be working directly with a team consisting of a Product Manager, Full-Stack Engineer, Backend Engineer, Label Relations Manager and Content Director to build a data-rich internal application to solve the problem. We were aiming to try and increase the number of operas we added to our database each week by around 50%.

I had a total of only ~20 hours on the project – so from the beginning, the engineer and I decided to keep my UI work to a minimum: he would rely largely on our internal storyboarding framework, I would focus on the UX, and we'd pair during the latter phase of implementation. Not only was this setup really fun 🥳, but it helped strengthen and speed up our decision-making throughout.

With the exception of the engineer, no one had worked with a UX Designer before – which meant stepping into something of an educational role and guiding several of the team meetings.


The users of the tool would be a team of internal stakeholders called "COPs" ("Content Operations", to the uninitiated). COPs consists of around 20 composers, performers, and musicologists who are responsible for taking the tangled mess of metadata which IDAGIO receives from record labels, fact-checking and adding missing information, then inputting it to the database.

IDAGIO's COPs team hard at work



User interviews

Because the COPs stakeholders were already quite engaged with the project, I decided to spend a large part of the early research stages listening and observing. I did this through several rounds of user interviews and behavioural observation, both within the COPs team and with the wider project unit, with the goal of better understanding where the blockers were in the existing process.

These discussions revealed the real problem with adding operas to the database:

  • Data comes to IDAGIO from the record labels as an .xml file, which we then need to update in two separate databases and audio-edit. This then gets sanitised into a recording.
  • Most conventional pop and classical recordings (i.e. performances of a work) contain a uniform number of tracks – for instance, a recording of Beethoven’s Symphony No.5 will almost always consist of 4 tracks (one track for each movement of the work). Our database needs to match each recording to a pre-existing work, and because most recordings of Beethoven 5 share the structure of that pre-existing work, they map neatly into the database.
  • It’s not so easy with opera. Between recitative (the speech-like sung narrative between arias), revisions made by composers, and unusual staging requirements, it’s near impossible to nail down a consistent work structure. Some recordings of Mozart’s seminal 1791 singspiel Die Zauberflöte contain 47 tracks, while others are simply a two-and-a-half-hour single-track recording of the entire production. Neither can be mapped onto the database, since their structures do not line up consistently with the database’s conception of Die Zauberflöte.
  • This meant our ingestion team couldn’t use the existing MVP version of the ingestion tool to ingest opera albums, and instead had to manually edit and upload a .csv file to the database. This was arduous, error-prone, and no fun at all 🤷.
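The mapping problem above can be sketched in a few lines of Python. This is purely illustrative – the work structures and function names are hypothetical, not IDAGIO's actual schema – but it shows why a fixed-structure symphony maps cleanly while an opera recording falls through to manual CSV editing.

```python
# Hypothetical sketch of the structure-matching problem (assumed shapes,
# not IDAGIO's production schema).

WORK_STRUCTURE = {
    # The database's canonical conception of a work: an ordered list of movements.
    "Beethoven: Symphony No. 5": [
        "I. Allegro con brio",
        "II. Andante con moto",
        "III. Scherzo. Allegro",
        "IV. Allegro",
    ],
}

def maps_cleanly(work_title: str, album_tracks: list[str]) -> bool:
    """A recording maps onto a work only when its track structure
    lines up one-to-one with the work's canonical movement structure."""
    movements = WORK_STRUCTURE.get(work_title)
    return movements is not None and len(album_tracks) == len(movements)

# A conventional 4-track Beethoven 5 lines up with the canonical work...
print(maps_cleanly("Beethoven: Symphony No. 5", ["t1", "t2", "t3", "t4"]))  # True

# ...but a 47-track (or single-track) opera recording has no canonical
# structure to line up against, so it can't be ingested automatically.
print(maps_cleanly("Mozart: Die Zauberflöte", [f"t{i}" for i in range(47)]))  # False
```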

As a result of the research, we were able to better understand our goal: Find a way to allow our users to create structure in these opera albums so that they no longer need to manually edit a .csv file each time.

Information architecture

In parallel with these interviews, I sat down with our Full-Stack and Backend Engineers to understand the technical requirements. Following a lengthy but productive whiteboarding session, we managed to map out how the database could expose and ingest data.

We came to the conclusion that the only constant in an opera is the division of acts – the highest-level work divider. This is what we’d use to create structure in the work/album.

With the division of workparts as our primary focus, we managed to come to an agreement on the flow that would be in scope for an MVP release. This included things like autocomplete functions on work-tagging (which would require constant refreshing of the catalogue database), as well as tagging specific performers and roles within the recording.

A map of the user flow, including out of scope areas



I tested an early prototype of this workpart division flow with two members of the COPs team using three sample data sets. This was the first time either had participated in any user testing – so a large component of the test was empathising with their understanding of the task at hand.

In this iteration, the album tracks became draggable blocks, which could then be moved across groups to form workparts. It didn't work: While both were very happy with the overall flow of the prototype, neither could understand the way I'd mapped out the UI for the process of workpart division.

After talking with them a little more, I realised my mistake: track lists always appear in the same fixed order, so tracks should never be movable.

Following feedback from these sessions, I revised the prototype to its final form: a draggable workpart divider which would be inserted into the tracklisting. By revising the experience in this way, there would be fewer moving parts for the user.
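Conceptually, the revised interaction reduces to a very simple model: tracks stay in their fixed order, and each draggable divider is just a cut point that partitions the list into workparts (e.g. acts). A minimal Python sketch of that idea, with assumed data shapes rather than the production implementation:

```python
# Illustrative model of the divider interaction (assumed data shapes).
# Tracks never move; the user only places dividers between them.

def split_into_workparts(tracks: list[str], dividers: list[int]) -> list[list[str]]:
    """Partition an ordered track list into workparts.
    Each divider is an index marking a cut *before* that track."""
    bounds = [0] + sorted(dividers) + [len(tracks)]
    return [tracks[a:b] for a, b in zip(bounds, bounds[1:])]

tracks = ["Overture", "Act I Scene 1", "Act I Scene 2",
          "Act II Scene 1", "Act II Finale"]

# Dragging dividers to sit before tracks 1 and 3 yields three workparts:
print(split_into_workparts(tracks, [1, 3]))
# [['Overture'], ['Act I Scene 1', 'Act I Scene 2'],
#  ['Act II Scene 1', 'Act II Finale']]
```

Because the tracks themselves are immutable, the only state the user can change is the divider positions – which is exactly why this version had fewer moving parts than the draggable-blocks prototype.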

Once internal testing was complete, I created a set of high-fidelity wireframes to hand over to our engineer. When the first version had been built, we spent the last few hours of my allotted time pairing to fine-tune the final release candidate.

Designs for the second (more successful) prototype



We successfully shipped the feature in December 2018, and have already shipped the next iteration. Overall we dramatically exceeded our goals:


Hours saved in the import of 153 operas


Number of opera recordings ingested per week (before project)


Number of opera recordings ingested per week (after project)


Weekly percentage increase in operas ingested

And most importantly...the COPs team loved it 💁


Trust your process

A significant part of this project was bringing together a team largely unfamiliar with UX practice. This really forced me to create structure and lead with a design-thinking approach.

Test, test, test

Having all our users in-house really amplified the power of testing ideas, no matter how small, before sending them through the development pipeline.


This wasn’t the first project where I’d paired closely with a developer, but it was the first that really felt like our processes married seamlessly. Having a significant overlap between disciplines saved us time, resources, and likely some costly mistakes.