This summer I was very lucky to join JW Player as an engineering intern on the Data Team, and it has been a fantastic experience. Aside from sailing on the Hudson River, playing ping pong, and cycling on Governors Island, I learned about their state-of-the-art data pipeline, followed Agile practices, and worked with an amazing group of people. As part of the Discovery squad of the Data Team, I worked on evaluating recommendation systems and was responsible for developing an evaluation tool for our data-driven recommendations.
With data-driven recommendations, we want to show our users relevant videos to increase video plays and user engagement. The question is how to evaluate whether the recommended content is relevant, and which metrics to use as the measure. Generally, there are three methods for evaluating recommendation systems: offline experiments, online trials, and user studies. In this project, we take the user study approach, directly asking viewers whether a recommended video is relevant to them.
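To make the user-study approach concrete, here is a minimal sketch of how such yes/no relevance judgments might be collected and aggregated into a single metric. The names (`RelevanceJudgment`, `relevance_rate`) and the sample data are purely illustrative assumptions, not JW Player's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class RelevanceJudgment:
    """One viewer's answer about one recommended video (hypothetical schema)."""
    viewer_id: str
    video_id: str
    relevant: bool  # the viewer's yes/no answer

def relevance_rate(judgments):
    """Fraction of recommendations that viewers marked as relevant."""
    if not judgments:
        return 0.0
    return sum(j.relevant for j in judgments) / len(judgments)

# Illustrative feedback collected from a user study
feedback = [
    RelevanceJudgment("u1", "v10", True),
    RelevanceJudgment("u2", "v10", False),
    RelevanceJudgment("u1", "v11", True),
    RelevanceJudgment("u3", "v12", True),
]
print(relevance_rate(feedback))  # 0.75
```

In practice a metric like this would be sliced per video or per recommendation strategy, but the core idea is the same: turn direct viewer opinions into a number you can compare across systems.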
- September 20, 2016
- Dan Meng
Architecture of the evaluation tool