ACM ByteCast - Mounia Lalmas - Episode 18
Episode Date: July 27, 2021

In this episode of ACM ByteCast, Rashmi Mohan hosts Mounia Lalmas, Director of Research and Head of Tech Research in Personalization at Spotify, leading a team of researchers in content personalization and discovery. Prior to that, she was Director of Research at Yahoo London. She also holds an Honorary Professorship at University College London. Mounia’s work focuses on studying user engagement in areas such as native advertising, digital media, social media, and search, and now audio (music and talk). She is a frequent conference speaker, author, and organizer whose research has appeared at many ACM (and other) conferences, including CIKM, RecSys, SIGIR, SIGKDD, UMAP, WSDM, WWW, and more. Mounia relates her beginnings in computing as a young student growing up in Algeria, her love for mathematical abstraction, and passion for evaluation and user engagement. She also traces her interest in the field of information retrieval and highlights some of the challenges in building robust recommender systems for music lovers. Mounia and Rashmi also discuss the differences between academic and industrial research, the important role conferences and networking play in computing research, and what excites her most in the fields of personalization research and information retrieval.
Transcript
This is ACM ByteCast, a podcast series from the Association for Computing Machinery,
the world's largest educational and scientific computing society.
We talk to researchers, practitioners, and innovators
who are at the intersection of computing research and practice.
They share their experiences, the lessons they've learned,
and their own visions for the future of computing.
I am your host, Rashmi Mohan.
If you accidentally discovered your new favorite song while out on your morning run today,
you'll have our next guest to thank.
Mounia Lalmas is a Director of Research and Head of Tech Research at Spotify, where she
leads a team of researchers across the globe
solving problems in the domain of content personalization and discovery. She has a rich
career in studying user engagement and holds an honorary professorship at University College
London. She is an author and a regular committee chair on many top-tier conferences like SIGIR and Wisdom.
Munia, welcome to ACM ByteCast.
Thank you.
Mounia, I'd love to lead with the question that I ask all my guests.
If you could please introduce yourself and talk about what you currently do,
and also give us some background and insight into what drew you into this field of work.
Okay, thank you.
I'm Mounia, and I'm based at Spotify.
I'm a researcher. I've always been a researcher, first in academia, now in industry. My passion
has always been evaluation and user engagement. This is a very important problem in many online industries, especially around personalization.
Why this excites me is it's a hard problem.
Everything is becoming more and more online.
Personalization is getting bigger and bigger.
And doing it right remains a hard problem.
And not just that, but knowing whether we're doing it right or not. That's why user engagement is something I get up for every morning: how can I solve a few of these things? That's super exciting.
But I'm wondering if I could go back even further: what drew you into computing?
This was a long time ago. I was just good at math, loved math. I loved the abstraction level that you can get with math, for example with respect to algebra, logic, geometry, and so on. At that time, if you were good at math, you were steered more toward a career as a teacher. And I was not sure this is what I wanted to do. And there was this opportunity. Where I grew up, in Algeria, it was called informatique. So people that were quite good at math, many of us ended up in this area of informatique, of computing, and we are still there.
That's fantastic.
Did you always imagine that you would get into computing research? Or, as you sort of delved into your academic pursuits, did you find that there were problems that you really wanted to solve because you didn't have the answers to them?
Yeah, I didn't know much about the whole area of research at that time, when you start as an undergraduate student. I started to get into it in Algeria, where your study is about five years, and at the end of those five years you do the equivalent of a master's project, which is one year of research. At that time, the hot topic was actually expert systems, and this is where I got really interested. I started to think about what the challenges were and how to approach those challenges.
So not just applying things, but, okay, we don't know how to do this. This is a problem we need to solve, and how do we go about it? By being rigorous, by doing a lot of reading, understanding the latest state of the art, and so on. And this is what drew me more into research, as part of this final year project, which then brought me to do a master's. This is where I moved to Scotland, to do my Master of Applied Science at the University of Glasgow, and then I ended up doing the PhD. So it was not planned, but it kind of followed this excitement I got at the end of my, the equivalent of a bachelor's degree, for example, in the UK.
I like how you call out those traits of, you know, doggedness, the need to sort of be rigorous about the kind of problems you're trying to solve, the need to read more and investigate more. Such classic traits for anybody, whether they want to do a PhD or are just interested in researching the areas that they're working on. Going back to the point that you made earlier, which is around information
retrieval, and you were saying that the problems are just becoming bigger. How do you feel like
your research interests have evolved over time?
So when I started in information retrieval, actually, it was a coincidence. I did my master's at Glasgow University. I did it actually on formal methods, again the strong link with my interest in math and, in particular, logic. And it was like, okay, I'm interested in a PhD. And it happened that at the University of Glasgow there was a very strong information retrieval group. And at that time, one of the big topics was how to use logic in building a better information retrieval system. So that is how I ended up in information retrieval. And I stayed there.
At that time, we didn't have the big search engines. Information retrieval still had, for example, a big intersection with library and information studies and so on.
And then with search engines, things got really, really big.
You have suddenly everybody with an information need.
It can be very precise.
It can be very vague.
And so satisfying a user is not easy.
It's very, very, very wide and so on.
And so this is where I got more and more interested in the evaluation. So less about the algorithms. It's always easy to return results, and they may be to some extent good enough, but how do we even know if they're good enough? And so I moved more into the evaluation of the results.
And evaluation is a big part of information retrieval research.
Anybody that is into this area as a researcher
or applied researcher or engineer will always say,
okay, how do we know that what we're returning to the user is good?
And so on.
And this has been more my path and less on the algorithmic side.
There's been a lot of progress on the algorithmic side, but I've always been interested in the question:
What does it mean for the user?
Are we retrieving the right things at the right time?
And this journey is actually not finished.
While we're growing the algorithms, we still keep on asking the question: what does it mean to the users? Because search has grown now so much. Internet, e-commerce, music, shopping, and so on. And intents are very different. Satisfaction is very different. So there's a lot of research still waiting to be done.
It sounds so exciting, and it's such a valid point that you bring up, that the evaluation of the results is almost what will feed back into the algorithm to make it better. And one of the things that I noticed when I was looking up your previous work is that the need to measure was there even in a lot of the early work that you did. I know that you did a lot of work around measuring user engagement. And that was, again, pioneering at that time, just like you said, when search was sort of just coming up; the idea of providing content to users, consumption of content, was growing. And so understanding how a user stays engaged was really important. And you did some very, very incredible work around that.
Do you remember sort of the key innovations at that time that felt like a paradigm
shift in the field? That's a really interesting question. I could write a book about it.
So there are two parts. There's the evaluation, which is the offline evaluation, with very precise metrics. For example, area under the curve, precision and recall. Then there is the online evaluation, where actually systems are running and we get actual feedback from the user, with clicks, time spent, and so on.
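To make the offline side concrete, here is a minimal sketch of precision and recall at a cutoff k, using the standard textbook definitions (illustrative code, not Spotify's; the IDs and relevance judgments are invented):

```python
def precision_recall_at_k(ranked_ids, relevant_ids, k):
    """Standard offline metrics: precision@k and recall@k.

    ranked_ids:   result IDs in the order the system returned them
    relevant_ids: the set of IDs judged relevant for this query
    """
    top_k = ranked_ids[:k]
    hits = sum(1 for doc in top_k if doc in relevant_ids)
    precision = hits / k
    recall = hits / len(relevant_ids) if relevant_ids else 0.0
    return precision, recall

# Toy example: the system returns 5 items, 4 items are judged relevant.
p, r = precision_recall_at_k(["a", "b", "c", "d", "e"], {"a", "c", "f", "g"}, k=5)
print(f"precision@5={p:.2f} recall@5={r:.2f}")  # precision@5=0.40 recall@5=0.50
```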
Earlier in the field of information retrieval, there was a lot of progress with the offline evaluation, but also a sense that precision and recall do not capture everything related to engagement, to satisfaction, and so on. And not just me, but a number of people have been working around that, trying to build better metrics, especially with respect to offline evaluation. Then, going back to the online, it is different. Here you have this concrete feedback from the user, and there's always this notion that if everybody clicks, the results are good, and so on. And this worked for a while, because a click can be viewed as a proxy of user engagement; a lot of the metrics that people refer to are proxies of engagement. But then there's always this question: what does a click really mean, what is its value in reality? So this is where I started to be much more interested. So you have the offline, you have the online.
There is a connection between those two.
High precision does not mean that it's going to lead to long-term engagement.
So those are the questions that not just myself, but many people at companies and also at universities, started asking, to really understand the connection.
And to some extent, what was maybe the breakthrough was to ask those questions and to not just rely on the metrics that everybody uses and say, oh, well, I'm doing right, I've got the right precision and so on. So the breakthrough was just to say, well, actually, what does it mean to have high precision? What does it mean to have high recall? What does it mean to have high click-through rate? And so on. And by trying to ask these questions, myself and others have come up with maybe better metrics to really understand that satisfaction often means the user returning. And then, if you agree with that, you try to find metrics that correlate with this. So it's really the thinking.
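As a toy illustration of that last step, checking whether a candidate proxy metric actually correlates with users returning, here is a hedged sketch; the per-user numbers, the choice of session CTR as the proxy, and the return definition are all hypothetical:

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

# Hypothetical per-user data: a candidate proxy metric (session CTR)
# and the outcome we actually care about (did the user return next week?).
session_ctr    = [0.10, 0.35, 0.22, 0.50, 0.05, 0.41]
returned_later = [0,    1,    0,    1,    0,    1]  # 1 = user came back

print(f"correlation(CTR, return) = {pearson(session_ctr, returned_later):.2f}")
```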
Another thing that is also important about metrics: when people talk about metrics, both online and offline, they have to be very careful, because the task that is being looked into has an impact on what the metrics mean.
So for example, social media,
a good metric is people spending quite a lot of time on it.
While in search, it could be that people just click on a result and spend as little time as possible. If the search results are good, the user finds what they're looking for and just leaves.
So again, this idea of metrics of evaluation and so on
is very specific to the task at hand.
And the field of information retrieval, I would say, was one of the earlier ones to really look at asking those questions.
Precision and recall are still used, but we are also using a lot of other metrics.
There's this new area, which I'm not so much involved in myself, which is conversational search. What is the metric? What is success? And so on. And being brought up as an information retrieval person, we constantly keep asking. So as long as we ask, we will make progress.
Thank you for that, actually, because what you bring up is not
something that we may always think about if we're not in the field, which is that the metric of success is different depending on the type of
problem that you're trying to solve. And similarly, also looking at the impact that a metric has on
what you're trying to solve for. For example, I mean, one of the things that I was thinking about,
which I know in the past I've also looked at, is there are so many other dimensions that are changing with how a user is interacting with the service that you're
providing. Whether that is search or whether that is reading content, what you initially may think
of as the mode of engagement, maybe they're reading on a desktop versus now moving over,
not now, but maybe a few years ago, moving over to a device like a mobile phone,
suddenly the kind of metrics that you're looking at are very different. You may not get the same
signals that you do with a desktop computer in terms of clicks, etc. Do you remember some of
those shifts and what do you see as trends that are changing now in terms of the kind of metrics
that people have to evaluate? A big change that people try to incorporate when they measure success is user behavior.
So, for example, on the desktop, it's very well known that there is this notion of above the fold; people don't go below. So everything has to be optimized to sit quite at the top of the page, or within what is visible on the desktop screen.
This is not the case on a mobile phone.
Scrolling down is much, much more common.
So if you just look at scrolling down on desktop and compare this to the phone, if you just compare metrics without being aware that those are different user behaviors, then you just get completely the wrong results.
And this changed a little bit what is viewed as success, for example, when reading news on desktop and reading news on mobile.
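A minimal sketch of the pitfall she describes, with invented logs: the same click-through-rate metric has to be computed and read per platform, because pooling desktop and mobile mixes two different user behaviors:

```python
from collections import defaultdict

# Hypothetical impression logs: (platform, clicked). The point is not the
# numbers but the grouping: the same metric must be read per platform,
# because desktop and mobile users behave differently (fold vs. scrolling).
logs = [
    ("desktop", 1), ("desktop", 0), ("desktop", 0), ("desktop", 1),
    ("mobile", 0), ("mobile", 0), ("mobile", 1), ("mobile", 0), ("mobile", 0),
]

clicks, views = defaultdict(int), defaultdict(int)
for platform, clicked in logs:
    views[platform] += 1
    clicks[platform] += clicked

for platform in views:
    print(f"{platform}: CTR = {clicks[platform] / views[platform]:.2f}")

pooled = sum(clicks.values()) / len(logs)
print(f"pooled (misleading across platforms): CTR = {pooled:.2f}")
```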
For us, for example, just in the context of Spotify, we have the mobile experience, where we try to find a way to combine familiar content that the user wants to listen to, a particular artist, genre, and so on, with other content which is more about discovery. And we take into account that people will browse, to some extent, up and down.
And this allows you to investigate the value of metrics differently.
So again, what I'm trying to say is that user behavior also has a strong effect on the metrics. And even if it's the same product, sometimes you may keep the same metric, okay, let's just keep click-through rate for the moment, but it has to be interpreted differently knowing it is desktop and knowing it is mobile or another kind of device. Another thing, which is related to this, again some of the work we have at Spotify: we have playlists. Spotify has a lot of playlists.
People go into them.
And what is success of a playlist?
And we have playlists that are made for people to fall asleep.
And some are made for people maybe to do kind of party-type environments.
So for a sleeping playlist, success means the user starts listening to that playlist and does not do much. That's the whole point of a sleeping playlist. While for other playlists, with songs that you may want to use to build a party playlist, there's going to be a lot of interaction and a lot of skips and so on. And this is success. The user is trying to act like a DJ, trying to extract some tracks to create their own playlist.
So again, even the same product,
but two different parts of it,
two different playlists with different intent,
success is very different.
And we are looking into how this is actually helping to build better personalization at Spotify. Hopefully that gives you an idea that there's not one answer. It's the user model, the application, and also sometimes going down to the item itself.
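As a rough illustration of that idea, here is a sketch of an intent-conditioned notion of playlist success; the intents, signals, and thresholds are all invented for illustration, not Spotify's actual definitions:

```python
def playlist_success(intent, minutes_listened, skip_rate, tracks_saved):
    """Illustrative, intent-dependent notion of playlist success.

    The signals and thresholds are made up; the point is that the *same*
    behavior means different things for different intents.
    """
    if intent == "sleep":
        # Success: the user starts the playlist and then does very little.
        return minutes_listened > 20 and skip_rate < 0.05
    if intent == "party_prep":
        # Success: lots of interaction -- skipping around, saving tracks
        # to build their own playlist, acting like a DJ.
        return skip_rate > 0.3 or tracks_saved > 2
    # Default: fall back to a simple listening-time criterion.
    return minutes_listened > 10

print(playlist_success("sleep", minutes_listened=45, skip_rate=0.0, tracks_saved=0))        # True
print(playlist_success("party_prep", minutes_listened=12, skip_rate=0.5, tracks_saved=4))   # True
```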
Absolutely. I think that gives a lot of clarity, right?
And of the two things that you said there that I'd like to maybe talk a little bit more about: one, of course, is that you're talking about personalizing the experience for the user, understanding the user's intent and possibly the context in which they're using the product. And the other thing that you said is that in some cases you're trying to optimize for discovery, for surfacing maybe new content. How do you strike a balance between those two?
That is a
good question. That is what we're working on every day. So there are various ways. Again, I'm talking about work we're doing at Spotify, not just research; a researcher works with engineers, product managers, designers. And so you can organize, for example, the front page in a way that this part is for discovery, this part is for the kind of things you're looking to do now. So that is one way: you try to understand various needs. And because we have so many users, and other online services in shopping also have so many users, they get a good understanding of what the general big needs are. The need for something novel, the need for something to solve a problem right now. If you have enough of those identified intents and needs, it's easy to organize the front page, the home page, accordingly.
The other thing is: how does personalization work? Personalization is about trying to return to the user what is most relevant to them, and of course the definition of relevance is something that evolves. It could be: this is what you tend to listen to, so we're going to give you more of this. But in the context of entertainment, and particularly in the context of music, we know that this is just not good enough. We have to feed the user, of course, with what they want to listen to now, but this is always a journey. Users will just evolve in their listening. And we can decide, okay, there is a playlist; we may try to find a way to add tracks to it that are related to other tracks in that playlist, or related to what we think the user is likely to listen to or not. And then you get signals back. Did it work or didn't it work? And then from there, you can build better algorithms.
It's a mixture of what is called, in the bandit area, explore/exploit, although the definition there is a bit more technical. It's like trying to provide to the user what they need now, but also trying to show other things. And it could be something as simple as: maybe 10% of the time, I'm going to give the user something that is quite different. And from there you start to get some signals. Does it work? Does it not work? And then you can also build better algorithms to incorporate those signals back. It's an exploration in itself.
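The "maybe 10% of the time" idea she describes maps onto the simplest bandit-style strategy, epsilon-greedy. A minimal sketch, with made-up content pools (not Spotify's system):

```python
import random

def recommend(familiar_pool, discovery_pool, epsilon=0.1):
    """Epsilon-greedy flavor of explore/exploit: most of the time play it
    safe with familiar content, but epsilon of the time (here 10%) surface
    something different and learn from the resulting signals.
    The pools and the choice of epsilon are illustrative only.
    """
    if random.random() < epsilon:
        return ("explore", random.choice(discovery_pool))
    return ("exploit", random.choice(familiar_pool))

familiar = ["favorite_artist_track_1", "favorite_artist_track_2"]
discovery = ["new_genre_track", "up_and_coming_artist_track"]

for _ in range(5):
    mode, track = recommend(familiar, discovery)
    print(mode, track)
# Feedback on the explore picks (plays, skips) flows back into the model.
```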
In the context of music, I think we're lucky that many users are on a discovery journey. They change what they listen to. They may listen to something quite a lot now, but at some point they will want something else. So I think we have more room to push more discovery. It doesn't mean it is easy. It's not just, I'm going to give you something completely random. No, it's about finding the right level. We're also trying to understand how ready our users are to receive more diverse content. You know, some people are just pretty happy listening to a particular type of thing, and this is perfectly all right. Some people are more open, and we can look into this just by looking at the listening behavior: do they listen to very diverse sets of, for example, genres and so on, or is it very, very specific? It's this notion of getting to understand the users, understand the content, and acknowledging that we have to find a way to give the right content, but somewhat injecting into it something a little bit different.
We can do that explicitly.
For example, at Spotify, we have Discover Weekly,
which is this playlist every Monday, which is about new content.
Or we can do it while a user is listening: okay, maybe having some tracks or some songs that are a little bit different, that they haven't listened to, from a particular artist that is close enough to, for example, what the session is about. So we try in those various ways and we're learning from there. And again, at the end, those are signals that we take back, understand, and feed back into algorithms, allowing us to build better algorithms, and so on.
It's funny, because, of course, I have teenage daughters who use Spotify and they absolutely enjoy it, and you're often touted to be able to, quote unquote, read the mind of your user. And what you're talking about is incredible, because it seems like a lot of the work you're doing is really to understand what does my user need, and providing them that. And also, like you mentioned, music is probably an area which lends itself well to a little bit of exploration and discovery. But going back to this idea behind reading the user's mind, you know, it feels like Spotify is able to give the user what they're seeking at that time. How do you achieve that? I mean, is that a very conscious mission that you are sort of using as your North Star as you go down your paths of research or product ideas?
Well, of course, we're not going to tell you our secret.
But again, music is not new. People relate to music, groups of people relate to music, people listen to music together. We have editors that are experts in particular types of music. So we know that what people want on Monday morning is very different from what they want on Friday evening. So there's a lot of knowledge that comes from experts in music. So this is why we have playlists that are created exactly for that. We have playlists for, again, lullabies, sleeping, yoga, the gym, and so on. And working with experts, they just know what they're doing. Now, the second part is: imagine a playlist of happy music or sad music. What makes me happy may not be what another person feels is a happy song or a sad song, and so on. And this is where the personalization comes into play. So from this, oh, this is sad music, this is happy music, this is running music, how do we personalize to, for example, the artists, genres, or beats that are specific to the user? So it's combining a little bit of what we refer to as human in the loop with the algorithm, bringing the two together.
And by doing this, it looks like we're reading the mind of the user,
which is good.
Yeah, that's great.
I mean, I like the phrase that you use, human in the loop, to bring that extra level of sort of intelligence into these recommendations. So in terms of personalization research, Mounia, what are the common computing problems that you're trying to solve, or that maybe the industry is really looking at right now?
There are a number of them. For example, a lot of algorithms are not yet scalable.
So we may come up with the best algorithm, whether in research or in academia and so on, and then it has to be scalable. Users do not wait much for a search result. And so there's this notion that it may be a really, really good algorithm that works really, really well, but we need to make it scalable. And by scalable, it also has to be reactive. It's like, okay, suddenly we have a lot of signals that are a little bit different. How does the algorithm react to it? Then there is also the aspect of explanation and interpretability, transparency.
Those algorithms in general are really always optimizing for the next click, sometimes without much explanation. There's a whole area around bias, and there's a lot of research on this at Spotify, but also elsewhere. By just letting the algorithms run on their own, a lot of problems happen, and it's important to address them.
The other thing, which is a little bit related to this, and again I'm talking in a little bit more machine learning jargon, is that what those algorithms are trying to do is optimize for a metric, for an objective function, which tends to be something like click-through rate: just optimize for the next click. And it's very, very well known that this is good for the moment, but it's not good long term. It's a hard question: how do we know that what we're trying to optimize for now is good long term? Some of the research we have done shows that people who have more diverse listening tend to stay longer, for example, on Spotify. So it's important, again going back to this discovery. So it's not just about optimizing for the next click, it's optimizing for what we call long-term user satisfaction.
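One simple way to picture "not just the next click" is an objective that mixes the immediate signal with a proxy for long-term satisfaction. A hedged sketch; the diversity proxy and the weight alpha are invented for illustration, not Spotify's objective:

```python
def shaped_reward(clicked, diversity_gain, alpha=0.7):
    """Illustrative objective trading off the immediate signal (a click)
    against a proxy for long-term satisfaction (here, diversity of
    listening, which the episode notes correlates with retention).
    alpha is a made-up weight; finding the right trade-off is exactly
    the open problem being described.
    """
    short_term = 1.0 if clicked else 0.0
    return alpha * short_term + (1 - alpha) * diversity_gain

# A clicked but "more of the same" item vs. an unclicked item that
# broadened the user's listening:
print(shaped_reward(clicked=True,  diversity_gain=0.0))  # 0.7
print(shaped_reward(clicked=False, diversity_gain=0.9))  # 0.27
```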
And we're not there yet, which is kind of exciting, because this also goes back to my passion, which is metrics and user engagement. And at Spotify, how we are proposing to go at this is to rethink how we do the optimization. And we are investing in one particular technology, which is reinforcement learning, because we believe it will allow us to do that, while also allowing us to interpret the various models, for transparency. Those are challenges, but they are challenges that many are trying to address now, and this is one of our focuses from the personalization perspective: not just the next listening behavior, but the long-term listening behavior. And this is what will make personalization more successful.
Yeah, I like how you tied it together in terms of,
you know, the work that you're doing in personalization, but taking it back to the
work that you do in user engagement, not just looking for user engagement in the near term, but really the long-term behavior that, you know,
you're trying to optimize for. That's great. One of the things, Mounia, I also wanted to touch upon is just in terms of your career: you've spent a lot of time, as you said earlier, doing research in both academia as well as in industry. And in industry, when you're working for a research organization, or heading one up like you do, how do you strike the balance between optimizing for what is bringing business value versus researching for the ability to actually do groundbreaking work? I know it's a discussion that has happened oftentimes, which is, how do you balance the two worlds?
What is your philosophy around that?
That's a very, very good question.
It's a question that many research organizations always keep asking themselves, revisiting, and so on.
The way we're doing it, and my answer is going to be maybe very Spotify-specific. We are trying to address a challenge that is relevant to Spotify products.
At the moment, this is what we are doing.
For example, how do you optimize for long-term and short-term?
We have the support of the product, the engineer, and so on.
So we're working with them.
So we are trying to make better product,
better algorithm, better methodology for evaluation purpose.
So we do research to improve the product
or to build a better product.
And as a byproduct of this,
we, for example, publish. We also publish with our colleagues who are not necessarily in research. Really, it is a bit of a challenge, a bit of a balance. We are lucky that there are a lot of really, really interesting research problems at Spotify, so we can jump on many, and this has allowed us to really help on various occasions and so on. So maybe it's the right time for the way we work now, and our contribution has been very much valued, at the same time, because those are often maybe not necessarily brand new problems, but they're new in the context of audio listening. So that's why there's a good choice of research that needs to be done.
And this is very much valued by the product team,
by the business.
So that is our current philosophy.
Whether it will be the philosophy in two years' time, I don't know. But I can still add that there is this investment in reinforcement learning. We know this is not going to be tomorrow, but it is about where we want to be, for example, in five years, with which technology, and how we progress toward this vision.
So we always try to break the long-term research needs into steps. And this has allowed us to come up earlier with proofs of concept. Okay, this is good, this is less good; okay, let's move this way and not that way. And finally, it is to continue to discuss with the business what the research is doing, and not just work on our own with no communication with the product team. So hopefully that gives you a little bit of how we're trying to make it work, but it's likely that we'll evolve as we grow. At the moment, this is how it works, and it's working pretty well.
Yeah, I like how you say that. I mean,
you know, iterate through the ideas that you have, but also get the validation working closely with
a product or an engineering team to see, you know,
if you're actually sort of moving in the right direction, but also fuel the needs of research
in this area, which is so nascent in and of itself. But I know, Mounia, that one of the other things I wanted to definitely talk to you about was that you have a lot of interactions that happen, you know, in the community overall, right? You participate in conferences, you're on various committees. One of the things I read about is the work that you've done around the Initiative for the Evaluation of XML Retrieval; you co-led that project. I'm just curious, you know, why was it important for you to do that, and what do you think is the value that you get from this sort of industry engagement and participation?
So this is going back to me as an information retrieval researcher,
interested in evaluation.
Evaluation is very big, again, in information retrieval.
We have the TREC initiative, which runs every year with a number of tasks, people building test collections, working out how to evaluate and how to compare approaches. So we know how everybody, or how the state of the art, is advancing in a number of areas. For this one, INEX, we were addressing a particular problem, which is that at that time XML was the big thing. Everything was going to be represented in the XML format. And then there was this notion: we don't need to return the whole document, we need to return just a bit of the document. And we found pretty quickly that the way we did evaluation with precision and recall just didn't work out. And it was interesting, a lot of people got interested in that area; research always goes in phases. At some point, this is a popular problem people are trying to solve. So we had the opportunity to build a group, which was international, across the globe, coming both from industry and academia, to try to solve this problem. I worked on XML retrieval; we call it focused retrieval. How do I know that my system, my approach, is doing well? Doing this helped us really make progress together. So it was not one person deciding that's the way to do it, and it also allowed us to build a strong community among people interested in a particular research area.
It also allowed master's students and PhD students to take a topic for their dissertation, contribute to it, and be able to validate their work, and so on. Without that, they would not have been able to validate it.
Having a mixture of academia and industry is always good.
Industry brings perspective that maybe in academia
we're not very much aware of.
Those are the constraints we're having.
Those are the questions we're having and so on.
Those are sometimes the data sets that we're having.
And again, conversation is everything. Academia often comes with very, very strong models, but scalability becomes an issue, and all those kinds of things. So having those conversations allowed us to really make good progress in this particular area, but also to build a strong community of researchers who are still out there, all over the world, in various places, in academia and industry. We should also view research as education; people based in academia especially are growing the next researchers. And initiatives like TREC and INEX allow us to do that too. There's also the education part, which I'm also passionate about.
I can tell, just by the passion with which you speak of this, the tremendous value that it brings, not just to you, but to the community overall. How does one find these
opportunities? How does one engage, whether it's somebody early in their career or somebody who's
in industry or academia? What would you suggest?
Be open. I don't like this word, but it is an important word: find a way to network, to understand what the opportunities are. So it's beyond networking; it's networking with a purpose. It's important to know what is out there. Identify important research areas. If you're a student, this will happen partly by being part of a group in a university, a research group. If you're in industry, it will be a combination of what the needs of the business are and what is happening outside, and so on. So again, it goes back to this conversation: ask questions, attend important events like some of the conferences, and discuss and discuss and discuss and discuss. And something we used to do in my early, early, early days is to organize workshops. So if there's a particular area that is of interest, especially if it's a bit multidisciplinary, a bit of metrics, a bit of machine learning, a bit of design and so on, organize a workshop around it, because then one can bring in experts and really start to build an understanding of what one can do for the career, maybe the next one to three years or longer term. So again, a lot of it goes back to this conversation, talking to people; and organizing a workshop is a great way to really learn a lot, because it also pushes us to do this networking, maybe in a more constrained way.
Thank you for those, you know, those are very practical
and actionable tips. I'm sure that our
listeners will really appreciate. So Mounia, what do you do outside of work? What are your hobbies, or what are your passions?
I have mostly two. I used to like doing a lot of weight training in the past, but I damaged my back, and it took me a while to find something to replace it. So I started yoga a bit more than a year ago.
And especially with the pandemic, you can't do much.
So a lot of the yoga went all online.
So I really took it seriously.
And this is becoming a hobby, to the point that I'm starting to read books about the value of yoga. It's both about the actual yoga exercise and the kind of well-being and spirituality that comes with it.
And people who know me know that I really like Prosecco.
I won't call it a hobby or a passion,
but it's something that I like very much.
That's great.
And the part that you bring up about well-being is so important
in these times, especially. This has been an excellent conversation, Mounia. For our final bite, I'd love to understand: what are you most excited about in the field of personalization research or information retrieval, the areas that you're interested in?
What excites me is this: you have information retrieval; you have this whole area which is very related but still different, recommender systems; you also have voice, how people now interact with online systems. If you put this into an ecosystem... so personalization is very much about the user. The user has a need, or the user wants to get things done, or wants to listen to something. And we forget the content provider, for example, in the context of Spotify, the artist. So that's what I call the ecosystem. And all of this is related to what I refer to as downstream interaction.
Interaction is not just a click.
It's a relationship, in the context of at least Spotify, but also elsewhere, between a user and content.
And there are various ways the interaction is evolving. We now have the whole area of conversation. And so it's like now we're trying not to evaluate just a click or an approach.
We're trying to evaluate how users interact with content during the journey.
And this is just fascinating because success is not just now.
It's a success of a journey.
And I'm very much looking forward to it,
and I'm already starting to look into this.
What does the success of a journey mean?
And this is super, super exciting, at least for me.
I think, you know, what you say is relevant,
of course, in the field of personalization research,
but overall as well,
there's so much depth to that statement,
success of a journey.
Mounia, thank you so much for speaking to us at ACM ByteCast. We thoroughly enjoyed it. Thank you very much.
ACM ByteCast is a production of the Association for Computing Machinery's Practitioners Board.
To learn more about ACM and its activities, visit acm.org. For more information about this and other episodes,
please visit our website at learning.acm.org slash bytecast.
That's learning.acm.org slash b-y-t-e-c-a-s-t.