Programming Throwdown - 122 - Building Conversational AI's with Joe Bradley
Episode Date: October 25, 2021

When you ask Alexa or Google a question and it responds, how does that actually work? Could we have more in-depth conversations, and what would that look like? Today we dive into conversational AI with Joe Bradley and answer these questions and many more. Thanks for supporting the show!

00:00:15 Introduction
00:01:24 Introducing Joe Bradley
00:04:44 How Joe got into Conversational AI
00:21:35 Zork and WordNet
00:27:48 Automatic Image Detection/Captioning
00:39:31 MuZero
00:45:27 Codex
00:50:15 GPT and businesses
00:55:16 Artificial General Intelligence
01:00:05 What is LivePerson
01:16:30 Working at LivePerson
01:21:18 Job opportunities at LivePerson
01:27:04 How to reach Joe
01:32:40 Farewells

Resources mentioned in this episode:
LivePerson: liveperson.com
PyTorch: pytorch.org
TensorFlow: tensorflow.org

★ Support this podcast on Patreon ★
Transcript
Hey everybody. So one of my most fond memories
is playing this game, this really esoteric game called Essex, which I think is named after a
place in the UK. But it was this game where it was like Zork, if you are familiar with that,
but it was one of these text-based story games where you would type, you know,
go north, go west. And the thing that amazed me, I mean, I was maybe seven or eight years old.
The thing that amazed me was that I could, I felt like I could just say anything to these computer
players and that they would respond with really interesting things. And it ran on a single floppy disk.
And so, you know, honestly, to this day,
it kind of blows my mind.
But as a child, it just completely blew my mind.
I mean, I was wondering if there was a real person somehow involved like, you know, in real time.
And I've always been really interested
in conversational AI.
And I feel like we're all just exceptionally lucky
to have Joe Bradley here,
who is the chief scientist at LivePerson, who is an expert in this field and really going to talk
to us about how conversational AI kind of works, a bit of the history behind it and how kind of
LivePerson and other folks do it today. So thanks so much for coming on the show, Joe.
Hey, thanks for having me. I'm really excited to be here, and even more excited about the fact that you just brought up Zork. For me, that was the one. It wasn't the, you know, the Essex one, that I didn't know about. But yeah, I spent hours doing that as a kid.
And it's funny too, I have children myself now, and I see them, you know, just last week I went downstairs and they're not playing Zork, right? But they're playing the equivalent on the Amazon device, on the Alexa.
Somebody's coded up a little like exploration game and now they're talking to that.
But it's essentially the same interface, just with the voice instead of text.
And it's still fun for them, which I think is like really interesting, even in this age
of like, hey, does everything have to be these like cool looking graphics?
And, you know, what's the UX got to be like, there's still something powerful about
just talking to a machine, like you can like, like you want to, and in a way that's easy in a way
that's natural for you. And then having the thing respond and tell you a story, right? That's,
that's cool still. Yeah, I think that, you know, when it's abstract like that, you kind of, you know, your mind fills in those gaps and fills them in a way that's like really interesting and pleasing to you. Right. And so that's kind of the, I think, who's the Scott McCloud, I want to say, if I'm getting that right. But he wrote a book about understanding comics. And he explains that the reason why comics are,
originally they're meant to be written really quickly,
but another reason why comics work so well as a medium
is that they're so abstract that when you watch,
for example, Dilbert,
you kind of put yourself into Dilbert
versus if Dilbert was photorealistic,
you wouldn't really be able to do that as easily.
And so yeah, games like Zork and Essex, they do that.
Your mind fills in all those gaps.
Yeah, they hit this spot for us that's like,
if you think about how evolved we are to have conversation, right?
And how important conversation is to us as human beings, right?
It's the fundamental way in which we created efficiency, resources, wealth, right?
All the things that we have today
are founded on the abstractions
that make conversations possible, right?
Language itself.
And then the ability to have those
in a two-way or multi-way dialogue
so that you can build more
than just what's inside your own mind, right?
So I think in some ways it's like surprising
when I look at my kids playing Zork on So I think in some ways, it's like surprising when I
look at my kids playing Zork on that device. And in other ways, it's like, oh, no, wait,
we're evolved to do this. Like, this is what we want. We want to build stories. And now it's just
interesting. It's kind of interesting that last 30, 40 years, we've begun to be able to do that
with inanimate objects, right? And that like, we sort of know, and we sort of don't care,
and it's sort of still fun.
Yeah, it's amazing. So I want to dive into, I'd love to know how these things work, how Zork works, and then also fast-forward to the future.
But before we get into the tech side of it, you know, what is kind of your background and what led you to building conversational AI at LivePerson?
What's that story like?
Yeah, it's a little, I don't want to say topsy-turvy
because it's not a bad story in my mind,
but it's definitely a little all over the place.
So if you go back far enough,
you start getting to an interest in opera singing
and a major in English literature
and a whole bunch of stuff that doesn't fit very well with where I am today, at least not in most people's minds.
So for me, I tend to have these five-year moments in my life where I get really, really passionate about something,
and those end up being a stepping stone to kind of somewhere else. And so when I was, you know, at an undergraduate school and, and starting,
you know, working as a classroom teacher in San Francisco, I got reminded about how, you know,
I was teaching math and science to, to seventh graders, which is by the way, the hardest job
I've ever done by a factor of 10. It sounds difficult. I remember middle school as being one
of the hardest points. I've only ever been in one fist fight in my entire life other than martial
arts, which doesn't count. And it was in middle school because somebody, oh, I was playing
basketball and I blocked somebody because I'm a very tall, but I'm not particularly athletic,
but I do have the height. And I blocked someone and he was upset. And actually, I wouldn't really call it a fist
fight. He just punched me in the face and I was really upset about that. But yeah, but I think
middle school is extremely challenging age. And, you know, Patrick and I also have kids. And so
that's that's an age I'm not really looking forward to, but we'll have to get through it.
Yeah, I mean, I will not forget the day one of my students, she was this probably six-foot-tall, 12-year-old girl, looked at me and she said... well, I was like, just getting started. Like I said, I was this undergrad English major
who was like, I want to go be helpful.
I want to go do something meaningful in a, you know, in, in a city.
And I just moved to San Francisco. So I got this job teaching and I was a couple
months in, I wasn't, I was not good at it yet. I hadn't figured it out. And she looks up at me
and she says, Hey, Mr. Bradley, this class is just like WWF Smackdown. And I like, I mean,
I hung my head that day, because she was right. I was like, wow, this is not good.
And I mean, thankfully, the institution of public school in San Francisco was very helpful.
They brought me this mentor teacher who who had all this knowledge about how to work with kids.
It's really just like like simple things that you'd never think of. Right.
Like don't talk about what one kid isn't doing well, stand next to them and talk to the kid who's doing it right and praise them.
And then the other kid's going to suddenly look up and want to do the right thing.
You know, all these sort of tricks of the trade that were just like changing, life changing for me.
But anyway, yeah. So to get back to your question, you know, I sort of rediscovered a love I had for math.
I studied a bit in college, and I was doing teacher training on, you know, adding negative numbers or something like that.
How do you how do you teach a 12-year-old how to do that?
And what are good visualizations and all these things?
And it just brought back to me like, oh man, I had so many questions. So I went back for math in the State system, which I think deserves a plug because I feel extremely fortunate to have
been able to go get an advanced degree in mathematics and pay something at the time
like 800 bucks a semester to do it.
Oh my gosh, amazing.
It's more now.
I know it's more now, but I think it's still a pretty good deal.
And back then, that's life-changing stuff, right?
Like these are cultural institutions, you know, that we have built for ourselves that affect people's progress in life dramatically.
And so I think that's a special institution for me; that was a special time. And that kind of led into physics, right? Like, I was doing the math and I was like, well, I want to do something with this math. As fun as the abstraction is, and I've always kind of been attracted to abstraction, I wanted it to be good for something. So I ended up going up to University
of Washington, moving up to Seattle and working on a PhD in physics. And that kind of led me to,
you know, a lot, obviously more applied mathematics, obviously beginning to work in
statistics much more deeply because I ended up doing a bunch of experimental work. And,
you know, in addition to turning bolts and building instruments and firing x-rays and lasers and stuff, like you've got to go do the data analysis. And we started to think that there
were some, you know, more advanced statistical analyses that would help us understand convolutions
of data and all these things that, so that started to filter in. And as I became a full-time scientist,
I just found a lot more passion for that when I was
working at the National Labs.
And it was one sort of component of a larger decision that led me to change tracks and
end up kind of in the industry I'm in today.
And that was kind of by way of Amazon, right?
So I sent all these resumes out because I was interested in, I'm getting into machine
learning.
It's a live field.
It's new. I had been reading the papers. I was like, this is a fun time, as opposed to
physics, which is like hundreds of years old and everything's super sub-disciplined and very,
very narrow and very, very tight. And it takes a long time to approach anything state-of-the-art.
I was like, machine learning, well, this is different, right? This is new. This is live.
You can learn it, not quickly. It's not easy. I don't mean to say that, but you can get to the
front of the discipline much more effectively. And I think there's just a lot of ways in which
knowledge is shared in the ML community and they're pioneering this academic research and
shared code and all these things that wasn't really happening in physics, that's starting to now.
Yeah, I think you're seeing the same thing with economics. So I've always been really interested in economics. And what you're seeing is ML start to come in, because ultimately economics, especially microeconomics, is ultimately about people and the way they behave. And so you need to,
it's statistical by nature because we're not going to have a human brain processor. So we're going to
be estimating that. And so, yeah, you're seeing ML make a ton of strides in that field. And I think
at some point ML will diffuse and just be a part of all of these fields. And ML, as we know it now,
will become more of
like a core, you know, how do we make the learning really well? But I think we'll get to the point
where maybe even conversational AI won't even be ML. It might be part of like a speech understanding
or something like that. Yeah, I think that's right. I mean, I think if you really back up
and think about what machine learning is, and even if you think about that in the context of the neural nets,
what are they really doing?
They're taking these qualitatively
different kind of problems
because they have such a high degree of dimensionality
and such a weird set of correlations
across the variables that describe them.
And then they're trying to find these
really, really useful local potential minima in them.
These really, really useful like solutions,
whether or not they're, you know,
the perfect or the global, you know,
and I know there's a whole branch of ML
that's devoted to finding, you know,
convex problems and all that stuff, right?
Like, but I think the way the field's evolving
is we realize that like real problems
are like way too dimensionally complex
to solve in those exact ways
or to even approximate towards those global solutions.
And we're all just okay with sort of these local solutions. But any field that has anything approximating a real problem, and most good fields do, is going to benefit from that basic approach. I mean, obviously it creates new problems, like, well, how do you know if your answer is good enough, and what does good enough mean? But for real problems, those are questions you just have to cope with.
Yeah. So you don't have a CS degree, correct?
No, no. I'm a self-taught research programmer, at the start anyway. And I got a little bit of a
crash course about how bad a programmer that made me when I worked at Amazon, for sure.
So yeah, let's go back a tiny bit.
How did you interview at Amazon?
Because this is something we get asked constantly.
Someone says, I have a background in physics, or I have a background in economics, or I have a background in chemical engineering.
And I want to interview at Amazon.
How did you get that right mindset to be able to do that?
Yeah.
So, so it was a case of like, there's a little bit of luck in there, but I also was intentional
about how I did it.
So I can't claim this will always work, but I think it was a relatively smart strategy.
So I'll explain it a little bit.
So I did, as I said, I sent out a bunch of resumes and heard very little back, right?
It's like, who's this?
You know, basically I think people would see that resume and be like, who is this, what's this national lab business, he's doing x-ray experiments, get this thing off my desk. That was kind of an easy response for most sane humans to have.
Or sadly, it was maybe even an AI, so a person didn't even have an opportunity to see
the potential, right? Could be. I mean, we definitely, our recruiters at LivePerson,
right? They look at a lot of resumes still, but I think there's search queries. You know, you got to be smart
about how you set up your resume. You want to have the right search terms on it, all this stuff.
Like you got to get past, you got to get a human looking at it. That's true. Yep. And then what I
did is I wrote, even though it wasn't asked for or common at the time, and it's probably less so now,
I wrote what I thought was a thoughtful cover letter explaining what the heck I was doing there and why the heck I thought this made sense.
And so I tried to break down my experience. In particular, I spent, you know, like a while crafting this paragraph about how I had been doing these experiments in a national lab context where we'd go in and we'd have limited x-ray beam time and we'd sit down and we'd have, you know, 24 hours to conduct the experiment.
And we had to be very thoughtful about how we planned it. We had to be very good at troubleshooting
problems. We had to, you know, we had to really deliver in this very tight, constrained environment
or we were not going to be successful. And my pitch was like, here's, look at what I've
delivered. Here's the output, the output of these publications, this advance in this field, right?
So I tried to get, I didn't want to make that like a four page monologue.
Right. I had to whittle that down to, you know, four or five good sentences.
And there was some recruiter. I knew her at one point.
I've forgotten her name now because it's been quite a while.
It'll probably come to me after the after the call.
But there was this woman who who was a recruiter who saw it and read it and just thought like she was just like, I think this could work, right? I think we should talk to this guy. And I did talk to her. As I said,
I talked to her, you know, later after I joined Amazon, I had a couple of conversations with her.
And she kind of relayed this to me that reading some of that text was an important part in her
decision making. It wasn't the only thing, of course. But I think that's smart to do. I think,
you know, obviously, you got to get past the sort of AI or the querying, all that stuff. But then you should remember,
there's a, if you remember, there's a human on the other end of this and they need me to make sense.
If I'm not coming from what they see and what they expect, they need me to make sense of why
my experience fits and how that story should go together. We're all narrative machines in the end,
so they need me to give them that narrative and take the time and the space in your resume or in a cover
letter, however you do it, to make sure that story is told and make sure that story is clear, but also
make sure it's like super concise and tight. Like I can't make that point strongly enough. If you
write well, that also says something. And you don't have a lot of attention. You should imagine you've got 30 seconds of attention
if you're lucky.
How do I tell that story that quick?
Yeah, absolutely.
And so when you went to Amazon,
were you working on conversational AI on day one,
or is that something that you,
a team that you transitioned to later?
So I ended up doing a number
of natural language applications at Amazon
and at Nike subsequently. A lot of it
related to what kinds of interests they have in products, what's useful to them about products,
things like that, what their website experience was like, a lot of analysis of text.
And I started building conversational AI with some of the
experts we have at LivePerson. My real first foray into conversational AI was with LivePerson. That's
where I began to build. And we've brought on a lot of people. So I consider myself a little bit
more of a generalist. My background's more statistics, machine learning, et cetera. I
obviously have learned a lot about conversational AI in the last three, four years working at LP.
But we, of course, have people that have spent their whole careers on it as well.
Got it. Okay. So if you could kind of lay the land for us here. So there's natural language
processing, which I think, and please correct me here, is an umbrella that covers, let's say, translation, maybe embeddings or semantic understanding, and conversational AI sits somewhere under that?
I don't know that my, you know, my semantic description of this field is any more accurate. But yeah, I mean, by and large, NLP is like a big box. And inside that box goes
is more accurate. But yeah, I mean, by and large, NLP is like a big box. And inside that box goes
like the science of processing natural language, right? Like kind of quite literally. And
conversational AI is a kind of processing of natural language. It's a very specific one, obviously a very important one, we think.
But yeah, yeah, I think that's fair.
You also have inside of conversational AI and or inside of natural language processing, you know, you have the subfield of natural language understanding.
Right. Which is which is, again, very important to conversational AI.
And that's sort of this, you know, again, it's another one of these ones that like is what it says it is.
Right. So it is the science of trying to, you know, use algorithms to create, and what do we mean by understanding, essentially a structured interpretation of the natural language that we use. You go from this unstructured text, from that format, and sort of begin to categorize and adorn these natural language utterances with things that are meaningful to us in the abstract.
Yeah, that makes sense. And then in the other direction,
then you're talking about natural language generation, where you say, I have this structure, this JSON, which has this
information that I know somebody needs. The movie starts at this time, or this is when
your pizza order is ready, and it has to get turned into text that people would, it would
feel natural to them. That's right. Yeah.
You've got that, you know, the sort of semantic box of metadata, however you get it, you know,
wherever that arose from some system did some processing and, and said like, here's, here's
the semantics or the meaning of what I want you to say, machine. And then the NLG portion: the input is that, the output is spoken language or written
language or whatever it is.
The other piece in the middle there that people can slice and dice in different ways, and
it's still a very open sort of research question, is really around all the dialogue management.
So you have the text coming in or the words coming in.
Let's assume it's just written text because then you can always kind of model the speech
and the voice side as at the other end of this pipeline.
So you have text coming in.
You have this understanding capability whose job it is to decide what the thing that was said means, I guess that's the best way to say that. And then that comes in a structure now that interacts with the dialogue manager, or, you know,
and again, that can kind of be a few pieces. But broadly speaking, dialogue manager's job is to
take that information, understand it in the context of, you know, what's going on in the
conversation up until now. Some managers do a lot of that and some managers do none of that,
but logically that's the responsibility
of this piece of the system.
And then to turn that into a next action.
And then that next action gets passed out
as like one element of that next action.
There may be lots of things that happen, right?
It may go call an API and check your balance
for your bank or whatever it is, But it's also going to result very likely or most of the time in wanting to send
you some text back. And so in a system that has a decoupled NLG component, as you mentioned before,
it will send back now this metadata blob and the NLG's job will be to go and turn that into
language and thus kind of conversation, right?
So those I think are the, like in big animal letters, so to speak,
those are kind of the big blocks here as far as a conversational AI system goes.
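[Editor's note: to make those big blocks concrete, here is a minimal sketch of the pipeline Joe describes. Every function and field name below is an illustrative stand-in, not LivePerson's actual system.]

```python
def understand(text):
    """NLU: turn unstructured text into a structured interpretation."""
    if "balance" in text.lower():
        return {"intent": "check_balance"}
    return {"intent": "unknown"}

def decide_next_action(intent, history):
    """Dialogue manager: pick the next action from the intent plus the conversation so far."""
    if intent["intent"] == "check_balance":
        return {"type": "api_call", "name": "get_balance"}
    return {"type": "reply"}

def generate_text(action):
    """NLG: turn the semantic blob of metadata back into language."""
    if action.get("name") == "get_balance":
        return f"Your balance is ${action['result']:.2f}."
    return "Sorry, I didn't catch that."

def handle_turn(text, history):
    intent = understand(text)
    history.append(intent)                  # dialogue state carries the context
    action = decide_next_action(intent, history)
    if action["type"] == "api_call":
        action["result"] = 42.00            # stand-in for a real backend call
    return generate_text(action)

print(handle_turn("What's my balance?", []))  # -> Your balance is $42.00.
```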
Yeah, that makes sense.
So I think if I remember correctly, Zork didn't have conversation.
It was more of like a 'pick up sword, hit troll' kind of thing. I mean, it's been a long, long time since I played that. But there have been interactive fictions with conversation from a long time ago, like this Essex one, and I'm sure there are other ones of its era. And trying to unpack it today, I think what they were doing is they probably had some WordNet ontology,
you know, on the disk. And so I think maybe what they were doing was, you know, they were taking
your sentence of what you told the avatar and they were looking at each word and they were maybe traversing up
this WordNet. So just a bit of background for folks: WordNet is basically a tree of the English language, and it's been around for a very long time. I think the root word is entity, if I remember correctly, so everything is a child of entity. And, you know, there's objects and abstractions. So if you were to take,
like happy, happy would be an emotion, which is an abstraction, which is an entity. And so you
could kind of go up this chain. And so if I asked this avatar in this game from the 1980s, you know,
are you happy? Or are you, I don't know, I guess elated or something like that. What they would do is they would go up this WordNet chain and then they probably by hand kind of thought about what to
do if they get these really high level words and they were able to kind of cover, you know,
a pretty broad range there. And so my guess is there's something like that and they were just
pulling out individual words. And then if they see that word, then they know, okay, this person asked me about the key or the person asked me about being
sad. So start the sad narrative where this person can go on this quest to make me not sad anymore, things like that.
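[Editor's note: the hypernym walk being described maps directly onto how NLTK exposes WordNet today. A minimal sketch, assuming `nltk` is installed and the WordNet corpus has been downloaded:]

```python
from nltk.corpus import wordnet as wn  # assumes: pip install nltk; nltk.download('wordnet')

def hypernym_chain(word):
    """Walk from a word's most common noun sense up toward the root, 'entity'."""
    synset = wn.synsets(word, pos=wn.NOUN)[0]
    chain = [synset.name()]
    while synset.hypernyms():
        synset = synset.hypernyms()[0]   # follow the first (most common) hypernym
        chain.append(synset.name())
    return chain

# 'happiness' climbs roughly through feeling -> state -> attribute -> abstraction -> entity
print(hypernym_chain("happiness"))
```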
Yeah, no, I think you described a lot of,
you know, what natural language understanding, you know, was and kind
of had to be for a long time, right? And there were sort of techniques like that,
you know, a lot of a lot of like different, very ornate ways of dealing with language,
because language is exceptionally complicated, and dialogue, even more so, right? Like just the
understanding piece of this has been an extremely important and huge research effort for a long,
long time, right? And I think a lot of times we sort of claim it's a lot more closed down today
than it really is. Like that's sort of something interesting to talk about too, is like how good
are we at natural language understanding? How good are we at dialogue handling? And my answer is,
we're not as good as we think we are, neither in the academic nor in the professional context.
And part of doing this work well is beginning with that recognition and then realizing that we have
to build tools and capabilities to help us get to the level set where we kind of, where many of us kind of already believe we are. And I think there's a lot of reasons for that. There's hype
in both the professional and the academic context, which would also be interesting to talk about.
But I do, but you do see, you know, material advances that are very, very meaningful in the
same way that we saw for computer vision, you know, starting back in 2006, you know, the beginning of a neural net approach
and the first kind of, you know, early autoencoders, right, that start to change that problem around. And then the development of convolutional neural networks, where the network sort of satisfies some of the symmetries of the system itself.
And that allows it to work really well and kind of reshape the functional
capacity of these networks well enough so that they make really good guesses at these approximate
local minima we were talking about before, you start to see some of that happen in the last five
to 10 years in the natural language space. But it's frankly harder, in my opinion, because
I can talk about what the symmetries of physical space are, right?
Like, you know, move left, move right, move up, move down,
rotate, reflect.
Yeah, I mean, if you're training something
to like play chess or something and you flip the board,
you know, on its axis, your strategy doesn't have to change
or it's a mirror strategy.
And so you could even just directly, you know,
hack that into the system and now it only
has to learn half as much, but for language, it's totally unclear what that symmetry is.
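[Editor's note: a toy version of that board-flipping idea, as my own illustration rather than anything from the episode. If a game's strategy is mirror-symmetric, you can hard-code the symmetry as data augmentation:]

```python
import numpy as np

def mirror_example(board, move_col, move_row):
    """Reflect a board position and its move label across the vertical axis.
    For a mirror-symmetric game this doubles the training data for free,
    so the model effectively has to learn half as much."""
    mirrored_board = np.fliplr(board)             # flip columns left <-> right
    mirrored_col = board.shape[1] - 1 - move_col  # relabel the move to match
    return mirrored_board, mirrored_col, move_row

board = np.zeros((8, 8), dtype=int)
board[0, 2] = 1                                   # a piece on column 2
_, col, _ = mirror_example(board, move_col=2, move_row=0)
print(col)                                        # -> 5
```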
Well, yeah.
I mean, what is a symmetry in language? Is it like a synonym or an antonym?
Yeah, right.
Right.
And so, so how do you teach a computer to understand that?
Right.
And I think what, what, if you start to break the problems apart like that, like it starts
to become clear, or at least you can make a good intuitive explanation for why attention mechanisms are really meaningful
and why the neural nets for some of these modern applications have developed the way
that they have, because they're trying to solve some of these symmetry problems in the
same way that you and I do.
You know words through context, right?
And we're really good at recognizing which elements of context actually
impact each other, even in a long form piece of text. So I think we've started to learn some of
that. And I think obviously there's been a lot of major advances recently, separating out what's
kind of hype or what it means to have advanced with something like a GPT-3, for example,
like what that really is good at versus what it's really not good at.
I think that's actually very hard for us in some ways, because of the nature of what it's doing. It's producing, you know, such compelling text on its own that it's hard for us not to imagine, you know, some kind of Wizard of Oz entity back there that's all-knowing and doing that, and really tease it out.
Yeah. I thought, there's one piece of research, I'm sure it's a whole body of research, but I thought it was really fascinating. I can't remember if it's using GANs or transformers, I don't totally remember, but it would caption an image. So you
would give it an image and what would come out would be, you know, a girl is sitting on a red
swing, you know, talking to a boy near a tree. And I thought that was so cool. I mean, when I saw that,
that blew my mind. And I felt like, I mean, well, actually, it would be great to hear your opinion. I mean, it could be hype, because I don't know the tech that well, but I feel like we reached a milestone when I saw that paper.
No, no, I a hundred percent agree with you.
So the automatic image captioning is an example. It's almost an anti-example of like GPT in some
ways, because of what that's doing, and there's a long way to go there. I mean, you know, it's not going to handle really complex tasks with the eloquence that you and I would,
or any human, but it did something fundamentally different, right? Which is to do that research,
and a lot of this came out of Google, you know, in sort of the image search space, like there's,
I think, a lot of genesis there. But to do that work well, what had to happen is you had to develop a shared mathematical
representation of the visual image and of the text, right? And so the way you typically train
models like this, I mean, it's been a while since I looked at this research, so this could have
changed. But the last time I looked at it, which was several years ago, the way you typically train
models like that is you have like, you know, two vectors, you know, one representing the, you know, the sort of vectorized form of the speech or the language, one representing the vectorized form of the image.
And then you're trying to optimize their inner product. Right. So they're getting closer and closer together.
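[Editor's note: a rough sketch of that inner-product objective, as my own toy illustration with the two encoder networks left out. Matching (image, caption) pairs are pushed to score higher than mismatched pairs in the batch:]

```python
import numpy as np

def cosine(u, v):
    """Normalized inner product of an image embedding and a text embedding."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def contrastive_loss(image_vecs, text_vecs):
    """Assumes image k and caption k are the matching pair, so row k of the
    similarity matrix should peak at column k."""
    sims = np.array([[cosine(i, t) for t in text_vecs] for i in image_vecs])
    exp = np.exp(sims)
    probs = exp / exp.sum(axis=1, keepdims=True)   # softmax over candidate captions
    return float(-np.log(np.diag(probs)).mean())   # cross-entropy on the diagonal
```

Training the two encoders to shrink this loss is what pulls the image and its caption into the shared space discussed next.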
Yeah. Just to double click on that. So we had an episode, our last episode was with the CEO of Pinecone, and Pinecone is a vector
similarity database.
And we talked about how you could take sort of any classification problem, or even in
an unsupervised or self-supervised way, you can kind of take a chunk of data and you can
create an embedding or a relatively low-dimensional representation of that.
And what Joe's here talking about is,
you can actually do this with two different media
and try to join them together
when you've seen them together on the same website
or if you have a hand curated set.
And now you end up with two things
that are projected
into the same space and then you can go backwards with one of them. Yeah, that's exactly right.
Sorry for not giving a little more background there. But to me, that's fascinating because
it's like a step in teaching a system to have a multimodal representation of reality.
Right. So now it's now the image and the text are in a shared space,
as you say, a shared mathematical space,
a shared semantic space, which is how you and I work, right?
Like we don't differentiate, you know,
like the concept of red dress, you know,
the words and the picture of that in a physical red dress,
that's all, you know, those are all of a piece for us, right?
Those are all related, like obviously very strongly, but it's just different representations of the same
fundamental object. So I think that, and that's very much not what like GPT did. And don't get
me wrong, I'm not trying to talk trash about GPT. I think it's an amazing advancement. In some ways,
I think we should think of these big language models as like national resources that we're
building because they,
I mean, it literally takes as much power, like you can measure the power required to train one of these things in units of Hoover Dams, and that's actually a reasonable scale. I forget how many Hoover Dam days it is to train GPT, but it's not like
0.0005 or anything. It's like a real number.
Yep. Yep. And you can transfer learn off of it. I think that's a beautiful way of looking at it. It really is like a national treasure that we've all spent a ton of time and energy curating.
And now everyone can benefit.
Yeah.
And with a whole bunch of infrastructure required to do it, right?
You can't do that without the data centers.
You can't go do that without, you know, industrial strength power lines going everywhere, right?
Yada, yada, yada. But GPT is different, right? What it's trained on is not,
and it's not trying to find this multimodal representation, right? It's trained on basically most of the text of the internet, right? So it's a window into what for us is like a projection of reality onto text, right?
That is GPT's reality, right?
That's what it knows, right?
Which is why, you know, and we start to think about it like that, like the ways in which
the model is amazing and the ways in which the model is confusing, at least to me, start
to make a little bit more sense, right?
You can ask, so you think, all right, if I train this model on this and I ask it who the president of the United States is,
and in 1815, it's going to give me a good answer because it's got all these good relations. It's
got a bunch of techs to work with that could tell it that answer. I mean, obviously, it's an amazing
advance that we're able to synthesize that into a system that can make the inference. But then it's also like if you go and ask it,
who was the president in 1705, right? Before the country was instantiated, it'll give you a
reasonable answer of a person who's kind of sort of presidential, right? It could be like Ben
Franklin or whoever, that's probably too early for Ben Franklin. But yeah, you can go and sort
of trick it with these questions and these premises.
And it doesn't do a great job, you know, without some further prompting and some further help in like understanding that that question doesn't make sense in that context,
because it's this it's this really big and sophisticated association machine that doesn't have larger political context,
like doesn't really understand what it means in the same
way that you and I do for some of these historical events to have taken place. It's, it's really
doing much more associative work. And a lot of that's that thin layer: if language is kind of a membrane of our reality, right, you know, it's stuck on that membrane and it can't escape.
Yep. Yep. Yeah. I think, so correct me if I'm wrong, GPT-3 is trained
such that they take a string of N words and then they try and predict the next word, the N plus
one word. And so what you end up with is now you can give GPT-3 a sentence or a paragraph or what
have you. It will generate the next word
that it feels like belongs there. So what you just described is fundamentally what a language
model is. And GPT-3 is a language model, right? So mathematically, the definition of a language
model is given n words, predict the n plus first. And however you get there, however you want to do
that, that could be a set of rules in the background, that could be whatever, like some algorithms got to do that job.
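[Editor's note: as a toy instance of that mathematical definition, nothing this simple is what GPT-3 does, but it satisfies the same contract. A counting bigram model, where n = 1:]

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count, for each word, what tends to follow it."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """The language-model contract: given the preceding word(s), predict the next."""
    return counts[word].most_common(1)[0][0] if counts[word] else None

counts = train_bigram(["the cat sat on the mat", "the dog sat on the rug"])
print(predict_next(counts, "on"))   # -> 'the'
```

GPT-3 satisfies the same input/output contract, but replaces the counts with a transformer trained over most of the text of the internet.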
And then, the reason that's an important construct in the field is that
once you have that, it unlocks all these other things, right?
It's like, now I can turn that prediction and the underlying representations of the
text and the knowledge that fuel it into really many of the applications that we care about
today. So it's kind of a building block. And so GPT, and you train these language models in
different ways. You don't always train it, specifically train it only with that task.
That is the mathematical sort of fundamental definition of what task a language model has
to perform, but there's lots of transfer learning like different ways you get the model to be good at doing that. And then of course,
on the other side, there's a bunch of applications you'll go and you go and turn on for it. So,
but a lot of, yeah, a lot of this sort of self-supervised aspect of training language
models is really about, you know, making adjudications of how good or how well a word
fits in its context. And the nice thing about something like GPT-3 is that although you have to use, you know, Hoover Dam days' worth of energy, you don't have to
do any manual labeling because it's self-supervised. It's just scanning this data from the internet
and it's trying to learn the rhythm of this data and it can just see right
away, did I predict this word correctly or not, without any human intervention.
Yeah, no, that's why it's powerful and that's also why it's limited. There's only so much you
can learn that way, but there's a lot you can learn that way. And as a baseline for learning
other things, it's kind of the best one we've got right now. That's why people are so excited about it. And it's pretty cool. I mean, we didn't have anything like this 15 years ago. And now we have models that can tell us stories. The really, you know, rapturous story GPT wrote about itself is, like, an amazing way to illustrate that, hey guys, we're onto something new here.
Like this is where, you know,
we're in Star Trek territory kind of sort of a little bit.
Yeah. So, so bring us back down to earth. In my opinion,
I feel like there's two big pieces that are missing and there's probably a lot
more, but, but I'd love to get your take on it.
I think one piece that's missing is getting
uncertainty estimations from really any of these models. And so to your point, you say, who was the American president in 59 BC? The model should just say, well, I'm not confident about anything
here, right? And then the second point is having some kind of symbolic understanding to where, you know,
it can understand, you know, in a modular way, it can understand and compartmentalize,
you know, America as a concept and when that concept started.
And you can use like some first order predicate logic to say, well, you know, the question
is invalid because of these,
these sort of symbols. I think that we've gone to this sort of embedding soup, and because of that, we've lost the ability to think about things and reason about things in a logical, methodical, search-based way. And I feel like those are the two, at least
in my opinion, the two big missing pieces. Yeah, they're both really, really interesting and they're great points.
Let me try and like at least take on one of them. Maybe we'll get to both because I love to talk
about both. So as far as like the logic point that you make, I, so look, I agree with both your
points. First of all, I think those things are missing. I think the logical element or the higher order logic, we don't know how to teach computers to do well.
I personally don't think the answer is, at least not in as flexible a way as you and I do it when we sort of think through the universe as we do.
I don't think the answer is to construct a kind of symbology of it, right? Where these inferences are going to be mapped into it, and then there'll be some kind of logical computation that happens on the symbology, and then that's going to go back to the embedding space. I actually don't think that'll work. I mean, I could be wrong. What do I know? I'm just one guy. But the reason is, I think that space, like
that space of like, what is, what are the constructs and what are the abstractions that we use to make decisions at the higher order logic is itself just as complicated as the reality it's trying to simplify.
Right. In terms of the relations between the constructs and the boundaries, a lot of things are complicated because the boundaries around what is this construct versus not, like what's in the set
and what's not in the set are just hard questions. And so I think in the end, we're going to have to
find a way to teach a machine to construct representations like that for itself, right?
But there is going to have to be some notion or some parallel process that serves the same
function of the logical hierarchies that you and I would use. And like, you know, we teach humans
how to do that. Right. And, and, and sometimes, you know, when you get into like deep political
disagreements, like sometimes at the root of that is just a, like a, you know, kind of a misalignment
of some of these definitions of, of categories. I think just to riff on that, like one thing that I think paves away is the Mew Zero.
I don't know if you've been following Mew Zero from DeepMind, but DeepMind, their claim to fame
or their initial claim to fame was beating the world master at Go. So Go is this board game.
It's just a lot more complicated in terms of branching and other stuff than chess.
And they were able to beat Lisa Dahl.
And now a computer is the world Go champion.
But they didn't stop there.
The next thing they did is they built something called AlphaZero.
And AlphaZero removed all of the Go-specific logic.
And it treated it as just a field of...
Now, they did have to do some feature engineering, right?
So you can imagine, you know,
Go has just white and black stones.
Chess doesn't.
Chess has a whole bunch of variety.
And so you have to do some proper feature engineering.
But, you know, beyond that,
the rules of Go and all these tricks
were not in AlphaZero, and it was able to perform just as well as AlphaGo.
But what they did have written by hand is the game tree.
So when they're doing this game tree search, the mechanics of Go are built into that.
So in other words, AlphaZero, and now it's AlphaZero so it works on many games, will say: take this move, this Go move, this checkers move, this chess move. And that would go into some program that a person wrote that would make the move, you know, adjust the game appropriately, tell it if it won or not. And then that would conclude a simulation.
So they would simulate using the hand-coded engine, and after so many simulations they have a good action, right?
So then they took it to the next level with MuZero, where in MuZero, they don't even use the rules of the game. And so actually the neural network has to represent the rules of the game, and it actually has to do the simulation in the neural net. I think they're using like an LSTM or a transformer or something.
And so literally they've gotten to a point now
where you give it a go board and it thinks
for a while and it does all these simulations and it just comes back and it says, you know,
place the stone here.
And I feel like that they're starting to unpack that, that planning and that reasoning and
that symbology.
It's totally uninterpretable to us, but they're starting
to unpack that process, which I think is really exciting.
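[Editor's note: a loose, one-step sketch of the distinction just described; all the function arguments are hypothetical stand-ins, not DeepMind's actual interfaces. Real systems run a full Monte Carlo tree search with policy priors; this one-ply version just isolates where the rules live:]

```python
def alphazero_pick(state, legal_moves, engine, value_net):
    """Search step that uses the real, hand-coded rules to simulate moves."""
    def score(move):
        next_state = engine.apply(state, move)   # exact mechanics, written by hand
        return value_net(next_state)             # learned position evaluation
    return max(legal_moves, key=score)

def muzero_pick(observation, legal_moves, repr_net, dynamics_net, value_net):
    """Search step where even the game's mechanics are a learned model."""
    hidden = repr_net(observation)               # encode the board into a latent state
    def score(move):
        next_hidden, _reward = dynamics_net(hidden, move)  # learned 'rules'
        return value_net(next_hidden)
    return max(legal_moves, key=score)
```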
Yeah. I mean, it's kind of like if you had a child, right? And you asked them, like, you never told them how to play the game,
but you just let them watch, you know, a bunch of people play, like, would they like, how well
would they do in understanding the rules? And could they construct that abstraction? That's
probably like, you know, easier for some people, harder for others, but fundamentally possible, right?
We wouldn't see that as beyond us.
I think it's interesting.
Yeah, I think you're right.
I like that. But that playing field, so to speak,
or that area of, you know, board games like that
is so ridiculously simpler than, you know,
than any like sort of knowledge or human conversation
or language-based, you know,
kind of real application that we have.
I used to give these talks where I would
talk about like, okay, everybody who's learning a lot about machine learning, and I'm not trying
to put you in this box, you just referenced something in my mind, but everybody who's
trying to first learn or who kind of sort of first learns about how to get machine learning
done in one context rolls into the conversational context and they have kind of the same idea,
right? Which is, they're like, cool, I just need a feedback loop here, right? What I need is like somebody, you know, a person's talking to the computer
and like good thing happens or bad thing happens at the end.
And then I, and as soon as I have that, I'm going to like, going to do this self-optimization
thing because that's what machine, that's how machine learning works.
And there's a reason no systems work like that in the world. There are pieces of systems
that work like that in real applications, but there is no end-to-end trained dialogue machine
that means anything to any person that's used for real applications that is in that closed-loop
form. And it fundamentally comes down
to how complex the space of language and dialogues really is. And even though the space of Go,
in a mathematical perspective, is vastly complex, right? And two to the, whatever, 113 or 111,
or how many points there are on the board, kind of combinations of stones and all this stuff. Like those are huge numbers. But when you start with language, you begin with what is literally
mathematically an infinite dimensional vector space. Yep. Yep. And the other part of it is
there's no ambiguity in Go. I mean, you place the stone and the same thing happens every single
time and there's no room for interpretation. But for language, it's the exact opposite.
I mean, you couldn't build a language that didn't rely on assumptions.
I mean, if you do, you kind of end up with something like a computer language, right?
Yeah, that's right.
That's sort of where that goes in a lot of ways.
And I mean, those are obviously meaningful and interesting, but like, you know, nobody
talks in Python for a reason.
Right.
Except Patrick, but nobody else.
Yeah.
Well, that's actually interesting because it's another thing GPT is really good at.
It's really good at writing computer code out of language, which I think is a fascinating
application.
Right.
They call it, they're calling it Codex.
Yeah.
There's like a few different ways this has come to life.
So, you know, there's a bunch of people that have done...
You can go poke through Twitter and find all sorts of good examples of...
You can literally build a front end in some cases
by telling the machine what you want the web page to look like.
Oh, I've seen those demos.
They're amazing.
But to your other point, sorry, we kind of meandered a bit. But to your other point around uncertainty, I also really, really agree with that, I think.
And I think it's a big problem in conversational AI, right? Like you have these models that are
built on, you know, especially natural language understanding that are built on these transformers
and these embedded representations. And they're, you know, really good in a lot of ways. They're very smart, but
they can make really dumb mistakes still, right? That's not beyond them. And so most industrial
strength models, most real systems have some, you know, sort of are forced to have some combination
of rules and backstops against these, you know, neural network approaches. And I think a lot of
what's under that or a lot of what's under that,
or a lot of what's missing is,
is there a system around that has a good idea
about how good this natural language understanding system
is likely to be at this problem?
And my personal opinion,
and some of the research work that we're doing,
is that you actually need that system
to be fairly decoupled from the system whose
job it is to make the prediction in the first place.
I think you can't have zero coupling and the art of it is kind of like in what way is,
let's say the natural language understanding understander decoupled and in what ways is
it coupled to the system itself?
That's kind of like the hairy edge of this problem.
But I think we haven't done a great job at that yet.
And I think there's a lot of research, not we personally, but like we as a culture.
And I think there's a lot of research still to be done.
And I think it'll be important because I think it'll foundationalize some of the other problems
that you talked about, right?
So if we want to start building a better hierarchy of understanding for some
of these models, a step on the way there is to ask, well, when is it wrong? Have a separate
opinion about when these models are wrong, which can help us develop separate understanding
of the categories of types of times when these models are wrong. So you can start to imagine
an interplay, right? If I separate that system enough,
it's going to begin to categorize, you know,
a whole slew of cases where this thing is wrong
that it doesn't know about.
And then those slew of cases become the basis
for an abstraction of, okay, what's a knowledge center,
you know, that this thing, where this thing is weak.
Yep, yep, yeah, totally.
Yeah, I think as a field, handling uncertainty
is very, very difficult right now with deep networks. I mean, and this is, you know, for something simple, like a supervised model, let's say a model that predicts cats and dogs or something, there's still not any consensus. There's effectively two or three camps. There's one camp that says,
well, train, you know, 10 different models, either on one-tenth of the data or you shuffle the data
for each model. And then now you'll get 10 different answers. And so, you know, depending
on how much those answers vary, you can say how confident you are.
There's another camp that is effectively doing the same thing, but within the model.
So, you know, you kind of multiply the layers and now you have this distribution.
And then there's this camp that says, let's put priors on everything and use kind of Bayesian approaches.
And none of them work very well. I mean, the reason why you can't just go on pytorch.org or tensorflow.org or whatever and just get a model with uncertainty is because there's not really a method that satisfies it very well.
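[Editor's note: a minimal sketch of that first camp, bootstrapped ensembles. `train_model` is a hypothetical placeholder for whatever training routine you already have, and `X`, `y` are NumPy arrays:]

```python
import numpy as np

def train_ensemble(X, y, train_model, n_models=10, seed=0):
    """Camp one: train several models, each on a resampled copy of the data."""
    rng = np.random.default_rng(seed)
    models = []
    for _ in range(n_models):
        idx = rng.choice(len(X), size=len(X), replace=True)  # bootstrap resample
        models.append(train_model(X[idx], y[idx]))
    return models

def predict_with_uncertainty(models, x):
    """Mean prediction, with disagreement across the ensemble as the confidence signal."""
    preds = np.array([m.predict(x) for m in models])
    return preds.mean(axis=0), preds.std(axis=0)   # high std = low confidence
```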
There's a lot of reasons for that, but I think one of the most important ones sort of broadly goes under the header of language, like the processes that generate language and how varied they are.
Right. And so, you know, this is one of the things we see in an industrial context all the time: you can have models that are great and doing well, you know, based on real live training data from real live people who really did talk to the model, and that you have now learned to correctly predict. And then something very subtle
about the underlying environment can change. People can start talking about the business
promotion in a different way because now there's an advertisement that describes it in a different way.
And all of a sudden, some of these things really start to fall apart.
This is one of the areas that for me that illustrates like why NLU kind of isn't where
we think it should be or where we think it is.
And I think that, you know, mathematically what's going on under that is that, you know,
there's the generating function for language and the infinite dimensionality of this vector
space, right?
You know, they make it very hard to train models that
are really rooted in the semantics as much as we'd like. And probably, you know, in the same way that,
you know, GPT is existing on this membrane and, you know, really only seeing what it can see and
missing these big pieces of reality and not having some of this like base symbolism to attach to,
you know, the models that are doing the work of conversational AI today
are still a little bit, you know, they can be infected with problems like that. And they can
be confused by what we call lexical cues, more so than we'd like, right? So a lexical cue, for those
who don't know, is just like something more on the surface of language about the word choice and the
top level design of the words in the sentence versus semantic, which is like, okay, what's that deeper, you know, agreed upon meaning.
Yeah. I mean, one thing, one, one really prime example of that is cats and dogs, right? I think
that, you know, we always hear that analogy, like it's raining cats and dogs, or this person's a
dog lover, this person's a cat lover. And so our mental model, you know, as a society is that cats
and dogs are opposite. And all of us as a child grew up watching movies where cats and dogs didn't like each other.
Right. I mean, that's a trope. Right.
And so in our mind, cats and dogs are really far apart.
But if you were to use GPT-3, it's going to put them right next to each other.
And the reason is websites that talk about cats, a lot of them talk about dogs, too.
I mean, imagine a website that sells pet food. Right. I mean, they're going to be talking about cats and dogs at the same time.
And so actually when you look at anything unsupervised, it's going to put cats and
dogs very close together, especially in the grand scheme of anything that could be talked about.
But we have this sort of a cultural separation and those kinds of things are just very hard to put into the
model in any meaningful way. Yeah. I mean, I think what you hope is that scenarios like that will be
accomplished via the high dimensionality of the embedding space, right? You can learn about some
of that because on some dimensions, and high dimensionality is really hard to visualize,
as I'm sure you
know, and really hard to have intuition about because the distance metrics and all this start
to behave really weird. But you would hope that there'd be some dimensionality where there's like
this separation, right, which in our minds would correspond to like a character of the animal,
or something like that, rather than a, you know, a functional view of the system, right? Because we're also not
surprised when we go into a pet store and we see cat food next to dog food, right? That doesn't
make us say, what's wrong with this world, right? So we have those separate dimensions or something.
So I would hope that the representations can accommodate that.
But I think the challenge to go back to the earlier point is, you know, what are those
underlying dimensions? They're not stable, right? Like if I slant a concept like slightly differently,
we might, we might redimensionalize that space a little bit differently. And, and so we sort of
make them up on the fly in a lot of ways. Obviously, there's some touchstones, but there's something that we create dynamically. How do we interact with models at that level through the
training process or through even a discursive process in dealing with them? Which is another
thing I really like about GPT-3 is that you now talk to the model by giving it examples and can
train it with real language.
That's also an advance that I think is very important. And we're going to need to kind of cope with that
and figure out how we use it because ultimately the better these things are at creating abstractions
and redimensionalizing, you know, what they're talking about in the ways that we're kind of
discussing, you know, the more we're going to need to converse with them to make sure we understand how their
minds are working in the same way that when you and I talk to each other, like we got to get to
a baseline about like, is he, you know, what does, is he actually a dog hater? Like I have to kind
of figure that out first to talk to you appropriately about dogs or something. Yeah. I wonder if,
if someone, you know, so if, if OpenAI is going to train a
new GPT model, I wonder if they started with books for babies and, and, you know, the random network
was biased in, in favor of baby's books. And then they literally trained it on books meant for older
and older people. And so it kind of followed the same path as a human in
terms of what kind of material they consume. I wonder what that model would presuppose. I feel
like there's something maybe really interesting there. Yeah, yeah, I agree. I think you quickly
end up in the world of AGI, right? It's artificial general intelligence. And that definitely by no
means an expert there. But I think that's kind of where a lot of this stuff goes in the end.
It's like, well, I have to start teaching these things more like they're a child.
And I have to start relying. And again, this is why uncertainty is so important. I have to start
relying on their own understanding of themselves and their ability to express their understanding
of themselves in order to influence them in the ways that I want to and
have them take right actions. And so you see it sort of quickly starts to, if you really break
it down, I think you start thinking like, wait a second, I've got to imagine this as more like
a human thing that I'm teaching and less like a mathematical process that I'm training. Obviously,
the math doesn't go away. Right, right. So we'll dive into a little
bit on transformers maybe, and then we can back out and look at the whole problem again. So if I remember correctly, and my knowledge is very limited, a transformer can take in an arbitrary number of words, and then it creates some embedded space of that variable-length paragraph, I guess. That's the encoder. And then from that embedded space, you have a decoder that can emit words and also emit a transformation to the space. So imagine you have this embedding that contains all these things that you want to say, and as you say those things, you move to different points of the embedding where now there's less to say. And eventually you emit some special token that says, I'm done. Do I kind of get that right? Or how do transformers work?
Yeah. I mean, I think, you know, it's an area that's live, right? There's definitely a lot going on there. So I guess what I would reference for folks: one thing that I think is important is this notion of attention, right? One of the hard problems in language for a long time has been, how well can a model reference across a great degree of space between concepts?
If you look at it as a sequence problem, like a time series, one of the longstanding problems in time series analysis is, well, what if I have an effect last January and now it affects this March or something, right? That's an underlying macroeconomic effect. And so there's been all this work in hand-curated feature building in those contexts, which are sequence models as well. They're fundamentally, mathematically, the same type of object that you have when you analyze language. There are a lot of bespoke techniques to do that in time series analysis, all the ARIMA models and so on; there's just tons and tons of work. I think the same thing was done for many years, in a hand-curated way, when looking at language. And so I think one of the advances that's pretty important in this context is that we've begun to train these models, these transformer-based systems, in a way that they can locate those long associations.
And in a way that they can provide those connections for us, which is kind of another way to talk to the model, right? You can ask, well, what other words in the sentence does this word relate to? In an industrial context, that's super helpful as a way of tuning and debugging and trying to improve systems. Those sort of co-reference patterns, and I don't mean co-reference in the literal linguistic sense, but these patterns of correlation, of mutual interest or mutual effect, become ways to go and ferret out the misunderstandings that the models are having and try to improve them.
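For readers who want the mechanics, here is a rough numpy sketch of scaled dot-product attention, the building block Joe is referencing (per Vaswani et al.'s "Attention Is All You Need"). The token vectors are random stand-ins; the point is that every token scores every other token, however far apart they sit in the sequence, and the resulting weight matrix is exactly the inspectable "what does this word relate to?" signal described above.

```python
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - x.max(axis=-1, keepdims=True))  # stabilized
    return e / e.sum(axis=-1, keepdims=True)

def attention(Q: np.ndarray, K: np.ndarray, V: np.ndarray):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # every query scored against every key
    weights = softmax(scores)        # each row sums to 1: "where to look"
    return weights @ V, weights      # mixed values, plus the inspectable map

# Toy self-attention: 4 tokens with 8-d representations (random stand-ins).
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
out, w = attention(X, X, X)          # real models learn separate Q/K/V projections
print(w.round(2))                    # w[i, j]: how much token i attends to token j
```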
Got it. Okay, cool. That makes sense. Yeah, I definitely have to get ramped up on it. My background's in reinforcement learning, and I saw a paper recently about someone using transformers to do sequential decision-making. So transformers are coming into my field, and so many other fields, like a battering ram. I have a to-do to get ramped up on them. But I've seen the word attention a lot, and maybe we'll get you back on and we could do a whole show on transformers, or we could somehow get ramped up on that.
Yeah, let's pivot to LivePerson.
So tell us about LivePerson and what you all do and what kind of services or products you provide.
Yeah, so, all right, LivePerson,
fundamentally what we're trying to do,
we have some new things we're trying to do as well
in terms of relating directly to customers,
but the core business of LivePerson for the last 20 years or so has been about, you know,
essentially making the connection between us, the people in the world and the brands we have to deal
with or want to deal with a little bit better and making our lives a little easier because of it. And so that began
back in the day with online chat, right? Which is sort of almost a dirty word now. And I think we
actually would say at LivePerson that we thought it was a kind of a dirtier word first, right?
Not that online chat doesn't have useful applications, and we still do a lot of online chat, but we don't see it as the future: you know, hey, I log into a website and I have a sort of non-persistent, liminal connection with an agent on the other side, and I'm chatting with them for five minutes, and then if I break the connection it's gone and I have to start again. We don't see that as a model for a really good customer experience, or a really good brand experience for that matter.
But that is kind of nonetheless how the company was built because that model does have some
advantages over just the straight phone model that was what was there before.
But as time has gone on, I guess it's about four years ago now, maybe five,
LivePerson went early into a messaging context and what we call more asynchronous communication, right?
So now you've got brands and consumers communicating
through SMS messages, through WhatsApp,
through Line in Japan, through whatever, right?
There are umpteen messaging platforms out there.
One of the things we do is we make sure that you can hold a great conversation with your customers on these platforms. But where that obviously starts to lead, it's really interesting when you look at the conversations people want to have, and what it means to shop that way compared to what it means to shop in the website context, the way that we shop today. It's fascinating, right? One of the jokes I sometimes make is that one of my interview questions at Amazon was like, well, help me build a model of whether or not the shopper is about to leave the website, or help me build a model of whether or not this is a gift shopper. How would you build that? And these are things that people just tell you in a conversational context, because they're trying to get help. One of my favorites is a woman talking to a sporting goods company, and she's like, hey, I'm late gift shopping for my 12 grandkids and my sixth-grade grandkids. Can you please help me find some gifts for them? These are literally the kinds of things that a company like an Amazon would be
sifting through web search and web activity history to try and make an inference about
are just things that people want to let you know that this is their problem. They want you to give
them help in solving it. She was very interested in the fact that this service agent on the end of the line could, next year, reach out ahead of time with some proactive offers. All this stuff can feel seedy if a company is doing it behind the scenes on a website, but feels very natural if you're the one telling the company, this is my problem and I'd like you to help me.
Yeah, that makes sense.
So when I kind of drink the Kool-Aid on this, and sometimes I do drink the Kool-Aid, I feel like what we're building is a way to do a more open, transparent, a little bit better and warmer conversational experience for shopping, which takes you back to the world we all used to live in before the internet, where most of shopping was like that.
Yeah, to your point, I've recently been buying all my clothes online since COVID. You know, once you've narrowed down the size, and almost everything comes in almost every size, so narrowing down the size doesn't really do anything, it's almost like, how do you search for your aesthetic? It's really difficult. But I think if it was sequential, maybe I would search for blue shirt, and I would look at what came back and realize, okay, really what I need is a blue striped shirt. And somehow knowing that this person pivoted to blue striped shirt, now you have these two things, and maybe you would show a striped shirt that's navy, because that stripe part is really important. Just handling that modality, I think, is a conversation. And whether you're doing it through a search engine or with text generation, I think that's something that has to be addressed.
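A toy sketch of the refinement Jason describes: carry constraints forward across turns and rank by how many a product satisfies, so the pivot to "striped" doesn't throw away "blue" and "shirt". The catalog, tags, and ranking rule are all made up for illustration.

```python
# Hypothetical mini-catalog; a real system would use learned relevance,
# not literal tag overlap.
catalog = [
    {"name": "navy striped shirt", "tags": {"shirt", "blue", "navy", "striped"}},
    {"name": "blue solid shirt",   "tags": {"shirt", "blue", "solid"}},
    {"name": "red striped shirt",  "tags": {"shirt", "red", "striped"}},
]

def search(constraints: set) -> list:
    # Rank by how many accumulated constraints each item satisfies.
    ranked = sorted(catalog, key=lambda item: len(constraints & item["tags"]),
                    reverse=True)
    return [item["name"] for item in ranked]

constraints = set()
for turn in [{"blue", "shirt"}, {"striped"}]:   # two conversation turns
    constraints |= turn                          # refine, don't reset
    print(sorted(constraints), "->", search(constraints)[0])
# Turn 2 surfaces the navy striped shirt: the stripe outranks the exact shade.
```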
Well, and if you think about it,
like what you really want to understand
if you're in that conversation with someone
and trying to help them buy a shirt is something a little more fundamental about what they're doing. Right? Like, what situation are we in here? I don't know how you shop, but I might be in a situation where I've already had three copies of this basic shirt. I'm trying to find it in a different color and I know it fits well. And I kind of just want to get moving and go get myself a fourth one, because I love these shirts.
Like it's a pretty uncomplicated mission.
And what I want in that situation
is someone to show me, you know,
all the different copies of that particular item.
And then maybe, you know,
I want a little back and forth
on the style I'm looking for and I'm done.
You know, or I may be like,
have been like poking around the internet.
I'm like, oh, you know,
I need a little bit of a fashion change.
Like, I feel like I'm kind of boring right now
and I'm looking for new ideas.
And I'm looking for a shirt that maybe I wouldn't have worn before, and I might need a really different experience. Maybe I'm teaching seventh grade and I need a periodic table that's somehow also, like, WWF.
Right, right. I need this SmackDown, like I need to turn it over and then Macho Man Randy Savage can jump out of it or something. Yeah, like onto a hydrogen atom and just crush it.
Yeah, exactly. So I think you want those missions. This is a lot of where the through-line of my career, at least, matches between Amazon and Nike and LivePerson. It's like, what are those missions and how do you define them? And I think the language context is obviously the best context to define them in.
A lot of what we build is about allowing brands to sense that and to identify those missions from the language that those customers give them so they can be more helpful.
And so when we talk about conversational AI, or at least the tools of conversational AI and how we use them at LivePerson,
there's at least two big categories. One category is around, of course, helping brands build these
systems so that they can have automated ways for customers to solve their problems so that it's
easier and faster and better for everyone. And then the other side is like, okay, well,
how do you... are you set up to really listen to your customers? Do you know, for instance, that when your customer complains about having tried to call you, having tried to reach you on your website, and having tried to text message with you, by the time they're at that point, they've got like an 80% chance that they're going to leave you? Because there's this canonical problem that just frankly makes everybody mad.
And so being able to listen to stuff like that at scale and understand that at scale
so that you can provide a better customer experience,
that's another sort of layer
of the conversational AI offering
that we think is really important for us
and really important for brands.
Because in the end, consumers in America and other places now, we're pretty fickle beasts, right? We've been trained by the Amazons of the world and other companies that have pioneered in customer service to expect these really positive service experiences, ones that are really different than 30 years ago. But now that's the norm. And if you want to keep building your customer base, if you don't have a captive audience, if you're not like a cable company or something where nobody can leave you, then you have to be great at this now. And you have to kind of blur that line between how you're solving their problems and how that translates into future growth between you and them and building that strong relationship. So we kind of build products on both sides of that, right? We build natural language understanding
with both of those use cases in mind, for instance. It sounds like another part on the
product side that I think would be really tricky would be knowing when to hand off to a real person.
That seems like something that a lot of companies wouldn't know how to do. They would really rely on you for that.
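One common pattern for this, sketched below as an assumption rather than LivePerson's actual logic: escalate to a human when NLU confidence drops, the user starts looping, or frustration cues show up. The thresholds and cue list are invented.

```python
# Toy handoff heuristic; every threshold and cue here is illustrative.
FRUSTRATION_CUES = ("agent", "human", "representative", "useless")

def should_hand_off(nlu_confidence: float,
                    repeated_intent_count: int,
                    user_message: str) -> bool:
    if nlu_confidence < 0.5:        # the model isn't sure what they want
        return True
    if repeated_intent_count >= 2:  # the user is stuck repeating themselves
        return True
    text = user_message.lower()
    return any(cue in text for cue in FRUSTRATION_CUES)

print(should_hand_off(0.9, 0, "Where's my order?"))        # False: bot proceeds
print(should_hand_off(0.3, 0, "uh, what?"))                # True: low confidence
print(should_hand_off(0.9, 2, "I said, track my order."))  # True: looping
```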
Yeah, that's right. And I think that's actually a nook in a much bigger problem, which is: how do I understand how well the system is doing with my customers? How do I understand the quality of these conversations between the computer and the person? And there is not good tooling for that. This is one of these foundational areas where I think you just don't find anything good in the industry right now, because it's one of these annoyingly hard problems. And it's kind of related to all the abstract stuff we've been talking about. When does a model know it's doing well? When does it not? And how deep an understanding and how introspective is that model in the first place? And the answer is typically, not very. So you can't rely on the model to do it. You need to build some separate systems to be able to see this. And some of that requires really rolling up your sleeves and doing some dirty work, asking, what's the right way to describe the top 10 kinds of problems that people face when they're trying to talk to automated systems, and how they break down, and how that customer experience goes wrong. So we spent a lot of time on that. We have products that we're bringing to market right now, in fact, on measurement. The product has this pithy name of Max, right, which is the Meaningful Automated Connection Score. It's really about, hey, how good or bad was your CX in these automated conversations, and where should you improve it? And it's important for two reasons, right? One, you need to know how well you're doing in order to have any sensible business strategy. And two, tuning and optimizing these AI systems is really hard. Or I shouldn't say it's really hard; it's a lot more work, and ongoing work, than people at first often conceptualize that it's going to be.
Yeah, that's right. You've got to think of it like building a website: you don't build it on a Thursday, send it live, and then not touch it for a year, right? You would never do that. But a lot of people think about conversation as, oh, build the bot, I won't touch it, it's done. Really, it's an ongoing process of iteration. In the same way we do website A/B testing, in the same way that we have a bunch of tagging and infrastructure and software built up around learning what's working about your web presence and improving it, you need the same things if you're going to learn how to improve your conversational AI. And that's part of what this metric and the system is about: helping you quickly locate places where the conversations went wrong. Where did the bot get stuck in a circle, which drives us all crazy? Where did the NLU just totally barf and lose it and completely break the customer trust?
So our play in general has really been about quality. And obviously one of the most critical pieces of quality is, do you really have good measurement? Do you have good understanding of what quality is? And do you have good pointers as to where quality went wrong?
Yeah, this is a huge, huge problem. If someone can crack this in a generic way, it would move the entire industry a mile forward. Because I think we all have the same issues, where it's just so hard to triage one of these situations. What we end up doing, and you're probably doing something similar, is just coming up with so many metrics. For example, to use the Amazon example: what's the click rate? What's the conversion rate? How many people abandon the whole site? How many people bounce? All of that stuff. And then you have this downstream problem
where as soon as you define a metric, you create an incentive.
And so people, machine learning engineers at your company, are going to try to drive that metric up. And typically what happens is that eventually something that's not being recorded will start to suffer, right? So now you have to make a metric to track that. And so you end up just adding more and more balls in the air, because you're in this arms race with yourself. Right. And so, yeah, it's just so difficult to go through that process.
It's a little bit like a metrics alphabet soup in some ways. The first thing we did, before we started working on this Max project, was we sat down and said, well, we need to organize the metrics somehow, because people are doing exactly what you're saying. A conversation I've probably had five times now, and before the pandemic I would go physically to brands all the time, now I talk to them on the phone, but I'd sit down and be like, okay, well, how's it going?
They're like, oh, yeah, we built this bot.
You know, it's whatever platform they built it on.
And they're like, it's great.
It's got an 80% containment rate.
It's awesome.
I'm like, that's great.
What are you doing with the 80% extra capacity you now have in the human contact center?
What did you choose to do with all those people?
You must have people at their desks with nothing to do, right? And the answer is never, oh yeah, here's what we did with them. The answer is always, well, no, actually volumes on the human side have gone up, or they basically stayed the same, but the bot's doing great. And that's fundamentally because this concept of containment is just a very broken way to think about whether or not the bot served the person in the way they needed, whether it solved the problem or not. So like I said, one of the things we did was we sat down,
we said, well, let's organize the metrics into, you know, some conceptual framework that people
can use so that they can see, because we're not going to get out of this. Like there's not going
to be one magic number that does everything. This is actually a highly multivariate problem.
There's lots of different ways to think about what good means.
And people are adapting. As soon as you make a metric,
someone will try to game it
and then they'll cause another problem.
Right, right.
So you can't like kind of live on that one.
It's just, it's too unstable.
Yep.
So we built this framework; we gave it the pithy name of the four E's, right? There's an efficiency vector of metrics. There's an effectiveness-in-solving-the-customer-problem vector of metrics. There's emotion, right? What's the customer's emotional response? And now I'm going to totally blank on the fourth E, which is going to make me super embarrassed. It'll come to me in a minute. Anyway, we split into these categories, and under them we put both the existing contact center metrics that people are used to, like repeat contact rate within an hour, within three hours, within a day, and some of these newer metrics, like, hey, what's the Max CX on this? Or what's the sentiment of the customer as measured by these machine learning models? Now you can begin to give brands a reasonable target to optimize for that makes conceptual sense to them. The fourth E, by the way, was effort, right? How much work did it take to solve this problem? And these are much more connected to things that we care about as customers. When I tried to contact you, did it work or not? That's effectiveness. How much of a pain, how frustrating was it? That's emotion. How much work did I actually have to go through to do it? That's my effort. So these concepts, I think, correlate a lot better to what a great CX is. So we spent a lot of time on that, and that set up some of the Max work and some of the analytics work that we've done subsequently as well.
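For reference, a sketch of how the four E's might organize a metrics catalog. The repeat contact rate, model-scored sentiment, and Max score come from the conversation; the other metric-to-vector groupings are illustrative guesses.

```python
# Illustrative grouping only; assignments are assumptions except where
# Joe names them above.
four_es = {
    "efficiency":    ["automation rate", "average handle time"],
    "effectiveness": ["problem solved or not", "repeat contact rate (1h / 3h / 1d)"],
    "emotion":       ["customer sentiment (model-scored)", "Max CX score"],
    "effort":        ["turns to resolution", "channel switches required"],
}

for vector, metrics in four_es.items():
    print(f"{vector:>13}: {', '.join(metrics)}")
```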
Very cool, very cool. So what is something that makes LivePerson unique? I know with COVID we're all working from home, so everyone's job looks pretty much the same: there's kids running around and there's a dog. But before COVID, or maybe projecting into the future, what about LivePerson is really unique, where you kind of walk into the office or talk to the folks and feel like, this is something I haven't really seen anywhere else?
Yeah. You mean in terms of being inside the company, or in terms of
what the product offers? It could be, and I mean inside the company, so it could be, you know, everyone plays ping pong on Thursdays, or it could be something about the nature of the ethos that you've built, something like that.
Yeah. So for me, I don't know, you'd probably have to bring people from the team on to make sure I'm not lying to you, but I can tell you about some of the things we strive to do and some of the things I see us being successful at. Let me talk about the science domain. I don't think it's easy to combine two critical concepts that are important
it's easy to have, to, to combine two critical concepts that are important
for real success in like scientific research and product development. And those two concepts are
one, like the truth seeking, you know, are you really satisfied with what you've done or could
it be better, you know, or what's really wrong here, those kinds of questions. And then two,
like a spirit of real collaboration and support.
Right. So oftentimes, like when I was a physicist, right, like the way that physics solves this
problem is by not solving it. They just let like they just go after option one. And, you know,
it's it's it's not at all uncommon to for the chief like discourse between two physicists to
begin with, like, you know, for somebody to start out with, well, that's great what you said,
but let me tell you all the ways in which you are an idiot. Right. Like literally, you know, for somebody to start out with, well, that's great what you said, but let me tell you all the ways in which you are an idiot.
Right.
Like literally, you know, we have a term for that.
We call that the intelligent jerk.
Yeah.
It's a, it's a archetype, an anti hero of the, of the tech world.
Yeah. And I mean, it's very common, and there's a sense to it logically. If you don't have some of that disagreeableness, and I mean that literally in the psychological sense, you run into too much conformity and you don't do great work, right? You can't actually do well without some degree of that. But people who can take that same impulse and understand how to express it in a way that is constructive rather than destructive, and that builds partnership rather than tearing it down, those are special people. And we work very hard to really staff ourselves with that combination of skills, up and down the science org. And I'm pretty pleased with how we've done. It's without a doubt my favorite place I've ever had to go to work and do science work and learn about research. And I had good situations at Amazon and Nike; I'm not saying they were bad, they were great. But I think we've done something special here by prioritizing that combination. We have not hired great people that we felt just weren't going to hit dimension two very well.
Yep, it's a really good point. Because the other thing, too, is that people who are really good, who have the right emotional quotient and that ethic and all of those pieces, they're going to grow really quickly. And I don't know what data there is on this, but people who grow quickly will maybe want to change teams because there's an opportunity somewhere else, or change companies, or something like that. But people who are in over their heads, or who don't have the right character, those tend to be the people who stay forever. So it's so difficult; so much time needs to be spent to get that right, because otherwise there could be whole years where there are problems that take a very long time to sort out.
Yeah, I think there's some dynamics like you're talking about that are
very real. I think one partial antidote, to at least the first part, I don't have a good antidote to the second one, but one thing we're also really fortunate about at LivePerson is we have pretty amazing data and capabilities to work with, right? There's hundreds of millions of goal-oriented dialogues coming through the platform every day. Obviously, we use those thoughtfully and carefully, respecting all the contractual arrangements we make. We don't use all those conversations the same way; different
brands have different expectations about how we use that data. But nonetheless, there's lots and
lots of amazing conversations to work with. And there's also tens of thousands of people sitting
on the platform who can inform conversational AI with their opinions and their expert opinions,
right? All these agents sitting there able to give feedback to the systems and tell them when they're working
well and when they're not. And those tools, I think, much more so than anything that I or the other managers on the team do every day, fuel people coming in, really hunkering down with LivePerson, and doing good work for a long time. So it's been nice to experience that. But yeah, in general, I agree with your assessment of the dynamics. You get somebody who's struggling on one of these two dimensions of work, and it's hard for them to leave, because there's a lot more risk for them.
Yep. So for folks who are listening to this and completely enamored,
and completely enamored, they want to do conversational AI,
they want to get into this field. Does LivePerson have any openings? And can you kind of break it
down into: does LivePerson have internships? Do you have full-time positions? And post-COVID, if there are geographic locations you want to focus on, what are those?
Sure. So the first thing to say is we're hiring a lot. The company's grown. You can go read our public reports; we are a public company, so you can see how we're doing, and we're growing rapidly.
So we have a lot to do. This year, I'm really trying to focus on building an exceptional
scientific research team that's going to look farther afield in time, right? So we've typically had at LivePerson a strategy of, well, like we want to
work on research, we want to work on science, but we want it to be in the context of being
productizable in the near to medium term. So really not much more than six months out,
we want to see this come to life in a way that impacts the product. And I think that was right
for where we were and all the stuff we needed to build. There was a lot of obvious stuff to do, but now we're in a different
place. And I think we want to be looking on one, two, three, four year timeframes for how some of
this stuff turns into technology that we can use. And we want to go a little deeper with academic
partnerships as well. So I'm staffing just a full team on research with kind of that mandate,
looking for a lead for that who wants to come in and say, OK, I see this data you've got.
I understand you guys have a basic dialogue problem you're trying to solve.
And I want to go push science research in this direction, in partnership with this academic institution, over the next three or four years, and here's the agenda that's going to help drive that. And I mean, I have some ideas about where that should go, but I'm really looking for a very
senior lead to come in and, you know, and take the reins on that. So we're actively recruiting
there. And they, of course, will actively recruit for the team, you know, that works with them.
But we have other jobs and other opportunities as well. LivePerson's an engineering company. On my team, I hire analysts as well; I run a lot of the analytics products for LivePerson. And we hire, of course, machine learning engineers. We hire people to build backend systems that support all the model building and the model training and the model management. And we're doing a bunch of work migrating pieces of the platform into the cloud. So there's a lot of real engineering work. So one vector that often works well
is if someone has a strong engineering background and they're interested in this space, right? There's definitely work to do to become an expert in the science and the field, but there are ways in, I think. There's a lot of engineering work that provides a boots-on-the-ground kind of introduction to the technology stack, and then you get into the flow where now, okay, you're coming to our Friday brown bags, you're working closely with scientists because you're building things that they're using, and so you're in conversations with them in a different way. It's a really live place to learn and grow.
So if I were someone with a strong engineering background who was looking to get into this field, I'd be looking for a way to do it with a role like that, rather than trying to say, well, okay, I'm going to go to bootcamp for six weeks, I'm going to read a bunch of papers, I'm going to do some prototype chatbots, and then I'm going to be ready to do a research role tomorrow. Maybe there are some people out there that are just astoundingly fast learners and that will work. But I think for most of us regular humans, you want to find a way to say, I'm going to build this into my life for a couple of years.
Yep.
And there's so much tribal knowledge. I mean, that bootcamp doesn't exist where you're going to learn the tribal knowledge to be able to ace a research scientist interview. So yeah, I think your advice is spot on.
Yeah, and in addition to acing the interview, even if you can, when you show up on day one, are you ready to do the job? Are you setting yourself up to succeed?
Right. Yep. Do you have internships? There are a lot of folks who are maybe in the middle of college, and they're looking for something over the summer, but then they're
going to go back to college afterwards. Does that exist? Yeah. Yeah. In fact, I just, we just
set up a meeting, I think for next week to sit down and talk about how we're approaching next year's crop of internships and what we'll do there.
So, yeah, we will be doing internships and we do them all year.
We have interns right now and, you know, for different disciplines and different times makes sense.
Right. But there are a lot in the summer.
And, you know, we've been focusing a little bit on some of the graduate students, you know, the last year or two.
So it's been mostly Ph.D. or master's degree students are coming in and doing internships.
But I think we're going to change that this year and have a little bit more undergrad work to be done.
The team's gotten a lot bigger, and one thing that means is that there's a wider variety of types of work that needs doing. I think we have spots for a really good undergrad research project or two that look pretty interesting. Like I said, we're still in the middle of figuring it out, but that's basically where we're headed. And for me, I like to make sure that if we're going to put an internship together, we want it to be something that's a little risky, right? It's a great time to take a risk, but also something where we see a legitimate chance for success, and where we're going to learn something either way and everyone's going to benefit from it. So I think we'll be in a position to do that with undergrads next year.
We're definitely continuously in a position to do that with grad students,
you know, that that's not going to change.
So we'd love to hear from people who are interested in, you know,
possibly joining us in that way.
Cool. Excellent. So if folks want to check out LivePerson, you go to liveperson.com and you can see what they're all about. And if you want to reach Joe, you can reach him at jbradley@liveperson.com. I'm sure you can shoot him a resume and he'll forward it to the right folks, or there's probably a careers page on LivePerson where you can apply.
You can definitely apply online. You can definitely reach out to me too if you want to say hey, and I would love to hear from the audience here. As I mentioned before we got started, I think you guys do a pretty cool podcast here. I like how application-focused it is, and it feels like it has a lot of weight to it. So I'm sure your listenership are all pretty cool people.
Yeah, definitely. And they're super
motivated. We've actually connected a lot of interns to careers, or at least to internships. The show has been out for a while, so we're constantly getting emails from folks who have landed full-time jobs or even been in them for several years, and that's really special. So I feel good about it; I think there are going to be people out there who are a great match for LivePerson. And if you're listening to this and you're one of those matches, you know, check out the show notes, check out the website, and get connected.
Oh, actually, we didn't cover location. So where is LivePerson based?
Yeah, we're a distributed company now. I mean, it's evolving, like I think it is at a lot of companies. But when the pandemic started, we wanted to give certainty to everyone early on. And so we said, hey, if you want to move where you want to move, go move, right? As long as you can do your job, as long as it doesn't impact your work, we were pretty open. And we're not demanding people be ready for a mass return or anything like that, because we wanted people to live their lives. Now that we're coming out of the pandemic, and I think we're hopefully coming out, I guess we're resurging a little bit, but as we take steps out,
we are beginning to open physical offices again. We have an office open in Seattle here
that I've been to, you know, we do COVID testing either the morning before you come in, or you go
do it at the office before you come in.
We're obviously following all the policies and protocols that are local to the areas, and we have our own sort of standards on top of that.
So I think we will be coming back more and more physically.
We're probably going to maintain a high degree of flexibility about where people work and how that comes to life.
We have some policies and standards around how to manage the pay scales and things that
you have to do from a corporate sense for the economics to make sense.
But I think we've done that in a fair way and I think we're open-minded, and particularly
when it comes to great science professionals and researchers and folks with a lot of machine
learning engineering expertise, people that are really doing great in these very hot fields. These are the kinds of people we want to be really flexible with. The most important thing is for you to come and work with us and for us to get some good work done together, and the exact details about the location and all that are pretty secondary.
Yep, totally makes sense. I think it's consistent with where everyone's at. And I'm hoping they come up with some nice whiteboarding tool. That is really what we need as scientists: some way to whiteboard together.
Yeah, we're looking at that right now. We've got a couple of options that I want to get put in the office and try out to see how they work. I don't know if they're any good yet or not, and we haven't really tried out the tech, but I kind of agree. I'd love to be sitting here with a whiteboard that I can write on, where the writing of my partner on the other end of the communication appears, and it's almost as if they were here. If somebody solves that, they'll be billionaires overnight.
Yeah. I wonder, you know, in general the VR headset is never going to be so convenient that I would wear it all day. But I wonder if maybe that's it; everyone keeps talking about the metaverse. I have no idea what the metaverse is. I'm assuming it's connected to VR, but that might be something where we could go into VR for an hour, and there could be some holodeck-like thing, and then somehow at the end it would accumulate a whole bunch of notes.
Yeah, I think it'll be interesting to see how we solve these problems. They're much bigger problems now, or more important problems now, than they were a couple of years ago. I wonder about VR too. I guess the metaverse is the old Neal Stephenson reference from Snow Crash or something, which is...
Oh, that's right. Oh, it's been so long. I think I had to read that in high school. It's been a long time; I should give it a reread.
It's one of the only sci-fi books I know of where the three-ring binder appears in a starring role, at least for the first part of it.
Yeah, that's right. It's been a while, though. That's where, I think, someone, I think it's like a murder mystery or something, at some point I think somebody dies.
Yeah. They're somehow killed in VR. I mean, I have very, very not-lucid-at-all memories of that book.
Language is a virus, right? That's this whole concept he's exploring. And I think there's this capability where you can show people information and it infects them and gets them sick, if you show it to them in the metaverse or whatever. It's been a while for me too, but I think it's something like that.
Yeah. We should make that the book of the show next time. We need to read it again.
Yeah. That'd be fun.
You guys ever want to do a Stephenson fest or something? I'd be happy to come back.
Oh, cool. All right. We might take you up on that.
Cool. Joe, thank you so much. I know we're over time by a lot, but I really appreciate you spending the extra time and chatting with us. It's been absolutely amazing, and it's been a real pleasure. Thank you so much.
Yeah, the feeling's mutual. It was a super fun conversation. Thank you so much for giving me the opportunity to come on and chat with you guys. Very much appreciate it, and I love the depth and the quality of the discussion. So thank you so much for facilitating that.
Cool. And thanks to everyone out there. Thank you for continuing to support us on Patreon and on Audible; we really appreciate that. I've been trying to post a bit more on Twitter. In the past, I've mainly used Twitter
to send show notes, but I'm trying to put some more content on there.
If I see something that I think is pretty cool
and related to coding and tech,
I've been trying to share more content there.
So follow us on Twitter,
follow us on all the other ones as well,
LinkedIn and Facebook.
On any of those, we'll definitely be posting the show notes every time as they come out. And subscribe. You know, if we're not on a streaming platform at this point, let us know. I think enough people have let us know about enough platforms that we're on all of them, but there are new ones coming out all the time, so definitely keep us up to date and keep us honest if there's something we're not on. And we'll catch everyone in two weeks. Thank you so much.
Music by Eric Barnwell. Programming Throwdown is distributed under a Creative Commons license.