ACM ByteCast - Alfred Spector - Episode 72
Episode Date: July 22, 2025

In this episode of ACM ByteCast, Rashmi Mohan hosts ACM Fellow and 2016 ACM Software System Award recipient Alfred Spector, Professor of Practice in the MIT EECS Department. He was previously CTO of Two Sigma Investments, and before that Vice President of Research and Special Initiatives at Google. Alfred played a key role in developing the Andrew File System (AFS), a breakthrough in distributed computing that later became a commercial venture. He is also known for coining the term “CS + X.” He is a Fellow of the American Academy of Arts and Sciences, Hertz Foundation, and National Academy of Engineering, and recipient of the IEEE Kanai Award for Distributed Computing. Alfred recounts how he initially pursued programming out of personal enjoyment in college. He talks about developing AFS at Carnegie Mellon University, the challenges of turning academic research into commercial products, and the transition from academia to entrepreneurship, sharing some of the lessons learned along the way. Alfred touches on his time at IBM, which acquired his startup Transarc, and the differences between startups and large corporations. He also talks about some of his most notable work as a technical leader at Google, such as Google Translate. Finally, he offers a unique perspective on the rapid evolution of AI and advocates for a more multidisciplinary approach for developing responsible technology.

Links: “Google's Hybrid Approach to Research” paper; “More Than Just Algorithms” (ACM Queue article)
Transcript
This is ACM ByteCast, a podcast series from the Association for Computing Machinery, the
world's largest educational and scientific computing society.
We talk to researchers, practitioners, and innovators who are at the intersection of
computing research and practice.
They share their experiences, the lessons they've learned, and their own visions for
the future of computing.
I am your host, Rashmi Mohan.
There was a time when the creation, application, and use of artificial intelligence was an exclusive discipline, accessible only to a handful of people with deep technical knowledge in data science or computing. In today's world of generative and agentic AI,
the playing field is a lot more level,
bringing in novices and experts from various domains
to help define and make world-class products
and applications.
While it may have taken some of us by surprise,
our next guest saw this revolution coming from a mile away.
Dr. Alfred Spector coined the phrase CS plus X
to stress the burgeoning importance of computer science
across a spectrum of disciplines.
He's a professor of practice at MIT
and a senior advisor at Blackstone.
Previously, Dr. Spector was CTO at Two Sigma,
head of research at Google, and spent many
years leading research and engineering teams both at Google and IBM and his own venture,
Transarc.
He is a Hertz Fellow and also currently on their board, and an ACM and IEEE Fellow as
well.
He is a published author, has won the IEEE Kanai Award for distributed computing, and the ACM Software System Award amongst many other laurels.
Alfred, it is truly an honor to speak with you. Welcome to ACM ByteCast.
It's my pleasure to be here. Rashmi, thank you so much for having me on ByteCast.
Absolutely. I'm so excited to begin our conversation. I'd love to lead with a question that I ask all my guests, which is if you could
please introduce yourself and talk about what you currently do and give us a little bit
of insight into what drew you into this field of work.
Well, let me start in reverse order.
So like many people, I think I got involved with computing because I so enjoyed getting lost in the code.
It was so much fun to think about how to structure systems, to write code that at the time had to be extremely efficient, to make it all work, and to actually have people relatively quickly using the fruits of our labor. So I got involved as a programmer, I got involved in college; it wasn't even my expectation.
I thought I would be more of an economist or a journalist,
but I enjoyed programming that much.
So many of us get involved because of the deep technology
and of course, as our careers progress,
some of us stay in that, that's terrific,
and some of us move and become broader
in what we do. It'll be very interesting and we can even discuss what the effect of AI
tools will be on our core programming and system design disciplines. What I do today
is very much a broadening of that. After having led organizations and maybe managed 10,000 or 20,000 people in my career, what I'm now doing are things that I really want to do.
I've returned to teaching, now at MIT; I started as a professor at Carnegie Mellon, that is, and I'm now back into teaching and doing things of that form again. I advise Blackstone
on how to adopt AI, particularly for
its investors in the companies which it owns.
So I'm involved in that and I've been involved in various ways in national service.
So that sort of makes up three of the four things I'm doing.
The last one is advising small companies, which are now playing such a critical role
in the innovation ecosystem.
Wonderful. Thank you for giving me that, because there are so many of those areas that we will talk about as we go through our conversation. In terms of getting into computing itself, Alfred, did you happen to take a programming class when you were at college?
Is that what sort of gave you that introduction?
Because I agree with you.
I think once you get into it and the coding makes sense to you, it's actually
very empowering to see what you can build.
Right. I think as we think about teaching coding and encouraging people to get into the field,
I think people get into it in different ways.
Like so many people have started in junior high school or middle school with Scratch, which is a tool provided by Professor Resnick and others at MIT that is just setting the world afire with a kind of graphical programming. That's amazing. Some people get involved because of
science, right? They need the tool or something of that form. Some people are now getting involved,
maybe more easily with AI techniques that are helping them. For me, it was really that I took a calculus course and believe
it or not, even back in 1972, there was the notion that you could help teach calculus if you could
show how numerical integration was done, say, using the trapezoidal method. So we were given
an account and the BASIC programming language, and I thought that was really interesting.
And I branched out from there.
Then I took a course from a person who eventually became a pretty well-known venture capitalist, a guy named Ben Wegbreit, where we learned Lisp, we learned assembler for the then-larger computers called PDP-10s, and then we built the Lisp system, or part of it, in assembler for the PDP-10s. And it was an amazing introductory programming class in so many different ways.
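As an aside for readers, the trapezoidal method Alfred mentions is simple enough to sketch. Here is a minimal version in Python rather than the BASIC of 1972; the function name and the sample integrand are illustrative only, not from the episode:

```python
def trapezoid(f, a, b, n=1000):
    """Approximate the integral of f over [a, b] using n trapezoids."""
    h = (b - a) / n                      # width of each trapezoid
    total = 0.5 * (f(a) + f(b))          # endpoints get half weight
    for i in range(1, n):
        total += f(a + i * h)            # interior points get full weight
    return total * h

# Example: integrate x^2 from 0 to 1; the exact answer is 1/3.
print(trapezoid(lambda x: x * x, 0.0, 1.0))  # ~0.3333
```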
Fantastic. It's very fascinating to hear that. And I absolutely hear you.
My daughters, when they were much younger, did get into programming using Scratch.
They were very fascinated by the fact that they could sort of visually see what they were building.
It helped them quite a bit. One of your first roles, or jobs for that matter, was at Carnegie Mellon University, and you spent a fair amount of time there researching areas of distributed computing. From what I read, the Andrew File System was also born there. So I'm wondering if you could tell us your recollection of that journey.
Absolutely. In the late 1970s and early 80s,
people were exploring how files could be shared
across workstations or personal computers
that were just then becoming popular.
Xerox did that for some tens of workstations
that would exist within the Xerox Palo Alto Research Center,
and other people looked at small-scale environments.
At CMU, we had the idea that we would want to get to a world
where you could have a personal computer workstation
on the desk of everyone on the campus,
maybe getting up towards 10,000 people,
students and faculty alike.
That led to all manner of additional challenges: issues of how do you achieve the necessary performance on networks that were then limited?
How do you make the system manageable?
How do you make it reliable enough given a lot of components now there would be lots
of likelihood of failure?
And finally, how do you improve the security of the system in a world where security, for example, of exams before they're taken, and many other things, would become really important. So we tried to innovate
in a number of ways in that. And the Andrew File System, later termed AFS, was
created by a number of really smart people, really thoughtful people at
Carnegie Mellon, and it set a new standard for file sharing at scale.
Eventually, perhaps partially because of my ideas, we were able to figure out
how to scale it across the country.
And by, let's see, 1991 or so, now commercially, there were almost a million accounts on the system across institutions all around America and some overseas.
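For readers curious how AFS achieved that scale: its best-known technique, documented in the AFS literature though not detailed in this conversation, is whole-file caching on clients combined with server "callbacks" that invalidate stale copies. A toy sketch in Python follows; all class and method names are illustrative, not the real AFS interfaces:

```python
class Server:
    """Toy file server: registers callbacks on fetch, breaks them on store."""
    def __init__(self):
        self.files = {}         # path -> contents
        self.callbacks = {}     # path -> set of clients holding cached copies

    def fetch(self, path, client):
        # Client fetches the whole file and receives a callback promise.
        self.callbacks.setdefault(path, set()).add(client)
        return self.files.get(path, "")

    def store(self, path, data):
        # On update, break callbacks so cached copies get invalidated.
        self.files[path] = data
        for client in self.callbacks.pop(path, set()):
            client.invalidate(path)

class Client:
    """Toy client: serves reads from its local cache while the copy is valid."""
    def __init__(self, server):
        self.server = server
        self.cache = {}         # path -> contents, valid until callback breaks

    def read(self, path):
        if path not in self.cache:          # miss, or invalidated earlier
            self.cache[path] = self.server.fetch(path, self)
        return self.cache[path]

    def invalidate(self, path):
        self.cache.pop(path, None)

# Toy usage: a write at the server invalidates the client's cached copy.
srv = Server()
c = Client(srv)
srv.store("/notes.txt", "v1")
print(c.read("/notes.txt"))    # fetches and caches "v1"
srv.store("/notes.txt", "v2")  # breaks the client's callback
print(c.read("/notes.txt"))    # re-fetches "v2"
```

Because reads are served locally until the server breaks the callback, most traffic never touches the network, which is one plausible reading of how the design coped with the limited networks Alfred describes.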
That's amazing. And that was going to sort of lead me to my next question, that AFS was spun off as a commercial
enterprise in your company.
And it was also a great time to be in that space given sort of the exploding era of the internet.
So what was that journey like for you to sort of go from being a professor to being an entrepreneur
and also maybe talk to us a little bit more about
what does it mean to sort of make something that's had its inception as a research project into
a more commercially viable product? What kind of offerings did you have?
So I think, number one, an interesting question for anyone considering that journey from research to entrepreneurship is that you have to question your own motivation. Why are you doing this? And for those of us who had been involved in the project at Carnegie Mellon, many of whom went off to start a commercial company around it with me, we had the feeling that we just couldn't scale the project with university
resources as much as we felt it should be scaled. We thought we had built something
that would be really significant to a much larger number of people, but it would require
far more money than you would get in the research realm. So that was our motivation.
The second thing is there are a whole set of additional topics that
come up when you're doing something in the commercial space because after all
people have to open their wallets and give you money for something. That's the
nature of it. That's how you prove value: the value of the product must be greater than the amount of money that the client or customer will pay. That
requires a really deep amount of thinking beyond what you might have to do in the research
world where you can focus on a particular set of problems.
Now we had been pretty broad with AFS because we had rolled it out even by the time we started
the company at a number of universities and had at least tens of thousands of users on
the system, but still it required
a lot more to think about everything else you had to do to enable scaling.
I was fortunate. And this is a question that your listeners have to think about: when would you start a company? Would you start one when you have the experience of managing reasonably large teams, or is it better to do it when you have completely youthful zeal and enthusiasm and feel you have almost infinite time to do it?
I did it sort of at the mixture of the two extremes.
I had already been a leader of a research center at Carnegie
Mellon where I had maybe 30, 40, or 50 employees, I don't recall.
So I had quite a bit of management experience. But on the other hand, I certainly hadn't been
an experienced CEO at that point and there was an incredible amount to learn. The breadth of what
needs to be done is considerably different when one is in the research world in a relatively
narrow domain. That is super insightful in terms of just the learning that is required from not just
the technical expertise that you have, but what does it take to run a company and deal
with the various facets of that?
Even negotiation is something that one thinks about.
So I remember being in a negotiation with my attorney, an amazing startup attorney, and the counterparty in the negotiation made an initial proposal which was better than what we had thought we would get. And you might say the right thing to go do is just say, sure, that's great.
Right.
However, I did just that, being naive, and my attorney took me out into the hall and was about to spank me for saying it, because she remarked that I had ruined the day of the counterparty. They obviously knew they had made a proposal that was too wonderful for us and not good enough for them, and if I had actually negotiated, I probably would have gotten a little bit more and they would have been far happier.
I had never thought about that as a research faculty member who had written good papers for the Symposium on Operating Systems Principles, but I learned that lesson very quickly.
I avoided the spanking.
That's fascinating, actually, to think about the psychology of how somebody approaches the negotiating table, what they come in with as their initial offer, and what they leave with. That's pretty amazing. It's a great story.
But I was going to ask you one more question around this topic, Alfred, which is: for somebody who is sort of working on a project and has not yet chosen to go commercial, was there a specific metric that you were measuring?
I mean, it sounds like in your case, adoption across thousands of users was a great way
to indicate that this has much more far-reaching impact than what we could possibly achieve
within the university environment.
So I'm just wondering if there is anything sort of more that you could abstract out
for anybody else that's in that position.
It's a really good question.
So as researchers and technologists, which I think describes most people who are part of the Association for Computing Machinery, and even the group that mostly listens to this podcast, we sort of love technology. That's sort of what makes us tick, right? Look at this new algorithm,
look at this new approach to inferencing or training and think of the efficiency benefits,
think of the security that we can achieve. What I think we have to think about when starting
something that's commercial is we have to kind of love the technology less
and love the value of the product to a consumer more. So we have to kind of change our allegiance from the technology to the features of the product, and I think that's the biggest change we have to make. And I didn't make that change fast enough in some places. If there's anything I learned: we had a lot of successes, and of course, like everything else, we had some failures as well. I think the failures were primarily due to me loving the technology more than having a crisp and clear and objective view of the value of certain very complex things that we tried.
That's an excellent point.
I think relevant to every engineer out there.
I think we tend to want to build and we're fascinated by what technology can do. But I think keeping the customer first and keeping the business value is super critical.
Yes, indeed. In every company where we have product managers and engineers, I think you have to have that give and take, Rashmi, between the product manager, who clearly puts the product first, and the engineers. Product managers may be technology focused; they may realize a key technology the organization has is the special sauce, but they're ultimately concerned with the product, and the engineers have to be more concerned with the engineering.
It's very hard to do.
It's extremely interesting.
We have an enormous amount of ego as engineers built into what we're doing.
And we work really hard at it.
It's the confluence of those two things, and the CEO has to see all of them; but if the CEO or the head of engineering comes from a technical realm, they have to recognize they have to move more towards product.
That makes sense.
Yeah.
That's a great piece of advice.
So from there though, Alfred, Transarc got acquired by IBM.
So now you're in a much, much larger organization.
What does that transition look like?
What happens to a product and a company
when you get acquired into a much larger organization?
Well, what happens to the people that are acquired
and what happens to the product?
So there are big differences between a small company and a big one.
We were a few hundred people, and IBM was a thousand times larger than that, three orders of magnitude bigger, approximately.
So it provides opportunity, but it also requires an enormously different understanding of what constitutes the possible and what constitutes the necessary.
So in a big company, there are rules and teams and mechanisms to prevent problems, to prevent
lawsuits, that all reduces velocity and can be quite frustrating for people. On the
other hand, there are opportunities because big organizations typically have
lots of customers and they have lots of mindshare and even relatively small
things can be very, very valuable at the company. So there's quite a change for
the people in dealing with all of the infrastructure and policies and
such things that they have, but there's also a lot of opportunity.
Within a year, I was given the benefit of running the unit which bought my company.
And that was very interesting because that was the largest transaction processing software
business in the world.
Very profitable, still heavily in use today.
But an interesting thing that I learned within a month or two was about the conservatism of that product, which was used by 20,000 large enterprise customers. The conservatism and lack of change in that product that made it consistent is what was really valued by the customers,
which is of course almost the opposite of what a startup is trying to do, which is to rapidly improve and change.
So very interesting.
For a product, of course, it depends on the company.
A company may be buying technology from you,
it may be buying a new product line,
it may be buying routes to market, new customer segments.
One has to realize, though, that what was central to everything the company was doing has now probably become just a piece of the acquiring organization.
Absolutely.
So many insights in that conversation we just had, especially around becoming one, you know, smaller part of a larger organization. It's great that you got the opportunity to look at that and to lead the larger organization as well. And the fact that your focus became the ability to keep that product consistent for customers. And to your point, I think we focus a lot on innovation and providing
new ways of doing things and new features.
I think doing that while keeping the user experience consistent is so valuable in certain
situations as the one you just described.
Right.
So, Lou Gerstner, who was the CEO of IBM when I was acquired by IBM, or my company was acquired, I guess I was acquired too, used to say every little company wants to be big and every big company wants to be little.
And I've not seen that written down, but I think it's a really good aphorism.
So if you're a small company, you want to be big, you want a lot of revenue, you want
lots of salespeople and lots of installed customer base, et cetera.
And of course, coming with that are all sorts of restrictions, right?
When you're big, you have a lot of money.
People might sue you.
The government might be after you for some reason of anti-competitive behavior.
You are certainly at risk of confusing customers if you have multiple
products that are in the same space.
So everything becomes harder.
So it does show the sort of innate challenge that we have in an economic system of what
just naturally happens when growth occurs.
ACM Bytecast is available on Apple Podcasts, Google Podcasts, Podbean, Spotify, Stitcher
and TuneIn.
If you're enjoying this episode, please subscribe and leave us a review on your favorite platform.
I'd love to also understand from you, Alfred, your role at Google, which was head of research. How different was that from being at IBM, where you're kind of leading a product line
and are responsible for sales and customers, et cetera?
Does being head of research give you more sort of freedom to explore and
quote unquote innovate without having those constraints?
Well, it depends.
So first: at IBM, for the five years before my last year there, I was head of all of software and services research, which was half of the IBM Research division.
So I'd had some experience at IBM with it.
So this applies really to both companies,
to Google and to IBM.
In research, I think in computer science research, or anything relating to computation and artificial intelligence and such things, we are doing really applied research.
We are doing things that are in advance of the product, in advance of what a product
division will do, but we're not just doing things which are purely, if you will, exploratory.
We have a goal in mind.
We think that if we were to do this, that would result in something better for a product
and would ultimately probably be profitable for the company.
So at least that's my view of how industrial research should be done.
You should have many fewer constraints
than the product groups,
but you should still be focusing upon things
that are of economic value,
because that's the nature of a firm
and a capitalist society.
So we're not just looking up into the stars
and saying, isn't that interesting?
I wonder what that pulsar is doing.
That doesn't seem to be very likely
to have economic value,
say particularly to a software company.
With that in mind, I had honed my skills, I think, at IBM, in leading in a way that would create the appropriate levels of looking into the future and having
the right talent and the right flexibility, while also working with the rest of the company
to ensure that what was being done would be in fact deemed as valuable. So one of the big things, particularly true at Google, is that the desirability of the research group being completely separate was, in my opinion, nil.
There was no desirability of that,
because Google's great asset was a vast user base
and a large amount of data.
So if Google research could have any particularly unique value,
it would be that it was associated with a company with both of those assets.
So we wanted to be connected to the business so that we would have access
and reason to be involved with that.
So an example was Google Translate, which Google Research led on; of course, it's become, I would say, almost a global winner in that space, maybe the most used translation system, with literally tens of thousands of language pairs now being done.
We couldn't have done that without the data and reach of Google.
And that was great for a research project on a grand challenge problem in artificial
intelligence.
The leader of that, Franz Och, was truly amazing. And I think he couldn't have done that in a more segregated, isolated research organization.
Yeah, that makes a lot of sense, Alfred, because especially, I think, the ability to take an idea
but apply it to a user base, see how people react, do some sort of A-B testing to kind of also validate
some of the ideas that you have, that's invaluable.
And I really like and appreciate the point that you're making about being embedded within
the product team so that the research is actually leading to product outcomes.
Like you said, it is a business and it is run for economic viability, so both points
that make a lot of sense.
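As a brief aside on the A/B testing Rashmi mentions, a common way to validate such an idea is a two-proportion z-test comparing a control and a variant. Here is a minimal sketch in Python; the function name and sample numbers are illustrative, not from the episode:

```python
import math

def ab_test_pvalue(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test for an A/B experiment."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)            # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return math.erfc(abs(z) / math.sqrt(2))             # two-sided p-value

# Example: variant B converts 5.5% vs. control A's 5.0% on 20,000 users each.
print(ab_test_pvalue(1000, 20000, 1100, 20000))  # ~0.025, significant at 5%
```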
So we wrote a paper on this, I'm happy to report: myself; Peter Norvig, who is of course a well-known artificial intelligence expert who co-wrote probably the most popular book in the field; and a younger employee at the time named Slav Petrov. I think Peter and Slav are still at Google.
We wrote a paper called Google's Hybrid Approach to Research, which is a very short paper. It's in Communications of the ACM, and it's free access.
And I think it describes our thinking
on how to structure research.
And we believe it was pretty influential
because it was somewhat of a change
from the view that the research establishment
and industry should be completely protected
from the company
and allowed to do whatever it wants with absolute university-like freedom.
We tried to provide a lot of freedom, but we tried to be integrated more,
and it discussed the ways that we tried to do integration in a mutually beneficial fashion.
Got it. Thank you for sharing that. We'll definitely link it in
our show notes for our listeners to be able to read it.
One of the things that you mentioned also is,
I mean, one is of course being embedded with products,
so having research and product work closely together.
To me, that naturally leads to my next question around cross-pollination of knowledge.
But that also takes me to, was this around the time
when you coined the phrase CS plus X
to talk about cross pollination,
not just within an organization,
but interdisciplinary work?
No, that was quite a bit earlier.
The CS plus X expression came about when I was asked to give a talk, the first talk at Harvard's Center for Research on Computation and Society.
And I was kind of a systems person,
thinking about operating systems
and distributed systems at the time,
thinking about how do I give a talk at Harvard,
a liberal arts university with some engineering
that would be relevant.
They didn't wanna hear about scalability and distributed file systems or
third party authentication or something like that. So I
realized at the time that the current thinking about programming and computing in the United States was completely wrong. That was just after the internet bubble had burst, the bubble of roughly 2000.
And the thinking was this was all a mistake, that the internet was overly hyped and such,
and that the enrollments actually were declining at that time.
And what I did was I said, I'm sorry, this is all completely wrong.
You have no idea how big computing is.
In fact, it's so big that it's going to
affect every discipline on campus.
If you don't actually adopt computing within all of the departments and areas on campus, the so-called X's if you will, you'll be in trouble, and everyone will want to be a computer science major.
So you need CS plus X ideas, however organized, within the university. And
furthermore, that's going to be the front where innovation occurs in society. Now maybe we would say it's data science plus X
or AI plus X, but it's all about the same notion.
And I think that's proven to be true.
If the other departments didn't do enough
with computational things, their enrollments declined
and computing went up.
The universities are increasingly trying now to build these hybrid
programs. I think it's an even more important phrase though now if we fast forward 23 years
or 22 years from that original talk. It's even more important today, particularly because it may very well be that in pure technology, we'll have so many labor-saving aids that the details of the technology are less critical to engineers and computer scientists, and we ourselves need more of the X's, the skills in other areas, if we're going to be really valuable.
Absolutely. I completely agree with you; the fact that you had that vision that many years ago could not be more relevant today, when CS itself is becoming so accessible to people from all fields. But I'm also curious, Alfred, as to how one has that sort of vision when, especially around the internet bubble, most people were unsure.
I feel like maybe a few years ago, AI had the same sort of reaction where people are
like, this is just hype.
I don't know whether this is really going to last.
Is it going to make a difference?
What kind of markers do you use to say, yeah, you know what?
This is the real deal.
This is actually going to have a pretty significant effect
on our industry.
So what motivated that was, I would say,
a complete holistic feeling of terror on my part.
So here I was giving a talk to all of these people
that were humanists and social scientists
and technologists all merged together.
And I had to think of something to say.
Now, you may think I'm just trying to be humorous in that, but it's true. To be frank, the advice that I have to give to anyone who might be listening to this is: get out of your comfort zone. I was bold, I was willing to go do that, but I was out of my comfort zone. It would have been easy to talk to a group of people working on operating systems or new security protocols for the web or something; this was very hard for me, and it forced me to think through some of the issues that I had been working on. So I would say we all have to get out of our comfort zones, and we have to sort of relish that, sometimes, not all the time, or we'll go crazy.
But I think that was what motivated this idea. With respect to artificial intelligence, the other thing that came up: I have always believed, I remember even as an undergraduate in college, and this is now the early 1970s, I always believed it was inevitable that at some time computers would do what people do, and I still believe that. I don't think we know exactly what it is we do and when it is computers will exactly do it, but we're
getting closer and closer to it. That thesis that I had is still true. Is it
five years away or 20 years away and what are the differences between us and
these computational systems? All these things are big and interesting open questions,
but the changes to society will be truly far more significant
than even what the internet and the availability of mobile phones
and such things had in the past.
We haven't yet begun to confront the impact of these things,
and that impact will happen.
That sounds very real, but let's talk about that a little bit more, Alfred, in terms of how we are the people who are helping build these systems.
The computers that are going to be able to do many of the tasks that we do today in a
far more efficient manner, in a far more accurate manner. We're helping build those systems.
What should we be aware of?
I know you've co-authored a book called Data Science in Context.
I was trying to learn more about that in terms of what drove you to conceptualize this topic.
What do you want people to know of as we're in this brave new era?
Let me answer this in two ways.
So first, there's a sort of narrower answer
to your question.
So as folks that build, design, regulate systems,
I think we have to consider a variety of,
if you will, rubric. In order to get the systems to be as effective valuable safe profitable whatever as we want.
So the book that norvig and wigans wing and i wrote.
As you said data science and context.
In that book we proposed a rubric.
science and context. In that book, we proposed a rubric of the elements that you need to consider
when you use data-driven methods to solve really important, far-ranging problems. And those could be data science-oriented things or artificial intelligence. We use the word data
science, but it could have been either. The rubric elements include seven things that I think we all need to consider.
So one is, do we have the data?
Should we have the data?
Two is, is there a model?
Do we actually know how to do things?
There are some things which I think are just fundamentally unpredictable, human, machine, or not. So is there a way of solving the problem?
Three, can we make them dependable enough in the sense of security,
privacy, freedom from abuse and overall resilience?
Fourth is do they need to be explainable in some way? Do they need to be understandable?
Do they need to be interpretable? Do they need to show causality?
Do they need to be reproducible? Do they need to be auditable? Those go under an understandability rubric element.
Not everything does, but some things do.
For example, in the realm of science,
you need to have reproducibility
or people rightfully shouldn't trust the results.
The next one is, do we really know the objectives?
And this is where it's going to get very complex.
Can we build TikTok systems that are extremely fun to use that will while away our time?
Sure, we can clearly do that on Reels or on YouTube shorts or TikTok or whatever.
Is it good for us?
Hard to know the answer.
What is the right objective for that?
How are we going to reach societal conclusions
on topics like this and many, many more
that are now going to be coming up
as computers become ever more powerful?
The example of recommendations is, frankly, a fairly small one.
The next one is just overall fault tolerance of systems.
And the last one is we have to think about the broader emergent phenomena that come from these systems. A lot of people talk about ethical implications; that's correct,
but it's not by any means enough. You have to think about geopolitics, you have to think about
economic implications, political science. We have to learn from history to see what works and what
doesn't work. And frankly, I believe we even need to be thinking about what fiction writers write, because we're in those domains where the fiction writers, whether writing science fiction or other forms of fiction, are often more creative than we in technology are.
So that's sort of the narrower topic.
The broader topic is I think we need much more of a fusion between technology and the liberal
arts now.
And if I were raising children, and as someone myself involved in education, I would be advising our students to be less narrow, and broader in a very deep way in the subjects they study.
And I think in society, we're going to have to bring together the accumulated wisdom
of our civilizations to harness these technologies effectively.
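To make the seven rubric elements above concrete, here is one hedged way a team might encode them as a review checklist in Python; the element wording paraphrases the conversation, and the structure is purely illustrative rather than anything from the book:

```python
from dataclasses import dataclass, field

# The seven rubric elements, paraphrased from the discussion above.
RUBRIC = [
    "data: do we have the data, and should we have it?",
    "model: is there a tractable way to solve the problem at all?",
    "dependability: security, privacy, abuse-resistance, resilience",
    "understandability: explainability, reproducibility, auditability",
    "objectives: do we actually know the right objective?",
    "fault tolerance: overall tolerance of system faults",
    "emergent phenomena: ethical, economic, geopolitical implications",
]

@dataclass
class RubricReview:
    """Track an assessment of one system against each rubric element."""
    system: str
    findings: dict = field(default_factory=dict)   # element -> (ok, notes)

    def assess(self, element, ok, notes=""):
        assert element in RUBRIC, f"unknown element: {element}"
        self.findings[element] = (ok, notes)

    def unresolved(self):
        """Elements not yet assessed, or assessed as problematic."""
        return [e for e in RUBRIC
                if e not in self.findings or not self.findings[e][0]]

# Example: a review that has addressed data but nothing else yet.
review = RubricReview("short-video recommender")
review.assess(RUBRIC[0], True, "usage logs collected with consent")
print(len(review.unresolved()))   # 6 elements still open
```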
Yeah, I really appreciate what you just said, Alfred. Thank you for very clearly articulating the rubric.
It really brought home to me the fact that we really need diverse perspectives
when we're trying to build the next set of computing systems that we will all use.
It's going to impact all of our lives.
The other thing I thought was really interesting was when you were talking about objectives,
there's a large part of this which is also diversity in terms of where these systems are being built geographically, culturally speaking.
I think having computer scientists as well as folks who are working in these different
fields but from also different cultural and different environments will really add to
creating the right solutions across the world.
Well, that's right. Although frankly, the world doesn't have an agreement as to what the right solutions are.
One of the reasons we use the word context in the title of our book, Data Science in Context,
is that we feel that the answer to the question of what to build, how to build it, etc., is highly contextual. It's contextual to the particular application. So, right, building something which is going to be a tool using AI for helping a creative writer is going to be very different than using AI in self-driving cars or in political recommendations; just very different. And what we care about and are worried about will be different: for a creative writer, maybe hallucinations are good, and they're really bad in a medical application that will give people bad advice. So that's one contextual thing. Second is there's
a societal difference and there are differences in views about the way the world should work and
the importance of pluralism versus societal coherence. Those vary around the world
and I don't think that we as technologists can force any particular opinion on the societies
in which we're a part. We have to be adaptive to the needs of the populations that we serve.
And then finally, there's even a temporal aspect to it. Perceptions of privacy are completely different now than they were in, say, the late 1990s, and they're still evolving. That's just one example, but there are many more. So there's a temporal nature.
As Heraclitus said, the only thing constant is change. And that certainly covers the temporal
aspect of things. No, absolutely.
I 100% agree with you and all that you said.
Do you feel like there are enough forums for bringing these diverse thought leaders or
these disciplines together?
Alfred, do you think our universities are encouraging more of this?
Do you think industry is being more conducive to this? What is a good way for somebody who is wanting to sort of jump into this field,
whether they're from the computing side or from say an interdisciplinary area,
what would be the best way for them to engage? Well, I think it's a good question. I actually
think there probably isn't enough. I think that the computer scientists jumped on ethics.
And I think that has been quite a good set of discussions
with an enormous amount of thought done.
But I don't think it's as broad as it needs to be.
And I do think that is somewhat of a challenge.
I think all of us, however,
if we give ourselves permission to think broadly,
we can do it. And I would say that we need to think through what it means, with AI being so important to societies, and what is the impact of open source, which is clearly something that contributes to the broad dissemination of AI around the world.
I think there's a conclusion that can be reached to that.
And that is that AI will be everywhere and it will be extremely effective everywhere. I don't think open source
will go away, but it impacts how we think about the regulation of AI and such. I think we need
to think through the economic impacts. Mario Draghi talked about some of the risks, perhaps, of European structures in this, maybe limiting innovation or startup opportunities in Europe. That was a good example of a start, but I haven't seen really too much in the way of programs that are discussing the breadth of this. But I think as individuals, if we allow ourselves to think broadly, I think we can do so.
Absolutely.
I hope this conversation also inspires our listeners
to think about what it would mean to broaden their horizons
and engage and interact with a different set of people
to build the next set of solutions we have.
For our final bite, Alfred, I would
love to know what are you most excited about in the field over the next few years?
I think if there's anything that I and most core technologists are intrigued about, it's gaining a better understanding of why foundation models, large language models, and large language models with reinforcement learning, why all these things, work so well and how they can be better. I think particularly why they work. I'm really
intrigued by the research that's going on in the space and trying to understand how these networks
are pieced together and what's happening inside of them. I will feel a lot better when I understand how they work, and I think many people will. For the first time in our field, we're doing things without knowing why they work, which has never been the case in computer science. It's an engineering and mathematical discipline in which we could characterize everything we did rather accurately. Now we can't, and if there's any single thing that comes to mind, I think that's just so exciting.
And I'm going to be watching that.
I wish I were a younger practitioner involved in it.
That's wonderful.
Alfred, thank you so much for sharing your rich and accomplished career journey with
us and giving us such deep, insightful thoughts on how to sort of navigate the world of research
and industry and being an entrepreneur.
This has been an amazing conversation.
Thank you so much for the time.
I'm glad to be part of this.
I do indeed hope it's helpful to some people.
Thank you.
Of course.
Thank you for speaking with us at ACM ByteCast.
ACM ByteCast is a production of the Association for
Computing Machinery's Practitioners Board. To learn more about ACM and its
activities, visit acm.org. For more information about this and other episodes,
please visit our website at learning.acm.org slash bytecast. That's learning.acm.org slash b-y-t-e-c-a-s-t.