Not Your Father’s Data Center - Quantum Computing Vs. Classical Computing – The Tale of the Tape
Episode Date: July 12, 2021. On this episode of Not Your Father's Data Center, a Compass Datacenters Podcast, host Raymond Hawkins talked with Dr. Robert Sutor, Chief Quantum Exponent at IBM Research, about how quantum computing will partner with modern computing and what changes this will bring for the world.
Transcript
Well, thank you again for joining us. Welcome to another edition of Not Your Father's Data Center.
I'm Raymond Hawkins, your host. Today, we are joined by IBM Chief Quantum Computing Exponent,
Dr. Bob Sutor. Dr. Bob, thank you for joining us again.
I'm happy to be here.
We're so grateful to have you back.
For those of you who don't know, we track all the statistics on our podcast,
and Dr. Bob has been our most listened-to guest in the year, year and a half that we've been recording the podcast.
And so we're super grateful to have him on our first ever video version of the podcast.
So this will be fun.
He and I will be learning live on screen together with you guys, but excited to have him back.
For those of you who don't remember, Dr. Bob's primary function is to promote, talk about, and understand how quantum computing is changing, or going to change, our world,
and he's leading that thought process, that thinking and talking about it, inside IBM.
To say that you are getting the opportunity to listen to the global expert
on quantum computing, I don't think is too strong. Dr. Bob joined us for the audio version only and
really gave us some great insight into how quantum computing isn't going to replace
traditional computing, but it's really going to partner up with it and be an extension of it,
and that it gives us computing capabilities that are
even hard for us to comprehend. Dr. Bob, before we jump into what's changed and where things are
and what's accelerating, can you give us your caffeine molecule analogy? I just think it's a
great way to understand the exponential difference between traditional computing and what we see in
quantum computing, and then we'll go from there. If you don't mind that analogy, I think super helpful.
Yeah.
In fact, a lot of people wonder about quantum computing
versus just classical computing.
And to remind your audience,
so classical computing is really what you do all day long.
So it's the chip, the memory, the storage that might be in your phone,
in your laptop, the servers in the data centers, and so forth. That's classical computing. That technology, the ideas, really goes back to the mid-1940s.
And so I don't want to date it in some sense by calling it classical, like Beethoven or Mozart
or things like that. But in that sense, it's the traditional type of computing. And basically, there are wonderful theoretical results saying that you can compute anything
you want with classical computing.
What they don't tell you is it might take 10 million years though to compute it, or
it might require so much memory and so much storage, it's completely impractical.
So people are always looking for new types of computing, new techniques,
things that are really fundamentally different from classical computing to say, well, all
right, can this new method solve some of the problems that classical computing, high performance
computing can't solve? So inspiration comes from nature. Because one way of thinking about nature, the universe,
the galaxy, everything like this is one massive computer. I mean, the data here is represented
in every atom, every molecule, every proton, every electron, right? Everything that composes you and
everything around you, right, is the data. And nature is a set of applications, programs, right, that makes it all work,
that carries out processes, that produces results. So, a good question to ask is:
can we use nature, the way nature works as a computer and apply that to some of our hard
problems? So, the way nature works when we talk about the very small,
so at the atomic level, things like this, is what's called quantum mechanics.
It's a very deep, very mysterious, very head-scratching, strange part of science,
yet it seems to be the way that describes nature and the universe and things like this.
So the caffeine example that you mentioned is this. When we talk about data centers and we
talk about capacity, how many cycles you can run, how much storage you have, how much RAM,
and then we talk about supercomputers and things like this, you want to say, well, can these really solve the
important problems? So I'm going to give you one very simple problem, and that is, it's a chemistry
problem, right? So we have these wonderful computers. I'm going to give you one molecule.
I'm going to give you one molecule of caffeine. Now, I chose caffeine because, of all the molecules
with all the long names,
Caffeine, people know what it does, right?
It's something specific.
You know where you get it from, coffee, tea.
Widely used molecule.
Widely used and pretty much globally.
So it saves me a lot of trouble when I talk around the world.
And caffeine, of course, is not just a molecule. It enters your system and it goes to
your brain and it makes you alert, keeps you awake and things like this. So if all these classical
computers are so good, why can't we just take this molecule, right? And so instead of studying
the biochemistry, right, the way it works in your brain or in a test tube or a laboratory, why can't we simulate exactly the way that molecule works in one of these great big
computers or data centers or things like that? It seems like a modest proposal.
Well, here's the problem. If I were to write down all the information I would need to work with, with the caffeine molecule.
So we think of, again, you know, just getting slightly technical, the electrons, the positions, and the carbons, and the nitrogens, and the oxygens that compose it, and how they fit together, and what they're doing, and things like this.
The number of bits, so the number of zeros and ones, would be on the
order of 10 to the 48th. So that's a one with 48 zeros. All right? So a byte is eight bits,
right? And then you go from there, a megabyte is a million bytes and so forth like that. So it's
eight million bits, and we get bigger and bigger and bigger. But 10 to the 48th, it's one with 48 zeros, and people estimate that, scientists estimate,
that's between 1 and 10% of all the atoms in the Earth.
So that is, you would have to take, in the worst case, 10% of all the atoms in the Earth
and say, okay, you know, for working purposes, I'm going to assign you a zero, and I'm going
to assign you a one.
We don't have storage like that.
Nobody has data centers with 10% of the Earth.
There are not data centers that big.
Not data centers that big, and moreover,
if I give you 10 caffeine molecules, you'd use the whole Earth.
We're done after that, and that's to map one caffeine molecule.
One caffeine molecule at one instant, classically.
Now, quantum computers, because they're based on quantum mechanics and the way nature actually works,
you could represent that same information in 160 quantum bits or qubits, we call them.
Now, they have to be very good qubits.
And I'm fudging a little bit here by not defining exactly what very good means.
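For listeners who want to check the arithmetic, here is a rough back-of-the-envelope sketch in Python, using the round figures quoted in the conversation rather than any actual IBM resource estimate:
```python
import math

# Back-of-the-envelope check of the caffeine figures quoted above.
# Assumption: the classical description needs roughly 10**48 bits, and
# n qubits give a state space of 2**n values, as described in the episode.

classical_bits = 10**48

# How many qubits give a state space at least that large?
qubits_needed = math.ceil(math.log2(classical_bits))
print(qubits_needed)       # 160

# And 2**160 is indeed on the order of 10**48.
print(math.log10(2**160))  # about 48.2
```
In other words, roughly 160 qubits span a state space on the same order as the 10 to the 48th classical bits mentioned above.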
But we are on the road now.
We have a 65-qubit machine now.
Later this year, we'll have over 100.
In two years, we'll have, IBM Quantum will have over 1,000 qubits.
Now, those by themselves will not quite be good enough, but we are on track to scale and scale and scale. And we certainly do see over the next few years, the next decade or so, getting up certainly to
160 and beyond. So caffeine: classically impossible forever, yet within our sights. And look,
do we care that much about caffeine? No, but we do care about antibiotics. We do care about
antivirals. We do care about
more mundane things, new materials, things like this. As I think about a practical application of that,
I love the caffeine molecule, especially in your situation, Dr. Bob, because it's applicable on
whatever continent you're speaking on. But I think about the really complex problems of
what do we understand about why a problem happens? What do we understand about why
something interacts? And what I think I hear you saying is that for problems where there's just not enough
brute-force horsepower in traditional classical computing, we can solve that with quantum
computing, because just the raw horsepower will be at such an exponentially different level.
That's what I think. I think problems that today we look at and go, can't solve that one, we won't be
faced with that challenge in the future. Is that a simpler way to think about it?
That's one of the classes of problems. That's right. So the caffeine is kind of a... And
quantum computing is... Let's do apples to apples computing. Quantum computing for a quantum
mechanical problem. But yes, quantum computing does have this
exponential aspect to it. And frankly, the word exponential is used too much in terms
of marketing. People use it as, oh, it's just growing really fast, or it must be really
hard. It's an exponential problem. Well, in math, an exponent is something, right? So
two to the 10th, 10 is the exponent, right?
And that's a much smaller number than two to the 1,000th or things like that.
But every time you add a qubit, you double the amount of essentially working space you have.
So one qubit has two pieces of information.
10 qubits has 1,024 pieces of information.
It just grows and grows and grows.
And going back to caffeine, by the time we get to 275 qubits,
when the computer is running, it can represent more information
than there are atoms in the observable universe.
That sounds impossible, but that's why quantum computing is so strange.
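A small illustrative snippet, assuming as above that n qubits correspond to 2 to the n pieces of information, and using a common rough estimate of about 10 to the 80th atoms in the observable universe:
```python
import math

# The doubling Dr. Sutor describes: each added qubit doubles the state space.
# "Pieces of information" here means the 2**n values an n-qubit register spans.

for n in (1, 2, 3, 10):
    print(n, 2**n)         # 1 -> 2, 2 -> 4, 3 -> 8, 10 -> 1024

# At 275 qubits, 2**275 exceeds a common rough estimate (~10**80) of the
# number of atoms in the observable universe.
print(math.log10(2**275))  # about 82.8
```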
Right.
You started to allude to where we are,
and I liked your phrase, 160 really good qubits when we get there. Could you take a minute before
we get down to what's changing? Could you take a minute and talk to us? I come from the traditional
and classical computing world where Moore's Law is a thing, and we've observed it for almost four
decades, maybe five now. And I think my industry largely understands, or my space,
the data center space, largely understands how Moore's Law has impacted the traditional classical
compute world. Could you tell me what it looks like? And I know Moore's Law doesn't apply in
qubits, but can you tell me what you're seeing in the early stages, how far we've come? I think
you mentioned 65 qubits today. What does that, is there a rule of thumb and what does
it feel like? And I'm not asking for any proprietary IBM information, but just what are
you guys thinking as far as how improvement is going to go? So once we establish this idea of
qubits, right? So a qubit is the fundamental piece of information. And we represent that somehow in a quantum computer,
a physical representation of a qubit. So we want two things. We want a lot of them,
and we want them to be extremely good. So we want quantity and quality. In fact, a lot of the games
that people were playing four years ago were saying, look at me, I've got all these qubits.
And they were the worst qubits you can imagine. It's like totally useless. They had a lot of really useless things, right? So it's really these two dimensions of being able to
increase the sizes of our machines while having them able to perform calculations more and more accurately. So too many bad qubits, who cares? Really great,
but too few qubits to do anything useful, who cares? So you've got to increase
them both in tandem. And so anyone who is following quantum computing, you really have
to look at both of these dimensions as I would call them. So your question,
I would translate as saying, how are things going in terms of quantity and quality?
Okay. That's a better way to say it. Quantum quantity and quality. How are we looking?
Quantity and quality. Right. We want a lot of really good things, why not?
So we published toward the end of last year what we call the hardware roadmap.
And we showed how we were going to go from what was then the maximum 65 qubits to 127 qubits by the end of this year.
And we're on track for that.
Next year, we'll go over 400 qubits.
And then in 2023, over 1,100 qubits.
And that's a milestone because, 1,000, people like
these round numbers. But the technology to get us to 1,000 is the same technology that will allow us
to get to many thousands. That is, we will have solved a lot of the technical problems just to
go from where we are now to over 1,000 in two years. That means that we can
continue to scale. Yes, we'll keep coming up with innovations. It's not just a question of
engineering. There's still hard problems. But it says that at least in that direction over the next
few years, we have figured out how to get through the fundamental roadblocks. And that's true of
any technology. And that is not true of all the different technologies
that people are doing.
So that's the good news,
that we will continue to increase to do that.
The next thing, as I said, is quality.
So quality has to do, and it's a strange concept
because people, you know, they think of,
oh, I run an app, and the app just does what I tell it to do and that's it.
Well, you know, if you're a hardware guy, which I admit I'm not, but if you go way back, right, you go way down in the innards of things, bad things happen in hardware.
I talked about those zeros and ones.
Well, occasionally a zero becomes a one and a one becomes a zero. Now, our hardware is very sophisticated. It does error correction. It says, hey, that wasn't supposed to happen. I'm going to change that back, right? With quantum, we're figuring out first how to decrease the errors, and then, in the second half of this decade,
actually implement error correction and making things fault tolerant, which means that the thing
that you care about running, so your quantum application, whatever it may happen to be,
will go from beginning to end without errors from the quantum computer.
So pause there for just one second, Dr. Bob.
I think that you and I are both sufficiently seasoned to remember that with computers in the early days,
crashing was a normal thing, having memory faults was a normal thing,
and that all kinds of applications stuttered, stumbled, crashed on us.
And that was normal.
You accepted that as part of computing that, hey, something went wrong in there at the hardware layer or at the hypervisor
layer, at the operating system layer, at the application layer, and you just rebooted and
you accepted that as for all of this additional capability, you were going to have some glitches
and that was just part of it. And I think in the last couple of decades, the robustness,
you know, I think about my phone, right? There's more compute function in here than what we put a man on the moon with,
right? It's such a well-orchestrated device, and we've hidden those errors so far below
and allowed the computer, the device, to handle them, that most of the younger generation doesn't
understand. There are things that break in there, and not break because someone did something wrong. It's just that this is very sensitive activity going on at a very rudimentary level.
And one little mistake messes it up. And what I think I hear you saying is that we're a little
bit back to that stage in quantum, that we're at the very beginning of figuring out, well,
when something goes wrong, what do we do? How do we handle it? How do we make sure that it doesn't
impact the thing three layers up, my application, that I want to work?
Is that a good way of understanding? I think for a lot of our listeners, they don't remember the
days when you just accepted that your computer crashed regularly. It was okay because you were getting
all of this unique functionality. Well, I think you hit the nail on the head when you said, oh, just reboot.
And if the problem goes away, don't worry about it, right?
Well, why did you have to reboot?
What was the fundamental problem?
So while there are differences, obviously, between classical and quantum computing, when you introduce computing technology, as you pointed out, there are certain standard
problems you have to tackle.
And so, it proceeds apace.
You tackle this class of problem or this standard type of thing.
Let me throw you something out.
We're eventually gonna need something called quantum RAM, quantum memory.
We don't know exactly how to do that yet.
That will be a while.
So, that's down the road.
What is quantum RAM?
Why don't we just use regular RAM?
Well, it's weird stuff.
Turns out in the quantum world, you can't copy data.
And think about RAM.
When we first started computing, we didn't have RAM either.
I mean, that was something that we recognized we needed. Wow, yeah, just to think of it. We're reminiscing about the 80s.
80s weren't that long ago.
That's right.
And things move faster and faster.
And that's what's represented with Moore's Law, right?
The idea that every couple of years, things would get, roughly speaking, twice as good.
And we do have something like that.
It's called quantum volume.
It is a metric related to this quality. We said two years
ago, we'd be able to double it every couple of years. We doubled it twice last year, which kind
of leads me into this other statement, which is not only are we making progress with quantum
computing, we're going faster than we thought we would. Yeah. Well, that leads us nicely into this
acceleration conversation, the lessons you're learning, what's coming, how it's getting faster, what's changing.
We'd love to hear from you on the things that you're comfortable talking about at this stage about what you're seeing as quantum computing accelerates its development.
All right. So a few things. So let me just give you some big statistics, since we keep updating these.
So talking about data centers,
we now have 24 quantum computers on the cloud.
They are in IBM data centers.
We did announce, though, that we will be putting a quantum computer
in the Cleveland Clinic next year.
They're starting a brand new research institute there, and we will be installing the latest and greatest quantum computer
there in Cleveland next year. So the significance of that is these aren't just living inside IBM
and won't just live inside of IBM. We have a machine in Germany that is now online.
We will have one in Tokyo in a few months.
Cleveland is the first announced on-premise quantum computer.
So everything has been cloud-based.
Because, you know, really the future is classical and quantum together, right?
I mean, what IBM calls hybrid cloud, right?
Computing is fundamentally hybrid: you use the best components to solve whatever problem you have to do. So that is news in a way
that represents our confidence that we can support further development of quantum computers when they're not just living down the hall.
Right. And that took a while to get that level of confidence. Right. And so as things evolve in this industry, it's very much, you know, this phrase I just used, levels of confidence.
So you have a data center. Do you want to install something there that is
going to crash and burn all the time because the vendors are... No. So you have to have a level of
confidence in them, but they have to have a level of confidence in their machine because it doesn't
help them as well. So during the evolution of a technology, some of it is just continuing apace
and then there are jumps, right? So,
you've been working on a problem, you've been trying to figure out how to do this,
it's static, static, static, and then you try something, you say, wow, this gets much faster.
We showed, for example, a couple of weeks ago, we mentioned that experimentally,
again, going back to these qubits, whatever they are, you know, I mean, we could really go deeply on them, but these physical
things, a qubit doesn't last forever. You can compute with it for a little while, but then it
becomes kind of chaotic. We showed experimentally that we could produce a qubit that lasts nine
times longer than our previous best. Nine times. It's almost an order of magnitude.
What do quantum computations do? How do they work? Well, a lot of computations these days,
in fact, involve many calls to a quantum computer. So you are sitting there at your laptop or in the
container in the data center or something like this. You're calling across the cloud. You're saying, ah, okay, now I need to do a quantum computation.
I call across the cloud. We had this chemistry example. I know chemistry is hardcore, but
you got to go back to my caffeine example. This is why people are working on this. If
we can tackle it for these types of problems, it'll work for more mundane and perhaps geeky things than this.
In 2017, we estimated that it would take 45 days to do a particular calculation on the cloud.
So if you're sitting on the laptop going back and forth, 45 days of full-time use on the cloud.
So you don't mind putting your laptop on the side,
45 days to compute one thing.
We've reduced that to nine hours.
So we've shown a 120-times speedup
in this chemistry example,
and we expect this to be useful for other things as well.
So this is a huge jump, rather. So 120, it's about
100. That's two orders of magnitude. So we're not saying, hey, you know, this is 1.1 times faster.
You know, here's a little improvement. We're saying this is a breakthrough.
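The arithmetic behind that figure, using the round numbers quoted above:
```python
import math

# Quick arithmetic behind the "120 times" figure quoted above.
hours_before = 45 * 24          # 45 days of full-time cloud use = 1,080 hours
hours_after = 9

speedup = hours_before / hours_after
print(speedup)                  # 120.0

print(math.log10(speedup))      # about 2.08, i.e. roughly two orders of magnitude
```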
When you can do something two orders of magnitude faster,
when you can make qubits last 10 times longer,
nine or 10 times longer,
this means that you're getting these jumps,
which means the innovation,
the research that you've been doing pays off.
And this gets you that much closer
to putting quantum computers
in productive production use as well.
So growing them, let me rephrase,
not growing, extending their livelihood, extending their lifespan, that might be a better way to say it, their shelf life, that's a big one. And then their ability to, and I'm going to not do this
justice, but I know that there's an issue about how many qubits stay close to each other and how they impact each other's state and your ability to keep them in close confines and have them work in concert.
Could you talk to us a little bit about that as we think about, I know you said 65 to 120.
Is that part of the challenge of how do we get them close to each other and still being able to have reliable, good information that comes out of them? So let me give you a way of visualizing this because, you know, for most people,
this idea of qubits and quantum devices is pretty abstract. So think of a qubit as a little
computational unit, whatever it happens to be. And we need to lay them out somehow. So imagine you're
putting them on your desk or on your table, and this is going to reflect the way they actually sit in the device.
So we're going to start with a hexagon.
And the hexagon has six sides, right?
And we're going to put a qubit on each of the vertices,
the points of the hexagon, so that's six.
And then we're going to put one in the middle
of each of the lines that connect the vertices,
so that's another six, for a total of 12. Now, when I look at this, I observe certain things. Some qubits are next to each
other, and that's really good for certain computations because qubits that are right
next to each other can talk directly to each other, right? And that's required for quantum computation.
Ones that are further away,
you got to jump through some hoops with some software
and things like this.
You can make it work, but it's a little bit...
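To make that layout concrete, here is a toy sketch of the 12-qubit hexagon just described, with qubits 0 through 5 on the corners and 6 through 11 on the edges between them. The numbering and the adjacency check are purely illustrative and are not IBM's actual device topology or any Qiskit API:
```python
# Toy model of the 12-qubit hexagon described above: qubits 0-5 on the corners,
# qubits 6-11 in the middle of the edges, each edge qubit linked to the two
# corners it sits between. Illustrative only; not IBM's actual coupling map.

corners = list(range(6))
edge_qubits = list(range(6, 12))

links = set()
for i, m in enumerate(edge_qubits):
    a, b = corners[i], corners[(i + 1) % 6]
    links.add((a, m))   # corner <-> edge qubit
    links.add((m, b))   # edge qubit <-> next corner

def can_talk_directly(q1, q2):
    """True if the two qubits are laid out next to each other."""
    return (q1, q2) in links or (q2, q1) in links

print(can_talk_directly(0, 6))  # True  - neighbors, direct two-qubit operations
print(can_talk_directly(0, 1))  # False - must be routed through qubit 6 in software
```
Pairs that share a link can run two-qubit operations directly; anything else has to be routed through intermediate qubits in software, which is the jumping through hoops mentioned above.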
So let's say I have qubits 1, 2, and 3.
I got 1 and 2, and they're talking to each other,
but I'm also sending some information down to qubit 3,
saying, qubit 3, you need to be doing this.
I want you to be computing this thing over here.
Turns out if you're not careful, when you send that information down to qubit three,
because we're dealing in a situation of very, very low energy,
incredibly low amounts of energy, if it's a little bit too much energy
or if there are defects, it acts as a little antenna that's radiating out noise, static. So, qubit three may be busily doing
what it's supposed to be doing, but it's spreading this static in its immediate area. And that is
screwing up what's happening in qubit two right next to it because it's trying to do its work.
And it's getting, you know, imagine you're trying to listen to something and you're getting this static and
so if you and I are talking and I'm kind of missing the words, Raymond, because, you know, there's this
noise in the background, and I completely misunderstand what you told me to do. Well, so
this is kind of a normal type of thing, and this is called noise mitigation. They're called
spectator errors, and it's yet a
way that quantum computing is different. So you learn how to control spectator errors. You do
things in a different way. First you control it and then you say can we do this in an even better,
smarter way so it doesn't happen as much in the first place. So this is what the evolution looks
like. Quality and quantity.
It's interesting to understand that they both are impacting where we can go.
I'm still moved by your 10 qubits.
You said, I think, 1,024 computations with 10 qubits.
Did I get that right?
Well, the number of pieces of information.
So, yeah, because it doubles.
So when you go from one qubit, you have two pieces of information, two qubits, you have four.
And then the magic happens.
Three, you get eight, 16, 32.
So all of our friends in the financial industry, the magic of compounding, right?
I think that's why you alluded earlier to Moore's Law as we've been able to double.
When we doubled in the early days, it wasn't a big deal.
When you double in the later years, it's a big deal.
And that's what I think we're already starting to see early in the life of quantum computing,
as we double the number of qubits.
I'm going to have to think of another word beside exponent, because I like your analogy that, yes, 2 to the 10th, 10 is the exponent.
There's some other way.
But yes, the incredible growth that you see as things double in out years.
And a lot of times when people, so your compound interest is exactly right. I wish more people
would realize that compound interest is exponential. As is half-life decay, radioactive
material, things like this.
When applied in a negative way, you say, oh, that problem is exponential.
That means that to solve a problem of a certain size, it gets really huge as the size increases.
It starts modestly.
It's the old hockey stick type of thing that people say.
You reach that point, and suddenly the problem becomes intractable.
It just gets harder and harder and harder.
So quantum has good exponential behavior.
Some problems have bad exponential behavior, including some problems in AI, machine learning.
Can you use the good exponential of quantum to control the bad exponential of certain types of problems?
And so that offers us an insight.
Dr. Bob, can I ask you to speculate for a little bit?
When I think of the ability to unlock complex problems and to be able to address them,
I like your 45 days down to nine hours analogy, right?
There's questions that we won't even bother to tackle when there's a 45 day problem, or they're only going to be tackled in a university or an
experimental environment. And it's not practical for the rest of us to use. But when I think of it,
and I'm going to do some bad analogies here, but when I think of when we first mapped the human
genome, and it took, you know, a couple of years. And I think a
lot of people said, well, I don't understand why that's a big deal. And now you can 23andMe your
DNA and you can get it mapped and sent back to you in a few days, right? And I think of that one
as a, you know, not a doctor and not a terribly well-educated person. And I can easily see why
that's important because the more times we've sequenced DNAs, the more times we can go,
okay, this group of sequenced DNA, these people all had lung cancer.
And is there something that we can look in their DNA and see that's the same across everybody that has lung cancer?
It could give us a fundamental way of understanding problems that, because we didn't have the data and we didn't have a way to analyze it quickly enough,
would never reveal themselves to us. So I know that's a crude analogy and sort of a simplistic
person's understanding, but I think of understanding disease is made possible because of our ability to
map the human genome much more quickly and understand it. Could you take, speculatively,
because you live in this world, what kinds of complex problems that today we just look at and go,
that's too high a mountain, could quantum computing be applied to?
So your problem, as you posed, is very interesting because, as stated, it's an AI problem. It's a
machine learning problem. So you have a whole
lot of data. You want to extract patterns. You want to learn things from those patterns. So
if someone else comes along, maybe they have a certain propensity to get a certain type of
cancer because of genetics or lifestyle or pre-existing conditions and things like this. So you can't cheat and go directly to quantum for
that, but you can say, well, can we have quantum do AI or machine learning better?
And down deep, all of machine learning, all of AI is just math, right? It's a collection of
mathematical techniques and actual calculations. So you can
say, can quantum help us do the calculations faster? So that would therefore deliver the AI
results faster, right? And get you what you would want there. The second way of saying is, well,
you know, as I've been talking about quantum computing really is just this very different thing.
It does not have counterparts in classical.
Can I use this radically different type of computing to find patterns in ways I can't classically?
So in that same data, right, can I examine it and say, here's a connection we never would have seen before, right?
And there are even other ways.
There are ways of saying, well, you know, keep doing what you're doing with AI.
Keep doing the calculations.
But there's this one little tricky bit in the middle, right, which performs an essential
function.
It turns out we're learning that quantum can compute those little, if you will,
tricky bits much better, much more efficiently, which you then plug
into the classical computing. And so quantum computing, we anticipate, can improve AI in a
number of different ways, enhance it in a number of different ways. And I point that out because
AI and machine learning is now becoming widely, widely used in many areas that
people never saw. And so that fundamentally is because quantum is just a very good computational
machine. There are other areas, simulating for risk, risk assessments.
So let's say you're buying and selling stocks.
Let's say you're building a factory.
Let's say you're building a new data center, right? So clearly you don't just sit around saying,
hey, I'd like to build a new data center over there.
Great.
You know, you do a risk analysis, right?
You try to figure out how much traffic is it going to get? Is it
well located with respect to where it has to grow? What about the power considerations? What about
the environmental considerations? And as much as you'd want to do a perfect risk assessment,
it's limited. By the way, you're three for three, Doc. You could do my job. Those are the top three, okay: network, power, and environmental risk. Yeah, well, what if I gave you the opportunity to do
even more fine-grained risk analysis? Because the way this is typically done,
you'd say, well, you know, there's this likelihood of this happening, and this
likelihood of that, or there's a likelihood in this range over here. And
moreover, these things are related in different ways. So we believe that quantum will be able to allow you to simulate these much, much
more efficiently than you could before. So you may be able to do far more what-if analyses.
What-if analyses and risk assessments in seconds or minutes that would have taken hours?
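For a sense of what such a what-if analysis looks like classically, here is a minimal Monte Carlo sketch with made-up probabilities for the three risk factors just named (network, power, environment). It is purely a classical illustration; quantum approaches such as amplitude estimation are being studied as a way to accelerate this kind of repeated sampling, but the snippet itself makes no quantum claims:
```python
import random

# A minimal classical Monte Carlo "what-if" sketch using made-up probabilities
# for the three risk factors just mentioned: network, power, environment.
# Purely illustrative; this is the kind of repeated sampling that quantum
# techniques such as amplitude estimation are hoped to speed up.

def one_scenario():
    network_outage = random.random() < 0.02   # assumed 2% chance per year
    power_event    = random.random() < 0.01   # assumed 1% chance per year
    env_event      = random.random() < 0.005  # assumed 0.5% chance per year
    return network_outage or power_event or env_event

trials = 100_000
hits = sum(one_scenario() for _ in range(trials))
print(f"Estimated chance of at least one incident: {hits / trials:.2%}")
```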
We did some interesting work with ExxonMobil that was announced a month or two ago, a paper.
And obviously, they're an energy company, but they care about ships at sea because they have oil tankers.
At any time, there are roughly 50,000 ships at sea.
Well, they're at the mercy of weather. And the strange thing is, the week we announced this with them was the week that ship
got stuck in the Suez Canal. So, would you like to do an analysis of how to redirect all those
ships that were supposed to go through there, right?
To go different routes, right?
I mean, some of them started going south.
Some of them, you know, do I wait?
Do I go?
What are the odds?
How long will it take?
And things like this.
Would you like that calculation to take five days? Or would you rather have a good idea in 10 minutes, right?
Things like this.
So it's these sort of calculations.
Now, no promises.
I really want to be very careful.
We're learning a lot about this.
And sometimes we learn really good things, as I pointed out.
And sometimes we say, you know, that sounded good, but it didn't really work out so well.
But there's such active research in industry, in corporations around the world, in academic institutions, that it's really
heated up. And that's why it's important when I tell you we can do something 120 times faster.
I mean, it's all those people out there who don't work for us, right, can use these systems,
and they can accelerate their work. I think about when you said that we looked at something and it didn't work that well.
We only solve the problems as best we can solve them today.
And I'm going to go back to my classical computing roots because it's the best analogy for me.
We thought that the best way to record information was on disks, external disks that we just slid in a slot, in the beginning of computing.
And then we decided, you know, we could take that platter and we could put it inside the computer,
right, and that the platters didn't have to be changed out and the floppy disk disappeared and
we had hard drives. And then we decided that same concept, although those were both the same concept,
but one was internal and one was external, then we decided, hey, that's not the most efficient
use of storage. We could do it in Flash and in the same types of devices that we did RAM on. And we did it with the best
economically viable, technologically capable solution at the time. And I think that's an
easy one for someone with my age to remember. I remember when computers had two floppy drives and
no memory, and that's how you could talk to the computer. And then we went to hard drives, and now we're at flash memory.
And I think of the same thing, I think those same kinds of things are what I'm hearing you say: today we're good at quantum to the extent of what we understand.
And we're going to try to solve the problems, but we're going to solve the problems with the best capabilities we have today.
We can't even see what capabilities we're going to have in the future, what new problems and how we're going to solve them around quantum computing.
But just like the silly example of floppy drives to hard drives to flash storage, we're going to learn over time.
We're going to get better at it and more efficient at it.
Well, going back to, so there's evolution.
And it's somewhat, certain aspects are somewhat predictable.
Miniaturization. Miniaturization.
Miniaturization always happens.
So whatever we do, whatever you look at and say, oh, that's nice.
It's big.
Just wait.
It'll get smaller.
Now, as a purchaser of these different technologies, you may not have been aware of what went on back in the labs of the people,
the vendors we know historically, who have produced the hard drives.
You know, they didn't always come out right the first time.
They said, well, let's make it this way with such and such oxide, and it didn't work that well.
So, you know, it's the scientific method.
You start with a hypothesis, and then you gather data, and then you gather more data, and then you achieve it.
So you had your choice of the best technology available at each point of the evolution.
But behind the scenes, there are a lot of starts and non-starts.
Pharmaceuticals, right?
How many drugs do they set out to develop that actually end up helping people in different
ways?
I mean, there are lots of variables there.
So there's a lot of work going on with AI, for example, to increase the likelihood that
they will get on the right path quicker so that they will produce medicines faster.
That also goes back to quantum.
How can quantum help?
If we can, going way back to caffeine, right?
If we can do more of that biochemistry in the computer instead of inside you, Raymond,
you'll probably be happier, but it'll also happen a lot quicker.
Right, right, right.
Well, awesome stuff.
Well, Dr. Bob, we always try to sneak in some trivia questions.
We usually make those an ode to our guests.
So we're going to sneak in three trivia questions for our listeners as an opportunity to win some money from Compass and to get to hear from our listeners.
So I've got three. Now, I went to the Harvard of Lee County.
For those of you who don't know, that's Auburn University.
Not quite as prestigious as Dr. Bob, who actually went to the Harvard.
But in honor of Dr. Bob's
education, we've got a couple of questions for you. Number one, and I'm going to have to look
down and read these. Number one, what year was Harvard founded? Now you can email these answers.
Dr. Bob doesn't get to answer. Question number two: who is generally considered the father of quantum computing?
And question number three: what is the quantum computing equivalent of a bit?
That one was answered by Dr. Bob in the show. So those are our three trivia questions. You can
email me, rhawkins at compassdatacenters.com. You can tweet us your answers at Compass DCS,
or you can send them to answers at compassdatacenters.com. Also, I think there is a
text one, so I better read that one to get it right. You can text your answers to 844-511-1545, code word Dr. Bob. One more time,
844-511-1545, the code word Dr. Bob. So IBM's chief quantum computing exponent,
we love talking to you. We're always impressed with your understanding
and your knowledge and how you make it understandable for simple guys like me.
Really appreciate that. Dr. Bob, we're excited to see where it goes. I think the future is exciting.
I think that the world is coming out of a pandemic as we record here at the first week of June.
There's some unrest in the world as there always is, but I think the future is bright. The
changes that are coming in robotics and in AI and machine learning and in quantum computing and in
biomechanics and chemistry. And I just think there's an exciting time to be alive and there's
a great future ahead of us. And hybrid computing that embraces the power of quantum computing,
I think is going to be a part of it. And we're grateful to have you talk to us about it and help us learn a little bit about it.
Thank you for joining us. Thank you so much. Glad to. And I hope you all understand now my
title of Chief Quantum Exponent. It's a bit of a play on words here because exponent can mean more
than one thing. Raymond, I want to do a shameless plug just before I go here. Please do. Absolutely.
So last time I was here, I talked about my book about quantum computing, Dancing with Qubits.
The idea is that, look, if you're going to do quantum computing, you're going to have to learn some math.
Sorry, but I'm going to take you from beginning to end, and we're going to go through everything you need.
It just came out yesterday in hardcover.
It's available on Amazon in the US and soon other geographies as well.
I have another book, which is at the moment called Dancing with Python. So it is a book about
programming. It's an introduction to coding using the Python programming language. But what's
different about it is I teach you not only all the classical stuff, but I also start teaching about quantum coding.
So in one place, you can learn to code
and also start learning in quantum
because computing is computing, right?
And so with luck, that'll be out by the end of August.
So go check out Amazon, check out Dancing with Qubits.
I have the soft copy version.
I will be buying the hardback version and recommend it
and looking forward to Dancing with Python coming later this year.
My daughter keeps asking me, what's all this dancing going on?
Good stuff.
Well, Dr. Bob, it's been awesome having you.
We're so grateful.
Thank you for doing my first ever video podcast with me.
We are excited to learn about this new way to deliver content to our audience.
And we're grateful that you were with us for the very first one.
Thank you, sir.
Thrilled to be here.
Great to see you again, Raymond.
Thank you.
Bye now.