Your Undivided Attention - Future-proofing Democracy In the Age of AI with Audrey Tang
Episode Date: February 29, 2024
What does a functioning democracy look like in the age of artificial intelligence? Could AI even be used to help a democracy flourish? Just in time for election season, Taiwan's Minister of Digital Affairs Audrey Tang returns to the podcast to discuss healthy information ecosystems, resilience to cyberattacks, how to "prebunk" deepfakes, and more.
RECOMMENDED MEDIA
Testing Theories of American Politics: Elites, Interest Groups, and Average Citizens by Martin Gilens and Benjamin I. Page
This academic paper addresses tough questions for Americans: Who governs? Who really rules?
Recursive Public
Recursive Public is an experiment in identifying areas of consensus and disagreement among the international AI community, policymakers, and the general public on key questions of governance.
A Strong Democracy is a Digital Democracy
Audrey Tang's 2019 op-ed for The New York Times
The Frontiers of Digital Democracy
Nathan Gardels interviews Audrey Tang in Noema
RECOMMENDED YUA EPISODES
Digital Democracy is Within Reach with Audrey Tang
The Tech We Need for 21st Century Democracy with Divya Siddarth
How Will AI Affect the 2024 Elections? with Renee DiResta and Carl Miller
The AI Dilemma
Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
Transcript
Imagine that 2024 was the year that democracies implemented a broad suite of upgrades
that made them future-proof to AI attacks.
Upgrades like verified phone numbers, a concept called pre-bunking,
implementation of multiple backup systems, always assuming that you are going to be hacked,
and using paper ballots for elections, letting citizens use their own video to verify all
the counting. These are the kinds of upgrades that can make a democracy resilient in the age of AI.
And the best living example of that is Taiwan. Because about a month ago, Taiwan had its major
presidential election in which everyone thought that China would be using the latest AI tools
to influence the outcome. And yet, Taiwan survived. In this episode, we're going to go on a tour
of what Taiwan has done under the leadership of Audrey Tang, who serves as the Minister of Digital
Affairs. We've had Audrey on the podcast before, but we wanted to have her back to talk through
how she understands this moment in AI development,
especially coming off of her party winning a new majority
in Taiwan's election at the start of this year.
So, Audrey, we are so excited to have you back again.
Welcome to Your Undivided Attention.
Happy to be back.
Okay, so when we think of AI and election harms,
the first thing most people think of is deepfakes of politicians,
things that have the power to sway voters in an election.
How has this played out in the most recent elections in Taiwan?
So in 2022, I remember
filming a deepfake video of myself with our Board of Science and Technology.
This is called pre-bunking.
Before deepfake capabilities fell into the hands of authoritarians and so on,
already two years ago, we pre-bunked this deepfake scenario
by filming myself being deepfaked and also showing everybody
how easy it is to do so on a MacBook and so on,
and how easy it will be to do so on everybody's mobile phone.
And so with this pre-bunking, the main message is that even if it is interactive,
without some sort of independent source, without some sort of what we call provenance,
which is a digital signature of some sort, do not trust any video just because it looks like
somebody you trust or a celebrity. Now, pre-bunking takes some time to take effect, so we repeated
that message throughout 2022 and 2023.
But the upshot is that in 2024, when we did see deepfake videos during our election campaign season,
they did not have much effect, because for two years people had already built antibodies,
or inoculations, in their minds.
Yes, I love that example because you're pointing to both the need to understand the threat.
Just like in cybersecurity, you let the defenders know ahead of time so that you build up
the antibodies, you try to patch your system before the attack
actually gets used. I could imagine people in our audience listening to the example you gave, Audrey, of like, all right, we need to pre-bunk by showing people how deepfakes work.
But I think the deeper point is if you don't already have a system that lets you do verification and content provenance, then you don't actually leave people with anything to do except to doubt everything.
So I'm curious, like, your philosophy there, and then how you go about doing that large-scale upgrading.
So in terms of information manipulation, we talk about three layers.
Actor: who is doing this?
Behavior: is it millions of accounts engaged in coordinated inauthentic behavior, or is it just one single actor?
Content: whether the content looks fake or true.
By content alone, one can never tell.
And so we're asking people to tell whether it is trustworthy by its behavior or by its actor.
Starting this year, all the governmental SMS messages, from the electricity company, from the water company, from really everything, go out from this single number, 111.
So when you receive an SMS, whether it is the AI deliberation survey asking you to participate, and we'll talk about that later,
or whether it is just to remind you of your water utility bill,
it all comes from this single number, 111.
And this number is not forgeable. In Taiwan, normally,
when you get an SMS, the sender's number is 10 digits long.
If it's from overseas, it's even longer.
So one can very simply tell by the sender that this comes from a trusted source.
This is like a blue checkmark.
And already, our telecom companies, the banks, and so on,
are also shifting to their own short codes.
And so it creates two classes of senders.
One is unforgeable, guaranteed to be trustworthy.
And for the other class, you need to basically meet face-to-face
and add the sender to your address book
before confirming that it actually belongs to a person.
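For illustration, here is a minimal sketch of the two-class sender model Audrey describes. It is entirely hypothetical: the registry contents, function name, and logic are assumptions for illustration, not Taiwan's actual carrier-side implementation.

```python
# Hypothetical sketch of a two-class SMS sender model, for illustration only.
# The registry and helper names are assumptions, not the real Taiwanese system.

TRUSTED_SHORT_CODES = {"111"}  # government's unforgeable short code
# Telecoms and banks registering their own short codes would extend this set.

def classify_sender(sender_id: str, address_book: set[str]) -> str:
    """Classify an incoming SMS sender.

    Short codes are unforgeable within the carrier network, so they can be
    trusted on sight. Ordinary 10+ digit numbers are only trusted if the
    user has verified them out-of-band (e.g. face-to-face) and saved them.
    """
    if sender_id in TRUSTED_SHORT_CODES:
        return "trusted"        # class 1: unforgeable short code
    if sender_id in address_book:
        return "known-contact"  # verified out-of-band by the user
    return "unverified"         # class 2: could be anyone

print(classify_sender("111", set()))         # trusted
print(classify_sender("0912345678", set()))  # unverified
```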
How about disinformation? How do you target that?
We have Cofacts, collaborative fact-checking,
where everybody can flag a message in their chat group
as possibly scam or spam.
And so what it does is that it has a real-time sampling
of which information packages are going viral.
Some of them are information manipulation.
Some of them are factually true.
But nonetheless, we have a real-time map of what's going viral at this very moment.
And by crowdsourcing the fact-checking (think Wikipedia, just in real time),
we now have not just the package of information,
but also the pre-bunking and debunking that goes with it.
And with newer methods of training language models, like direct preference optimization,
the model figures out the logic of what's approved and what's rejected.
And even newer methods, like SPIN, just show it the way that the fact-checkers do their
painstaking work, and it just learns from that train of thought.
Using these methods, our civil society has been able to train a language
model that provides basically zero-day responses to zero-day viral disinformation, before any
fact-checker can look at any viral message. So we only really have to focus on the three
or four things every day that are really going viral.
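For technically minded listeners, direct preference optimization has a simple core. Here is a minimal, self-contained sketch of the DPO loss (Rafailov et al., 2023) in PyTorch. The tensor names are illustrative, and this is not the civil society training pipeline itself, just the objective such a pipeline builds on.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logp: torch.Tensor,
             policy_rejected_logp: torch.Tensor,
             ref_chosen_logp: torch.Tensor,
             ref_rejected_logp: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Direct Preference Optimization loss.

    Each tensor holds per-example log-probabilities of the approved
    ("chosen", e.g. a fact-checker's accepted reply) and rejected
    responses, under the trainable policy and a frozen reference model.
    """
    # How much more the policy prefers chosen over rejected...
    policy_margin = policy_chosen_logp - policy_rejected_logp
    # ...relative to the reference model's preference.
    ref_margin = ref_chosen_logp - ref_rejected_logp
    # Maximize the log-sigmoid of the scaled difference.
    return -F.logsigmoid(beta * (policy_margin - ref_margin)).mean()

# Toy usage with made-up log-probs for a batch of 3 preference pairs.
lp = lambda *vals: torch.tensor(vals)
loss = dpo_loss(lp(-5.0, -4.2, -6.1), lp(-7.0, -6.5, -6.0),
                lp(-5.5, -5.0, -6.0), lp(-6.0, -6.2, -6.1))
print(loss)
```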
So I guess I have a question about that. I mean, isn't it possible that AI will enable a much more diverse set of future deepfake
categories and channels? So I could say, what are the political tribes that I want to attack,
use AI to generate
10 different kinds of deepfakes for 10 different kinds of tribes,
and there are different little realities, like little micro-realities,
for each of them.
And so we might have previously lived in a world
where most of the world's attention in your country
is on a handful of channels
and a handful of stories are going viral.
So we have to sort of expand the horizontality
of the defenses, no?
Yes, it does enable
a new kind of precision persuasion
attack that does not rely
on the share or repost
buttons. Instead,
it just relies on direct messaging, basically,
and talks to very individualized people
with a model of their preferences.
On the other hand, the same technology can also be used
to enhance deliberative polling.
Polling is when you call a random sample of, say, 4,000 people
or 10,000 people,
and ask them the same set of questions to get their preferences.
It is used during elections, of course,
but also during policymaking.
What polling did not do previously
is allow the people picking up the phone
to set an agenda, to speak their mind,
to show their preferences,
and to let us, the policymakers, know
what the current fears and doubts are,
and also the personal anecdotes
that may point to solutions, from each and every individual.
So we're also investing in deliberative polling technology
that uses precisely the same kind
of language model analysis tools
that you just talked about, but not to con people, not to scam people, but to truly show
people's preferences. So that when we pair the people who volunteer to engage in this kind
of face-to-face or online conversation into groups of 10 people each, we ensure that each group
of 10 has the diversity of perspectives and a sufficient number of bridging perspectives
that can bring everybody together to some place where people can live with
enough consensus. And so if we do this at scale, we are no longer limited by the number of
human facilitators, which is very important and very treasured, but cannot simply scale
to tens of thousands of concurrent conversations. And then we can get a much better picture of
how to bring people together.
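To make the grouping step concrete, here is one plausible sketch of forming groups of 10 that each span several opinion clusters by round-robin assignment. The clustering input and all names here are assumptions for illustration, not the ministry's actual algorithm.

```python
# Illustrative sketch only: one plausible way to form discussion groups of 10
# that each mix opinion clusters. Not the actual system used in Taiwan.
import random
from collections import defaultdict

def form_groups(volunteers: dict[str, int], group_size: int = 10) -> list[list[str]]:
    """volunteers maps participant id -> opinion-cluster label
    (e.g. from clustering their survey answers). Groups are filled
    round-robin across clusters so each group spans perspectives."""
    by_cluster = defaultdict(list)
    for person, cluster in volunteers.items():
        by_cluster[cluster].append(person)
    for members in by_cluster.values():
        random.shuffle(members)

    pools = list(by_cluster.values())
    groups, current = [], []
    while any(pools):
        for pool in pools:  # take one person per cluster in turn
            if pool:
                current.append(pool.pop())
                if len(current) == group_size:
                    groups.append(current)
                    current = []
    if current:
        groups.append(current)  # last, possibly smaller group
    return groups

volunteers = {f"p{i}": i % 3 for i in range(25)}  # toy data: 3 clusters
print([len(g) for g in form_groups(volunteers)])  # e.g. [10, 10, 5]
```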
One thing I find frightening is that we're not just talking about
an influx of AI content or copy-paste bots, as we've seen in previous elections. We're also
talking about AI that has the capacity to build individual, long-term, intimate online relationships
with entire swaths of the population in service of steering or amplifying their beliefs.
We're headed towards a world where you'll meet a person on a dating app, you know, start messaging
them, search and find their online profiles, and they'll even send you selfies and introduce you
to their friends. We become like the people we spend time with. What's more transformative
than a relationship? And because you'll have come to trust this person and maybe even their
friends, you will be influenced by their beliefs. It's how the social psychology of belonging
works. And all this time, those people you've been interacting with were never real people at all.
First of all, I think pre-bunking also works on this, in that if you let people know that there is
this kind of attack going on, they will be wary of it. And second, I think instead of point-to-point
relationships, people want to belong in communities. Some people have communities where
they worship together, or practice art together, or do podcasts together, and so on. And so
in such communities, generative AI can also help find the potential people that may want to
form a community with you, instead of just satisfying all your preferences, catering to your every
need, and so on. It's the other way around. It shows
each and every individual the things that they care about in some existing community,
and then leads to much more meaningful ties among multiple people. So when people enjoy that
kind of community relationships, including actually participating in fact-checking communities,
it is much, much more meaningful than an individual-to-individual companion that may cater to your
need, but does not connect you to more human beings. So just to make sure, I think, for listeners
to get this. So it's like, okay, so people are vulnerable to a one-on-one attack of one person
creating a fake relationship to influence another. But then what you're saying is, well, what if
we group people together into communities where they feel this deep care and this deep reflection
of their values, which is what your deliberative polling type systems and structures do, is they
invite people into seeing the people who agree with them, not just agree with them on some
hyper-polarized outrage thing far out in left field, but who agree with them about this
more bridging consensus.
That is right.
And it's not in the content of my speech, but rather in the care and the relationship that it fosters.
It reminds me of Hannah Arendt's point that totalitarianism stems fundamentally from loneliness.
And so what I'm hearing you saying is that there is a solution, not just for better voting.
You know, humans just give one bit of information every four years to decide, like,
which way the country goes.
We could be doing it at a much higher bandwidth.
But it also brings people together. This deliberative polling
actually puts people face-to-face into community to work through problems.
Yes, it builds both longer-term relationships, not just transactions,
and also it deepens the connection.
So it's a more diverse set of people, but also deeper.
So Taiwan is constantly under the threat of some kind of intimidation from China.
You know, the threat of combined cyber attacks with information operations that try to throw people into confusion,
at the same time as it's doing flyovers with its air force,
so it makes you feel like, you know, your island is under attack.
And Audrey, you have a huge amount of practical experience in how to fight back against these kinds of things.
Can you give us an example of how that's worked?
So in August 2022, just before my ministry started, the U.S. House Speaker, Nancy Pelosi, visited Taiwan.
The Chinese have tried to isolate Taiwan. They may try to keep Taiwan from visiting or participating in other places,
but they will not isolate Taiwan by preventing others from traveling here.
And in that week, we saw how cyber attacks, along with information manipulation, truly work from the PRC against Taiwan.
Of course, every day we already face millions of attempts of cyber attacks.
But on that day, we suffered more than 23 times the volume of the previous peak.
An immense number of denial-of-service attacks overwhelmed not just the websites of our Ministry
of National Defense or the president's office.
The Ministry of Transportation also saw that Taiwan Railway stations
had their signboards, the commercial signboards
outside of rail stations, compromised,
replaced with hate messages against Pelosi.
Not only that, but also in the private sector,
the convenience stores' signboards were hacked
to display hateful messages.
And when journalists wanted to check what was actually
going on, was it really true, had they taken over the Taiwan rails? They didn't, but
rumors said they did. They found the websites of ministries and so on very slow to respond.
And that only fueled the rumor, the panic. And concurrently, of course, missiles flew around
our heads. So the upshot is that each of those attack vectors contributes to the amplification
of the other attack vectors. The strategic goal of the attackers, of course, was
to make the Taiwan stock market crash and to show the Taiwanese people
that it's not a good idea to deepen the relationship with the US.
But first, it didn't work.
We very quickly responded to the cyber attacks.
People did not panic.
And we very quickly reconfigured our defenses against this kind of coordinated attack.
But all in all, the battlefield is in our own minds.
It is not in any particular cyber system, which could be fixed and patched
and so on. But if they create the kind of fear, uncertainty, and doubt that polarizes the
society, and make part of the society blame the other side of society for causing this kind
of chaos, then that leaves a wound that is difficult to heal. And so we've been mostly working
on bridging those polarizations. And I'm really happy to report that after our election this
January, all three major parties' supporters feel that they have won some, and there's
actually less polarization compared to before the election.
So we not only overcame the threat of polarization, of precision persuasion
turning our people against each other,
but we also used this experience to build tighter connections,
like a shared peak experience that brought us together.
One of the ways you've worked to heal polarization
is implementing what you call deliberative polling.
I sort of wish there was a better term for it.
But that's where you synthesize input from a large number of Taiwanese citizens in a very clever way,
and then take it straight to policymakers.
When we look at why people don't trust democracy,
I always think of this very telling graph from the political scientists Martin Gilens and Benjamin Page.
It plots average citizens' preferences versus what policies actually get passed.
And there's no correlation.
Everyday citizens' preferences make no difference
in the agenda of what government cares about.
But of course, there is a correlation for the preferences
of what economic elites and special interest groups care about.
So, of course, there's low trust in our institutions.
This is obviously a huge problem
and one that deliberative polling seeks to address.
So can you explain how it works in more detail
and then give us an example of what it looks like in practice?
Sure.
So the first time we used collective intelligence systems
on a national issue was in 2015.
When Uber first entered Taiwan, there were protests and everything, just like in other countries.
But very differently, we asked the Uber drivers, the taxi drivers, the passengers, and everyone really,
to go to this online pro-social media platform called Polis.
And the difference with that social media is that instead of highlighting the most clickbait,
the most polarizing, most sensational views, it only surfaces the views that bridge
across differences. So, for example, when somebody says, oh, I think surge pricing is great,
but not when it undercuts existing meters, this is a nuance. And nuanced statements like this,
in other, antisocial social media, usually just get scrolled through. But Polis makes sure
that they're up front. The same algorithm that powers Polis would eventually find its way
into Community Notes, kind of like a jury moderation system for Twitter, nowadays
X.com. And because it's open source, everybody can audit it to see that their voice is actually
being represented in a way that is proportional to how much bridging potential it has. And also,
it gives policymakers a complete survey of what the middle-of-the-road solutions are that will
leave everybody happier. And much to our surprise, most people agree with most of their neighbors
on most of the points, most of the time. It is only the one or two most polarized points
that people keep spending calories on.
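For readers who want the intuition in code, here is a simplified sketch in the spirit of Polis's group-informed consensus, not the actual open-source implementation: cluster participants by their votes, then score each statement by its weakest support across clusters, so only statements that every camp accepts rise to the top.

```python
# Simplified illustration of "bridging" statement ranking in the spirit of
# Polis's group-informed consensus. Not the actual Polis implementation.
import numpy as np
from sklearn.cluster import KMeans

def bridging_scores(votes: np.ndarray, n_clusters: int = 2) -> np.ndarray:
    """votes: participants x statements matrix of +1 (agree), -1 (disagree),
    0 (skip). Returns one score per statement: the minimum agreement rate
    across opinion clusters, so a high score needs support from every camp."""
    clusters = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(votes)
    scores = np.zeros(votes.shape[1])
    for s in range(votes.shape[1]):
        rates = []
        for c in range(n_clusters):
            v = votes[clusters == c, s]
            voted = v != 0
            # agreement rate within this cluster (0.0 if nobody voted)
            rates.append((v[voted] == 1).mean() if voted.any() else 0.0)
        scores[s] = min(rates)  # bottleneck: the least-supportive cluster
    return scores

# Toy example: two polarized camps; only statement 2 bridges both.
votes = np.array([[ 1, -1, 1], [ 1, -1, 1],   # camp A
                  [-1,  1, 1], [-1,  1, 1]])  # camp B
print(bridging_scores(votes))  # statement at index 2 scores highest
```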
Now, because of that peak experience,
we've applied this method also to tune AIs.
Working with the Collective Intelligence Project,
we worked with Anthropic, with OpenAI,
with Creative Commons, with GovLab, and many other partners.
And when we used the resulting matrix
to train Claude, that's Anthropic's AI,
it is as powerful as Anthropic's original version,
but much more fair and much less discriminatory.
So, Audrey, how do you get over the selection bias effects,
that there are going to be certain kinds of users,
maybe more internet-centric, digital-centric,
digital-native users, who are going to use this,
but then it leaves out the rest?
How do you deal with the problem of selection?
In Taiwan, broadband is a human right,
and broadband connectivity is extremely affordable,
even in the most remote places.
For 15 US dollars a month,
you get access to unlimited bandwidth.
And because of that, we leave no one behind.
And so we just randomly send SMS messages, using the trusted number 111,
to thousands and tens of thousands of people.
And the people who find some time to answer a survey
or just to listen to a call can just speak their mind
and contribute to the collective intelligence.
So while, of course, this is not 100% accessible,
there are still people who need, for example, sign language translation and so on, which we're also working on, and translation into our other 20 national languages. But I think this is a pretty good first try, and we feel good about the statistical representativeness.
Audrey, the nuance in how you create spaces in which conversation happens, I think, is actually critical and deeply thought through. For instance, there is no reply button
in your systems.
And you're like, okay, how do you have a conversation
without a reply button?
So, yes, Polis, the new petition system,
Community Notes on X.com,
they all share this fundamental design:
there is no reply button.
And through this bridging-bonuses algorithm,
we bring the bridging statements
into more and more visibility.
So people can construct longer and longer bridges
that bridge across higher and higher
differences between people's ideologies, tribes, experiences, and so on. I mean, it's mentally
very, very difficult to bridge long distances. This is true for anyone. But just to, you know,
explain an idea to somebody who has slightly less experience, well, that's just sharing your
knowledge, right? That kind of bridging everybody can do. And so by visualizing which gaps
still remain to be bridged,
it turns it into a game almost,
to challenge the people
with a knack for building the bridges
between left and right
that could be made.
This system that gamifies
this bridge-making activity,
I think, is very, very powerful
and is at the core,
regardless of which kind of space
we choose to design.
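The open-sourced Community Notes ranking gives one concrete form of such a bridging bonus: ratings are modeled by matrix factorization, and a note is scored by the intercept left over once each rater's viewpoint factor is accounted for. Below is a toy sketch of that idea, a simplification with made-up data, not the production algorithm.

```python
# Minimal matrix-factorization sketch in the spirit of Community Notes'
# bridging-based ranking: a note scores well only if raters with *different*
# viewpoint factors rate it helpful. Toy illustration, not the real system.
import numpy as np

def bridging_intercepts(R, n_iter=2000, lr=0.05, reg=0.1):
    """R: raters x notes matrix (+1 helpful, -1 not, nan = no rating).
    Model: r_un ~ b_u + b_n + f_u * f_n. Returns note intercepts b_n;
    a high b_n means helpfulness not explained by a shared ideology f."""
    rng = np.random.default_rng(0)
    U, N = R.shape
    b_u, b_n = np.zeros(U), np.zeros(N)
    f_u, f_n = rng.normal(0, 0.1, U), rng.normal(0, 0.1, N)
    obs = ~np.isnan(R)
    for _ in range(n_iter):
        for u, n in zip(*np.where(obs)):  # stochastic gradient steps
            err = R[u, n] - (b_u[u] + b_n[n] + f_u[u] * f_n[n])
            b_u[u] += lr * (err - reg * b_u[u])
            b_n[n] += lr * (err - reg * b_n[n])
            f_u[u] += lr * (err * f_n[n] - reg * f_u[u])
            f_n[n] += lr * (err * f_u[u] - reg * f_n[n])
    return b_n

# Note 0: praised only by one camp; note 1: praised by both camps.
R = np.array([[ 1.,  1.], [ 1.,  1.],   # camp A raters
              [-1.,  1.], [-1.,  1.]])  # camp B raters
print(bridging_intercepts(R))  # note 1 gets the higher intercept
```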
And just to link this for
listeners who know our work on social media:
instead of rewarding division entrepreneurs
who are identifying
new creative ways to sow division and inflammation along cultural fault lines, this is rewarding
the bridging and synthesis entrepreneurs. And per our frequent referencing of Charlie Munger's
quote, if you show me the incentives, I'll show you the outcome, what I love about
Audrey's work is she's about changing the incentives so that we get to a different outcome.
Now, I know election officials from all over the world ask you for advice on how to make
elections more resilient to cyberattacks and disinformation.
What is one takeaway you can give to other countries undergoing elections?
The one takeaway that we would like to share, and I know this is controversial,
from our January election, is to only use paper ballots.
We in Taiwan have a long tradition of each counter in each counting station
always raising above their head each and every paper ballot.
There's no electronic tallying, there's no electronic voting, and YouTubers from all three major parties are practically in every station with a high-definition video camera recording each and every count.
So we use cutting-edge technology, broadband, high-definition video, and things like that, only on the defensive part, that is to say, to guard against election fraud.
So far, there is no better technology than asking each of our citizens, if you want, to bring your high-definition camera, which may just be your phone, and to contribute your part in witnessing the public counting of a paper-only ballot in your nearby station.
Information manipulation attacks do not seek to counter a platform.
What they seek is for people to no longer trust democratic institutions.
So we entirely pre-bunked the election fraud deepfakes that did appear right after the election.
There was no room for them to grow.
Whatever the accusation was, you could find, in that particular counting station,
three different YouTubers belonging to three different parties that did have an accurate record of the count.
And still, we got a result within four hours or so, so it's not particularly lagging.
I'd say there's a grand irony in saying, here's the 21st-century upgrade plan for 18th-century democracies, and one of the big takeaways is...
It is both ironic, but, you know, to Audrey's point, it's using the technology in a defensive way.
It's saying, bring in the 21st century technology to make sure that everyone sees the same thing at the same time.
It creates a shared reality to fight disinformation and other attacks against legitimacy of the election.
Really, just trust the
citizens. The citizens mostly have already figured out the right values, the right steering wheel
points of direction for AIs and our technologies and our investment to go toward. It was just that the citizens
have a very small bit rate, of essentially just a few bits per four years, to
voice their concerns. So simply investing in increasing that bit rate, so the citizens can speak
their mind and build bridges together, does wonders to make sure that your polity moves on
from those isolated voids, vacuums of online meaning, so that they do not get captured by those
addictive, persuasive bots, but can instead move on to alignment assemblies, to jury-like duties,
to participating in deliberative polls, to crowdsourced fact-checking, and many, many other things.
So Taiwan has a bigger role than most countries in the development of AI,
which is that 90% of the world's supply of GPUs, which are the chips that power AI,
they come from one company, which is TSMC, based in Taiwan,
and is partly controlled by the Taiwanese government.
So that gives Taiwan enormous leverage in the development of AI,
and also some responsibility to make sure that AI is developed safely.
So to what degree is that burden being discussed in Taiwan?
First of all, it is true that the chips Taiwan produces power pretty much everything, from advanced military uses, to science, to artificial general intelligence, to everything really.
It's one of the most general-purpose technologies imaginable.
And because of that, I think people trust Taiwan to protect TSMC and its supply chain against cyber attacks and so on.
And so we enjoy the trust of people around the world, and we take it very seriously.
We just established the AI Evaluation Center to test against the broadest range of potential AI risks.
We test not just privacy, security, reliability, transparency, explainability, which are standard, but also fairness, resilience against attacks,
and safety. We're taking our burden quite seriously,
in that, yes, we did produce the chips that could potentially lead to the weaponization of artificial general intelligence.
But we're also taking our role very seriously in making sure that we invest more in the defensive side,
the evaluation and eventually certification side, as compared to the offensive side.
So it's great to hear you're making that investment into AI safety in Taiwan.
What about international cooperation?
What we're advocating is a race to safety, a race to increase not the speed, but rather the steering wheel's capability, the horizon-scanning capabilities, the threat intelligence network, so that we can let people know when a small-scale disaster is just about to happen.
It's like we're crossing a kind of frozen sheet of ice above a river, and we don't quite yet know which
place in that ice sheet is fragile. And in the worst case, the ice sheet breaks and everybody
falls into the water, right? That's the worst-case scenario. And we correspond closely with the
U.S. AI Risk Management Framework and its task force, with the European counterpart, the Ethics
Guidelines for Trustworthy AI, and with the UK counterpart, the AI Safety Institute. The list goes on. And so I think
we are cautiously optimistic
in our horizon scanning capabilities
so that for each harm that is being discovered
we design the liability framework
and if it doesn't work,
then we design the countermeasures defensively
and only when that fails to work
do we talk about more drastic measures.
And if we keep pushing this all the way to the extreme,
we're saying, okay, we're on this thin ice,
it's getting thinner,
but no one knows exactly where the breaking point really is.
But there's this point
where we don't know where the ice is going to break underneath our feet.
And is there some critical point, Audrey,
where something else would need to happen,
some other emergency brake,
whether it's TSMC shutting down the flow of chips in the world
or something else?
How do you think about that question?
Because that's not that many years away.
Yeah, and this did happen, right?
This did happen.
People saw very clearly, back when I was a child,
that the ozone layer was being depleted
by refrigerators, of all things,
because the freons, the chemical compounds used in them,
were rapidly depleting the protective ozone layer.
I think the point I'm making is, if we're racing blind,
if nobody had known back then that the ozone was being depleted,
then yes, drastic measures would have been called for
when we suddenly discovered that we were all going to die from cancer, right?
But people did invest in the sensing capabilities,
and also in the commitment across the political spectrum.
Through the Montreal Protocol, basically, they set a sunset line
so that by year this and year that, we were committed to finding commercially viable replacements.
And so we need more Montreal protocols against specific harms that AGI could bring.
And I totally agree with you that we need to continue our message of basically treating this
as seriously as the pandemic or the proliferation of nuclear arms
or even all the way to climate urgency.
And only if we continue to do that
do we create moral pressure on the top labs
to commit to this kind of sensing and safety measures.
And I think we deeply agree
that we are currently racing towards a very dangerous,
uncontrollable dark outcome.
And we agree that there needs to be some form
of international cooperation, international agreements,
and agreements between really any of the top labs that are racing to that outcome,
so that we know how to manage it as we get close.
And the difficulty, I think, is the ambiguity of where those lines are
and the many different horizons of harm and the different kinds of risks,
the range of risks that occur as you get closer.
Because some could argue that without the Audrey Tang upgrade plan for democracies,
the existing AI that we have proliferated is enough to basically break
nation-states and democracies already.
And so there are already risks that have the ability
to break the very governments that we would need
to be part of those international agreements
and their legitimacy.
And so I think part of what I want to instill,
hopefully, at least as my sort of take in this conversation,
in listeners, is that we need to create
a broader sense of prudence and caution,
and convert that into safety, coordination,
care, and an understanding of those risks.
And your work is an embodiment of, you know,
foregrounding and making primary the vulnerabilities, the fragility of society so that we can
care for that and instead focus the incentives on the bridging, the health, the transparency,
the strengthening aspects of society. Audrey, you're a genuine hero, in the fact that your work
is not only an actual plan and possibility space to upgrade democracies, but is also factoring in
the race for AI itself and what it'll take to correct that. And so my hope is that people will
share this episode around as a blueprint for what it could take to get to that place.
Thank you so much, Audrey.
Thank you.
Two things we didn't get to discuss in this podcast, but that we've discussed with
Audrey before are design criteria for how democracies can be future-proofed.
One of them is that the capacity of governance has to scale with AI.
Otherwise, the engine in your car is going faster and faster and faster, but your steering wheel
just isn't keeping up, and that car is going to crash. So that's one. Two is that our collective
intelligence has to scale with AI. Otherwise, humanity's collective
intelligence will be dwarfed by AI. And that's another way of saying humanity has lost control.
So as we think about future-proofing democracy, these are two criteria for all technologists to keep
in the back of their mind.
Your Undivided Attention is produced by the Center for Humane Technology,
a non-profit working to catalyze a humane future.
Our senior producer is Julia Scott.
Kirsten McMurray and Sarah McRae are our associate producers.
Sasha Fegan is our executive producer,
mixing on this episode by Jeff Sudaken,
original music and sound design by Ryan and Hayes Holiday,
and a special thanks to the whole Center for Humane Technology team
for making this podcast possible.
You can find show notes, transcripts, and much more at HumaneTech.com.
If you liked the podcast, we'd be grateful if you could rate it on Apple Podcasts because it helps other people find the show.
And if you made it all the way here, let me give one more thank you to you for giving us your undivided attention.
