Orchestrate all the Things - Pragmatic AI adoption: From futurism to futuring. Featuring Elisa Lindinger, SUPERRR Co-founder
Episode Date: October 2, 2025

What does it mean to be pragmatic about AI adoption, while staying true to the values and mission driving people and organizations? When Elisa Lindinger decided to talk about AI, her intention was to say what she had to say once, and then move on with her life without having anyone ask about AI ever again. The plan backfired heavily, but somehow, that turned into a good thing. Lindinger is the Co-founder of SUPERRR, an independent non-profit organization. SUPERRR was created to serve the thesis that digital policy is social policy, and it needs bold visions and feminist values. Like most other individuals and organizations today, Lindinger's inbox has been flooded with new invitations every day: invitations to discuss AI, to facilitate workshops on feminist AI, or the inevitable coaching offer to finally learn how to prompt properly. This made Lindinger feel that other topics that are just as crucial are disappearing from the conversation. "AI and Unlikelihood" was an attempt to situate how the people at SUPERRR view the phenomenon of AI, and why they believe it's essential to return our attention to other topics as well. What happened instead was that SUPERRR's post went viral on LinkedIn, reigniting the topic of AI and stealing the limelight. An algorithmic glitch? Perhaps. But SUPERRR's stance of rejecting the narrative of blind adoption of generative AI resonated with many people. We met with Lindinger to explore the nuance behind what some might superficially call a Luddite approach, and to talk about setting priorities right, imagining futures people want to live in, and how AI may fit there. With cracks in the AI narrative beginning to show, the backdrop could not be more timely. Article published on Orchestrate all the Things: https://linkeddataorchestration.com/2025/10/02/pragmatic-ai-adoption-from-futurism-to-futuring/
 Transcript
    
                                        Welcome to orchestrate all the things.
                                         
                                        I'm George Anadiotis and we'll be connecting the dots together.
                                         
                                        Stories about technology, data, AI and media and how they flow into each other, saving our attacks.
                                         
                                        And that is another skill set, actually, because we are so used to, you know, living in highly structured environments
                                         
                                        that do not really give us a space to be creative, to kind of break free from the things that we have learned
                                         
                                        and imagine something radically new
                                         
                                        that we then can figure out
                                         
                                        whether we can actually reach it and work towards it.
                                         
    
                                        So this is actually what we want to work on more.
                                         
It's futures literacy, we call it.
                                         
                                        So again, we use the term literacy here as well.
                                         
                                        And of course, AI plays a role in that
                                         
                                        because right now hardly anyone can imagine a future without AI.
                                         
                                        And what we do is question, yeah, but what for?
                                         
                                        You know, like it doesn't have to be there.
                                         
                                        You can make the decision.
                                         
    
                                        What would your decision look like?
                                         
                                        What does it mean to be pragmatic about AI adoption in 2025
                                         
                                        while staying true to the values and mission driving people and organizations?
                                         
                                        When Elisa Lindinger decided to talk about AI,
                                         
                                        her intention was to say what she had to say once
                                         
                                        and then move on with her life without having anyone asking about AI ever again.
                                         
The plan backfired heavily, but somehow that turned

into a good thing. Lindinger is the co-founder of SUPERRR, an independent non-profit organization.
                                         
    
SUPERRR was created to serve the thesis that digital policy is social policy and it needs bold
                                         
visions and feminist values. Like most other individuals and organizations today, Lindinger's

inbox has been flooded with new invitations every day. Invitations to discuss AI, to facilitate
                                         
                                        workshops on feminist AI or the inevitable coaching offer to
                                         
finally learn how to prompt properly. This made Lindinger feel that other topics that are just
                                         
as crucial are disappearing from the conversation. "AI and Unlikelihood" was an attempt to situate
                                         
how the people at SUPERRR view the phenomenon of AI and why they believe it's essential to return
                                         
our attention to other topics as well. What happened instead was that SUPERRR's post went viral
                                         
    
on LinkedIn, reigniting the topic of AI and stealing the limelight. An algorithmic

glitch, perhaps, but SUPERRR's stance of rejecting the narrative of blind adoption of generative
                                         
                                        AI resonated with many people.
                                         
We met with Lindinger to explore the nuance behind what some might superficially call
                                         
                                        a Luddite approach, and to talk about setting priorities right, imagining futures people
                                         
                                        want to live in, and how AI may fit there.
                                         
                                        With cracks in the AI narrative beginning to show, the backdrop could not be more timely.
                                         
                                        I hope you will enjoy this.
                                         
    
If you like my work on Orchestrate all the Things,
                                         
                                        you can subscribe to my podcast, available on all major platforms.
                                         
My self-published newsletter, also syndicated on Substack, Hackernoon, Medium, and DZone,
                                         
or follow Orchestrate all the Things on your social media of choice.
                                         
                                        Hi, my name is Elisa Lindinger.
                                         
                                        I'm based in Berlin.
                                         
                                        And six years ago, I co-founded an organization
                                         
called SUPERRR. It's a non-profit organization and we work on the digital transformation,
                                         
    
                                        so all things tech and tech policy. And we try to figure out how we can use the digital
                                         
                                        transformation to advance social equality. Okay. That was actually very, very fast. I always
                                         
                                        ask people to do it in three minutes or less, but I think you may have done it in under a minute
                                         
or even under 30 seconds. So thanks a lot for the

introduction. And precisely because you did it really, really fast, I think it's maybe worth clarifying
                                         
                                        a few things about you and the organization as we progress through the conversation.
                                         
                                        So I was wondering if you wanted to specifically tell us, so you are one of the co-founders
                                         
of SUPERRR.
                                         
    
                                        So I was wondering if you wanted to share with us what was the motivation for founding the organization
                                         
                                        and when and how, under what circumstances, let's see, did you found it?
                                         
                                        Right.
                                         
                                        So maybe I can use the opportunity to share a little bit more of my journey,
                                         
what brought me towards founding SUPERRR.
                                         
                                        My path in life is not very straightforward.
                                         
                                        I didn't have the one clear-cut career.
                                         
                                        After, you know, like graduating from school,
                                         
    
I studied prehistoric archaeology, which is very fringe in itself, right?
                                         
                                        It's a very niche topic.
                                         
                                        It's also a very data-driven topic.
                                         
So that's when I first got in contact with, you know, databases, database work, the structure of data.
                                         
                                        Also, all sorts of statistics, statistical methods, computer applications in archaeology.
                                         
                                        So that was basically my first, I would say, professional touch point with technology.
                                         
                                        I've always been a tinkerer.
                                         
So, you know, that also kind of complements where I am today.
                                         
    
And after graduating and also working for a couple of years in archaeology, and also on these computer applications for archaeologists, I decided to switch sides and joined an institute for computer science, worked there, did some consultation for archaeological fieldwork as well, on work processes, on databases and these types of questions.
                                         
                                        And then, you know, at some point, I guess every academic cares to ask themselves whether they want to remain in academia or whether they want to pursue a different career.
                                         
                                        And for me, what has gotten more and more attractive was civil society work.
                                         
So value-led work that focuses on advancing social justice, on, you know, dedicating your work to maybe, as I personally see it, more impactful

ways of influencing people's lives, together with many other people who are already working in
                                         
                                        this space. And I ended up in civil society organizations who also work on open data,
                                         
                                        on tech, open government, these topics. And I found that very interesting. There was one thing
                                         
                                        that kept bugging me, though, and that was the question, what do we have open source, open data,
                                         
    
all these open concepts for? And for me, those were always valuable. But the question

still remained. So like, what's the goal? Like, openness is not a goal for me. It's a tool.
                                         
                                        It's a, it's a value that we have to apply for something. And that is actually the reason why
                                         
                                        my co-founder, Yulia Kloiber and I, we were co-workers at the time at a different organization,
                                         
                                        why we decided we wanted to, yeah, we wanted to found our own organization because our ideas of
                                         
                                        advancing a more just and equitable society with a focus on Germany at the beginning. We didn't
                                         
                                        have a space in an existing organization where we could do that. And the second thing is that we
                                         
                                        both identify as ardent feminists, so feminist values guide our work. And we also thought it might be
                                         
    
                                        worthwhile, not just to apply that in our work and not, you know, just demand feminist action from
                                         
                                        others, but also figure out how we can actually run our own organization along feminist values. What does
                                         
that even mean? What does that look like? There are other organizations that do that, of course,

though I guess not so many, at least back in the day, in the tech field. Yeah, but that is why we opened
                                         
                                        our own doors and tried to build up our own structure. Great, great. Thanks. Thanks for elaborating.
                                         
And yeah, I have to say, just as I was looking into the sort of work that the organization does
                                         
                                        and the sort of background that you have, what struck me was precisely this.
                                         
                                        I mean, the combination of this value-driven, let's say, attitude that you bring,
                                         
    
                                        but also with a very strong technical background.
                                         
                                        And that seemed rather unique to me because there are many non-profits,
                                         
                                        many organizations in general who work in technical, in the technical domain,
                                         
                                        and also others who do like social work.
                                         
let's put it broadly. But the intersection, I think, is rare, and I guess this sort of

justifies, let's say, the unique perspective that kind of led you towards founding this organization.

And I guess it's also driving the work that you do within the organization. So I guess it's a good

point to bring up the occasion, so to speak, that connected us a while ago.

I believe it was possibly a couple of weeks back, you published an article on the stance of your organization,
                                         
and I guess, to some extent at least, your personal stance as well, towards the use of AI.
                                         
And the reason that this somehow served to connect us was that it got, perhaps unexpectedly, lots of traction on social media.

So it was re-shared, let's say, by many people,

who also seemed to share that perspective.
                                         
                                        So with that in mind, I was wondering, well,
                                         
                                        before we actually get to the core of your thesis
                                         
                                        and the different points that you make,
                                         
    
                                        I was wondering, perhaps it's worth also sharing
                                         
                                        the process through which you arrived at this public statement.
                                         
                                        Because, you know, having said what you just did,
                                         
                                        I'm just curious, so how does the team building and the team decision making work within an organization like yours?
                                         
                                        We are a very small organization and we all have our own focused topics that we have built up some expertise in over the years.
                                         
And I guess to some degree the things that relate to AI are not just my topic, but I did work in research projects

back when it wasn't cool, you know, before the current AI hype. In the 2000s, like around 2007 to 2010, I worked in a machine learning project.

So I am somewhat, you know, acquainted with the underlying assumptions, methods and the state of the art back then, because, of course, I also followed the topic, right?
                                         
    
                                        Because I find it very interesting. I'm not, you know, I'm not against the mathematics behind it.
                                         
                                        So, yeah, so I guess that is why the decision fell mostly to me.
                                         
                                        There was another movement that happened in the team over the past, let's say maybe one or two years,
                                         
                                        because the whole topic of artificial intelligence has gotten so much traction in media.
                                         
                                        Also, of course, in civil society, because it seems like everyone now has to somehow have an answer ready.
                                         
                                        Like, what do you think of AI?
                                         
                                        And are you using it?
                                         
                                        And, like, what do you do with it?
                                         
    
                                        And what are the challenges or the opportunities that you see?
                                         
                                        And we kept being approached by many different organizations to, you know, to say, can you do a workshop?
                                         
                                        Do you want to write an article?
                                         
And we usually, not always, but like the larger percentage, turned them down, because we thought, you know, there are so many very capable organizations working on AI.
                                         
We don't necessarily have to step into that field, because it seems like there is a lot of research going on,

there is also a lot of public debate, and we would rather work in the margins where nobody's

exactly looking. And AI didn't, at that point, seem to be a field that is not getting enough

attention, right? So, but then, you know, when we got more and more requests, there was a need in

our team to, like, respond with more than "sorry, we cannot do this" or "we

don't work on AI". That seemed too simplistic an answer. So the idea was to come up
                                         
                                        with something that kind of frames our perspective on artificial intelligence so that then
                                         
                                        people can make more, make a more informed decision, whether they want to approach us and want
                                         
                                        to work with us on the topic or whether they say, you know, like we don't share your perspective
                                         
                                        and maybe we look for another partner, which I think is totally valid. Okay, okay. Great. Thanks.
                                         
Thanks for the background. So, if I were to summarize the stance that the organization is

taking, it would be something like saying no to mindless adoption, let's say. So not completely

rejecting it out of hand, but being cautious about the specific use, and also being aware of what comes

as part and parcel of so-called AI.
                                         
                                        And I think it's actually the first point to clarify,
                                         
                                        and the first point you also make yourself.
                                         
                                        So the term AI is actually a bit misleading in a way
                                         
                                        because it bundles together many different things,
                                         
                                        many different practices,
                                         
                                        and so it doesn't really offer much clarity.
                                         
    
                                        Right. I think you put that perfectly.
                                         
There is quite some debate about what AI actually is, and I think I know why people turn to the term artificial intelligence: because it is somewhat descriptive, it is a unique term. But the methods behind it are far from unified, you know, they are far from similar. It's a broad spectrum. And what we've seen over the past years, and I would love to hear
                                         
                                        whether you've seen something similar, is that once you shift from talking about specific
                                         
                                        applications, which is totally valid, to just talking about artificial intelligence as kind
                                         
                                        of a placeholder, it's suddenly, like, the discourse tends to shift into the, you know,
                                         
                                        the all overpowering artificial intelligence, the singularity that Ray Kurzweil proposed to
                                         
                                        happen, I don't know, I guess, sometime in the past even. So yeah, I think that is our main
                                         
criticism, that by talking so much and using the term artificial intelligence, we are actually kind of
                                         
    
                                        making it more opaque what we are talking about. And if a topic becomes opaque, it's really
                                         
difficult to voice valid criticism, to find a way forward. And that is, you know, yeah, so that's why we criticize that first. The term artificial intelligence, I think, is not helping to have a nuanced discussion.

So this is more of a narrative criticism, right? Not so much criticism of the technology itself.

Exactly, yeah.

And I think it's a very valid point. And what I find useful, personally, when having these types of conversations is starting with a necessarily lightweight overview, because if you want to go over the entire history of AI and the different types, it can
                                         
                                        be a semester university class in and by itself. But, you know, like the three or four minute
                                         
                                        version, like, look, when we talk about AI, there are different types, there is the kind
                                         
                                        of probabilistic machine learning AI, there is the type of like more deterministic rule-based
                                         
    
                                        AI and what we usually mean, the term has sort of been hijacked in the last few years and what
                                         
                                        we usually mean when people talk about AI these days is the Gen AI and all of these things.
                                         
                                        So you can kind of clear the ground and then try to have a positive grounded conversation
                                         
                                        based on that. But I think you're right. If you just start talking about AI, then people start
                                         
                                        thinking about like, I don't know, Skynet or whatever. Exactly. Terminator all the way down.
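As a quick aside for readers, that distinction between rule-based and probabilistic AI can be made concrete in code. This is an illustrative sketch of my own, with made-up rules and toy probabilities, not anything discussed in the conversation:

```python
# Illustrative sketch: the same task, spam detection, approached two ways.
# All rules, words, and probabilities below are invented for illustration.

def rule_based_spam_check(text: str) -> bool:
    """Deterministic, rule-based 'AI': hand-written rules, so the same
    input always yields the same, fully explainable output."""
    banned = {"free money", "act now", "winner"}
    return any(phrase in text.lower() for phrase in banned)

def probabilistic_spam_score(text: str, word_spam_probs: dict[str, float]) -> float:
    """Probabilistic, learned 'AI': scores come from statistics estimated
    from data; the output is a likelihood, not a hard yes/no rule."""
    words = text.lower().split()
    scores = [word_spam_probs.get(w, 0.5) for w in words]
    return sum(scores) / len(scores) if scores else 0.5

# Toy 'learned' probabilities, as if estimated from a labeled corpus.
learned = {"winner": 0.9, "hello": 0.2, "meeting": 0.1}

print(rule_based_spam_check("You are a WINNER, act now"))            # True
print(round(probabilistic_spam_score("winner hello", learned), 2))   # 0.55
```

The point of the contrast: the first function is fully explainable by reading its rules, while the second only produces a likelihood whose quality depends entirely on the data the probabilities were estimated from.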
                                         
Yeah, yeah. So getting the terminology agreed, let's say, kind of establishing common ground, is the first step.
And then, moving from there, you also refer to the fact that you, and I guess others as well, myself included, have been sort of inundated by the ever-growing flood of offers around AI: take these courses, learn how to prompt, be a successful AI engineer in two hours or two days, I don't know, all of these things. And you make the point that a lot of people's decisions around that seem to be driven, to a large extent, by the fear of missing out.
And I have always had this feeling, which I guess we share. But what's interesting is that lately the tide, in terms of public perception, at least in the parts of the public that are more advanced and more exposed, let's say, to the latest research findings, is starting to change. So, for example, recently we had some results published which basically say something like: well, you know, there's been lots of noise, lots of hype, but the actual research that we've done in the field doesn't seem to corroborate the promises about super productivity. Or: most projects tend to be stuck in the proof of concept phase, and most of them fail.
    
So should the decisions that we make, as private people and organizations, be driven by promises and fear of missing out? I don't know.
I mean, I agree with everything you said. And I think, to answer your questions, we don't even have to look at the AI discourse. It is a narrative tactic that big tech has deployed in so many different fields, right? This is not the first time that they want to kind of drive us towards: you have to adopt this or you will ultimately be left behind, on the labor market, as a company, as a researcher, maybe even just as a family. Like, if you don't live up to the modern technology, how are your kids ever going to succeed in school? There's so much fear mongering. And for me, you know, I come from tinkerer and hacker culture. For me, technology is ultimately enabling. It's ultimately something that is fascinating, fun, that I want to understand. And the whole fear of missing out narrative is cutting all of these things short. It doesn't leave any space for the fun, for the joy, and also for the thorough understanding and for the valid criticism. And whenever any organization, any company, any sales pitch wants to put me under pressure, my alarm bells go off, because I think there is something they have to hide, because they don't want me to have a closer look. And I think, as you already said, we see that it's not as easy. We should have a closer look and come to a thorough understanding, and kind of an agreement, of what kind of technology we want to use, in what place, to what end, and also at which cost, because these technologies come at a cost. And we don't assess that thoroughly if we follow the fear of missing out, because then we think not following the whole hype is the ultimate cost. And I would say it really isn't.

The one danger that I really see, and it also connects to the first criticism around the whole AI narrative, AI as a placeholder, as a vague idea, is that I think it catches really well with policymakers. Policymakers are now so responsive to: let's fund the whole AI sector. When actually maybe our social services would need some more money, or preventative work, you know, street workers, would actually deserve better payment. And to invest these incredible amounts of money into a tech sector that, of course, does deliver tools, I'm not negating that, but to invest in it on the promise that they might come up with solutions for the problems we face today...
    
That is actually a kind of wager that I wouldn't follow through on. And that is the problem of fear of missing out: we bet our money on causes that we haven't fully understood.
I think there's a name for what you just described, betting everything on the vague promise that more technology is the answer, whatever the question is. It's called solutionism.
Absolutely, yeah. And that is also not new, right? I mean, we've come across it for years now. And it is a problem of the tech industry, always overselling their products. This is not specific to AI. It's just the next installment that we're seeing.
Yeah. However, you did also mention, and I don't think any reasonable and knowledgeable person can deny it, that there are also the shortcomings you criticize, which we didn't actually refer to so far: things like exploitative practices, underpaid workers, the environmental effects, copyright issues, and all of these things that I guess by now every person with even a vague understanding of the AI narrative must be familiar with. So there's a bunch of negative side effects that come with that. However, all that said, we still cannot deny that in some circumstances, for some use cases and under certain conditions, let's say, these tools can bring some utility. And I think this is the core of your thesis: we're not blindly saying no, but we are saying we must make an informed choice. So where do you draw the line, and how do you get to the point where you say: okay, this I think should work, this makes sense, this we can adopt, and the rest we're not interested in?
I would say there are two factors in making these decisions, and one of them is a value-based perspective on the whole topic.
    
And that is, you know, as a feminist organization, we do have responsibility for all the things we put out, like the texts, the images that we create. We want them to convey our values, and also the topics and the content that we work on, very clearly, very distinctly. And I just think we cannot do that right now with AI. AI as a probabilistic model, and I'm talking about Gen AI here, obviously. See, I'm stepping into my own trap here.

So when we talk about the capabilities of Gen AI, there are some things, like a quick translation, that help. For example, my French is very, very rusty, so it helps if I can translate a text from French into English or German very quickly, to get a gist of what's in there and see if I need to dig deeper. And of course I use that. But for our results, our work results, to present them in languages that we don't have the capabilities for within our team, we actually work with professional translators. Translators who are aware of discriminatory language, who understand where we are coming from and will not just translate the things we have written word for word, like an LLM probably could, but who also understand what we do not want to say. Why we do not, for example, speak so much about citizens: because citizenship comes with certain rights, and many people are excluded from citizenship. So we don't use that word, but LLMs try to sneak those words back into our text, and then we have to go through and cut them out again. And that is actually more work than working with a translator who just knows their craft. And they do amazing work. It's similar for design and also for art, illustration. So when it comes to translating our values into the products that we publish, we work with experts, we work with artists. And that is where we, as an organization, draw the line. And I think if you look at our channels, at our website, you can see the look and feel that we have in our communications. It is very distinct. You don't have to like it, but it's true to our nature. And I think that is a value in itself, right? And the money that we spend on working with these experts, because obviously we pay them, is money well spent. I stand for that. That is our decision, our internal decision, and we're happy about it.

The other thing, the other reason why we draw the line, is of course compliance. We work with data from partners, and funneling those potentially into LLMs, into applications where we don't know what exactly happens with the data, means we kind of lose control. Coming from a data protection perspective, that is just sheer horror for me, and my team would agree.
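As a side note, the terminology clean-up described here, cutting words like "citizens" back out of generated or translated text, is the kind of check that can be partially automated. A hypothetical sketch (the word list and function are my own invention, not SUPERRR's actual workflow):

```python
# Hypothetical sketch: flag terms an organization has decided not to use,
# e.g. because they exclude people. Entries below are made up for illustration.

import re

# Example do-not-use list with suggested replacements.
STYLE_GUIDE = {
    "citizens": "residents",
    "mankind": "humanity",
}

def flag_terms(text: str, guide: dict[str, str]) -> list[tuple[str, str]]:
    """Return (found_term, suggested_replacement) pairs for human review."""
    hits = []
    for term, replacement in guide.items():
        # Whole-word, case-insensitive match so 'Citizens' is caught too.
        if re.search(rf"\b{re.escape(term)}\b", text, re.IGNORECASE):
            hits.append((term, replacement))
    return hits

draft = "Citizens should benefit from digital services."
print(flag_terms(draft, STYLE_GUIDE))  # [('citizens', 'residents')]
```

A check like this can only flag candidates for review; deciding whether a term is acceptable in context is exactly the judgment the human translators are relied on for.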
                                         
Okay, thank you. I think both are equally valid points. The second point is, I think, pretty clear, so I don't really have much to add there. But I do have something to add on the first point, which I would categorize, or summarize, under finding your own voice, plus the work equilibrium, sort of, in a way. And again, I would point to some very recent research results that were published. So the kind of results that are generated by Gen AI models, well, some of them can be interesting. Some of them may look kind of glossy, but be very, very misleading and riddled with hallucinations. So what people are coming to realize is that you may get a result out of these models that looks kind of okay at first glance, but then, if you scrutinize it a bit, you find out that you need to correct lots of points. A similar process to what you mentioned you would need to do to get a translation that is precisely true to the voice you want to adopt as an organization. So what people are finding is that this trade-off doesn't really work, and maybe it's actually better to skip the AI involvement in this process and just do it the old-fashioned way from the beginning, involving human experts, as you also mentioned. Because, in addition to a kind of dubious trade-off in terms of value for money, if you want to put it that way, there's also something else that gets lost within this process if you decide to outsource it: the process of learning. By doing the work, you engage with whatever it is that you are working on, and you improve as a professional, as a human being, in the end. If you outsource it, something is definitely lost, and you are kind of stuck if you decide to do that.

Absolutely, I couldn't agree more. Coming from Germany, you might have heard that the state of digital affairs in our public sector is horrific. We have basically very few digital services, they don't work very well, and this is due to the fact that all the digital knowledge, all the capacity, has been outsourced, you know, to consultancies. So this isn't even an AI problem, but it just showcases that outsourcing the core expertise and core competencies that you need to fulfill your goal, your purpose, as an organization or even as a person, just doesn't get you closer to the goal. I know it might feel like it works in the short term, but as humans, I love the idea that we are playing and learning beings for all of our life, basically. And to take that away from us, it feels like we are limiting ourselves to a degree where we don't have to.
I agree. Which, since we are pretty much in agreement that this is the kind of stance we think is the right one in terms of AI adoption, brings us to the point: fine. You could argue that between the two of us, we are in a position to make that judgment, because both of us have been exposed to the technology, because we do understand how it works, and therefore we can make this informed judgment. But beyond our bubble, beyond the people who are technologically savvy and literate, this doesn't necessarily apply. So for others to be able to make that judgment for themselves as well, we need something that has come to be known by the name of AI literacy. And interestingly, it's also part of actively enforced regulation. There's an EU-wide regulation called the EU AI Act, which is valid from this year, 2025. One of its clauses basically says that any organization involved in using or deploying AI systems, and that would mean the majority of organizations today, has to make sure that its employees are AI literate. It doesn't really give much qualification to what this literacy may include and what it means in practice, but it does set at least a certain requirement, let's say. So my question to you is: what do you think actually constitutes AI literacy, and how would we possibly get there?

Yeah, it is a great question.
    
Especially because the whole literacy concept is also not new in the digital field, right? We used to talk about data literacy, we used to talk about digital literacy, and now we talk about AI literacy. And I think there are some shared takeaways from these different discussions. First of all, what would constitute AI literacy for me personally is hard to spell out, because I'm a bit at odds with the concept. It puts a lot of pressure on the individual, and I'm not sure that, at least the way it is conceptualized now, is ultimately fair. Because the first step, to be AI literate, would be to be able to see whether, or what kind of, AI you're using.
    
And can you do that? Can you look behind an application? Can you see what actually runs? No, because these things are proprietary, because they are closed-box systems and you cannot look into them. We have an accountability problem on the provider side, on the sales side, that now, with AI literacy, individuals and their workplaces are meant to compensate for. And I think that is actually setting them up to lose. I'm also not sure it is bringing us forward towards AI being deployed in a more reflected, rights-conforming way.
If we set that aside: in an optimal world, people could see how an AI system works, could understand whether something is an AI system or whether it isn't, because so many tech things now get labeled as AI when they really aren't. So let's assume that they can look into a tool, or that it's labeled in a way that shows the degree of automation, or whether more free, stochastic outputs are generated. If there was some sort of number attached to the product that you use, I think people could act accordingly. But that, again, calls for transparency on the tech side, on the seller's side, basically.
                                         
                                        On the other hand, I do think that, and maybe that is again because we talk about AI

                                        and not specific things, that AI,

                                        you know, the different methods and the different approaches from the whole artificial intelligence research field,
                                         
                                        They're so interesting.
                                         
                                        And I would love for people to understand more, like what is behind that.
                                         
                                        And I feel that to be somewhat AI literate, people have to want to learn more about actual statistics.
                                         
                                        Because I think that's also where we are already lacking as a society.
                                         
    
                                        Like, what is the mean? How do you even calculate that? You know, sometimes

                                        it's the very fundamental things. And if the statistics aren't even clear, like probabilities and these

                                        things, how can we even address AI? For me it's a more fundamental competence gap, I'd say, that we would

                                        have to close, or, you know, alleviate in some way, before we come to actual AI literacy.
                                         
                                        And then again, I somehow doubt it, you know. Let's see, I'm German, so I have to talk about cars at some point, right?
                                         
                                        We can get a driver's license.
                                         
                                        And, you know, for that, we have to understand basic functions.
                                         
                                        And, of course, like, mostly we don't really learn about functions of a car engine, but we learn about safety regulations around it.
                                         
    
                                        And that kind of makes us car literate.
                                         
                                        It makes us able to, and also, like, you know, legally able to drive a car.
                                         
                                        And I think that is more how I would approach AI literacy.
                                         
                                        It's not about necessarily understanding all of the underlying technology and data and concepts,
                                         
                                        but it's about knowing which questions to ask,
                                         
                                        knowing what to look for to make sure I can use it and deploy it in a safe way
                                         
                                        so I don't harm myself and I don't harm others.
                                         
                                        Okay, yeah, it's an interesting approach.
                                         
    
                                        And also, again, a valid point that you raised initially.

                                        So the fact that it's very hard to actually distinguish,

                                        because of all the misleading marketing that's going on,

                                        makes it important to actually begin with the first pillar,

                                        which is distinguishing whether you are dealing

                                        with an AI-powered product or service or not.
                                         
                                        Because on many occasions, you may get both cases,
                                         
    
                                        like something being labeled as AI when it's really not,
                                         
                                        and also something that's actually using some kind of AI model underneath,
                                         
                                        and it's totally opaque, and you would never guess otherwise.
                                         
                                        Which one do you think is worse?
                                         
                                        That's a good question.
                                         
                                        I would probably say that the latter is worse.

                                        When your data is going somewhere, and decisions

                                        that may actually influence your life,

                                        like, I don't know, credit scores or

                                        whatever other important decisions like this, are being made by an algorithm, you should have the

                                        right to know it.
                                         
                                        And I think it can be more harmful than, you know, being sold something as AI when it's
                                         
                                        actually not.
                                         
                                        True.
                                         
                                        I would agree.
                                         
                                        So, one more concept on the topic of AI literacy.
                                         
    
                                        Again, I agree.
                                         
                                        Before we step into the arena, let's say, of AI literacy,

                                        there are certain preconditions, and I think data literacy is clearly one of those,

                                        along with things like, you know, statistics and the topics that you mentioned. Because it's a topic that

                                        I have developed a very strong personal interest in, and I have also developed my own syllabus trying to

                                        cover that, to address requests for AI training, I tried to do a little bit of literature

                                        research, and one framework that
                                         
                                        I found, which I really like, because I think it has a layered approach, it's called the
                                         
    
                                        six pillars of AI literacy.
                                         
                                        And actually it starts with precisely what you mentioned, so recognition.
                                         
                                        So the pillars that it mentions are recognition, so the ability to differentiate between
                                         
                                        tools that utilize AI and tools that don't.
                                         
                                        Then there is knowing and understanding, so the ability to understand fundamental concepts,
                                         
                                        then using and applying.
                                         
                                        so the ability to use these tools and achieve results with them; then evaluating; navigating

                                        ethically; and, at the top of the pyramid, creating. And I don't know if you have,
                                         
    
                                        obviously you must have had some kind of training yourself because you mentioned that you've
                                         
                                        participated in projects. I don't know if you have attended courses or even potentially
                                         
                                        delivered courses yourself, but what I found very interesting in my
                                         
                                        experience as a tutor is that creating, I find, is the one part that is more decisive

                                        than anything else, because you can talk, you know, about theory, you can talk about,
                                         
                                        you know, statistics, probability and frameworks and ethics and whatever. But when you get
                                         
                                        people to actually train their own model, then everything that was abstract before

                                        gets really concrete, and then they get to see: aha, so this is how it works. And,
                                         
    
                                        you know, this is what I should be aware of.
                                         
                                        I wonder if you have similar experiences as well.
                                         
                                        Absolutely.
                                         
                                        I mean, again, like moving away from the specifics of AI,
                                         
                                        coming at it just as, you know, a person who enjoys technology.
                                         
                                        And I think it applies to a lot of, you know, different applications.
                                         
                                        You know, do you want to better understand electronics?
                                         
                                        You know, like it would be similar, you know, read up on it,
                                         
    
                                        try to understand the basic functions.
                                         
                                        Then, you know, try to, kind of,

                                        play with it, in the sense that, you know, if I take something away, does it still work?
                                         
                                        How would it, you know, work differently?
                                         
                                        And also be able, I think what I like in the six pillars is that evaluation comes before
                                         
                                        the creation, which I think is actually very important.
                                         
                                        So also to know what to look for. Like, how do I actually measure, not

                                        necessarily my progress, but whether something works? Like, what tells me whether

                                        something is still working or is it not working?
                                         
                                        Like, how would I define an output?
                                         
                                        How would I classify an output?
                                         
                                        And I think that is also very interesting because it kind of forces you to flip the coin
                                         
                                        on the whole system.
                                         
                                        You don't look at the outputs, but you start making assumptions about the outputs
                                         
                                        before they actually happen.
                                         
                                        And I think that is actually when you get really empowered.
                                         
    
                                        And it prepares you well also for asking, as you said, the ethical or maybe even legal
                                         
                                        questions, right?
                                         
                                        Not everything is ethics.
                                         
                                        There's laws around this.
                                         
                                        And then creating, of course.
                                         
                                        I mean, that's the fun part, right?
                                         
                                        Yep.
                                         
                                        So with that in mind, and kind of recapping everything you've mentioned about the organization and its mission,
                                         
    
                                        and also the position that you have on AI, do you see yourselves stepping into AI literacy,
                                         
                                        whether for internal purposes, to educate people on what can be useful for you,

                                        but also potentially for compliance.

                                        I mean, the AI literacy mandate probably also applies to organizations like yours.
                                         
                                        And that's part A of the question.
                                         
                                        And part B, do you think that engaging in creating your own courses to empower others
                                         
                                        would be something that would fall within the scope of what you're trying to do?
                                         
                                        You mentioned that you are working on these courses yourself

                                        and also have come up with a syllabus and coursework.
                                         
                                        And I know like you, there are very capable people out there who are working on this.
                                         
                                        We at SUPERRR, we always try to find kind of a niche

                                        where we can step in with the capacity

                                        and with the competencies that we have

                                        and fill a need, you know.

                                        We don't want to compete with people

                                        who already offer something that is very good.
                                         
    
                                        So I would describe the niche that we can imagine stepping into.
                                         
                                        We're not necessarily doing that now,
                                         
                                        but this is more of a prospective thing.
                                         
                                        We try to work with people to imagine futures they want to live in.
                                         
                                        And what would those futures feel like?

                                        How would life look?

                                        What kind of applications would they have in there to enable them to live that life?
                                         
    
                                        So we use futures thinking and futures methods to come up with a vision that we want to work towards.
                                         
                                        And this is something that we can offer.
                                         
                                        So not necessarily AI literacy, but futuring, and then questioning what role we want artificial intelligence,

                                        or, you know, just technical, digital applications in general, to play in these futures.
                                         
                                        And that is another skill set, actually, because we are so used to, you know, living in highly
                                         
                                        structured environments that do not really give us a space to be creative, to kind of break free
                                         
                                        from the things that we have learned and imagine something radically new that we then can
                                         
                                        figure out whether we can actually reach it and work towards it.
                                         
    
                                        So this is actually what we want to work on more.
                                         
                                        It's futures literacy, we call it.
                                         
                                        So again, like we use the term literacy here as well.
                                         
                                        And of course, AI plays a role in that
                                         
                                        because right now hardly anyone can imagine a future without AI.
                                         
                                        And what we do is question, yeah, but what for?
                                         
                                        You know, like it doesn't have to be there.
                                         
                                        You can make the decision.
                                         
    
                                        And what would your decision look like?
                                         
                                        Okay, I think that's a great approach and yes, I do agree that most of us are used to kind of short-term thinking and, you know, getting from where we are to the next day, possibly, but not kind of envisioning like, okay, eventually we would like to get there and then kind of retracing back, like, okay, how do we get from where we are to where we want to be?
                                         
                                        So I think that's a very valuable lesson that one can learn.
                                         
                                        Okay, so I think with that in mind, you have kind of bridged the gap.
                                         
                                        So starting from the use of AI and your position in it towards your strategic mission.
                                         
                                        So obviously for you AI is one tool that you could potentially use in your broader vision,
                                         
                                        not something that you want to focus on.
                                         
                                        So what's the next step that you are taking as an organization
                                         
    
                                        towards your strategic goal?
                                         
                                        So the idea was, I write this post, we publish it,
                                         
                                        and then nobody asks us about AI ever again.
                                         
                                        That did not work out.
                                         
                                        I can tell you that backfired heavily.
                                         
                                        So yes, we will continue working on the broader topic
                                         
                                        of artificial intelligence and its impacts
                                         
                                        on social equality and also global equality.
                                         
    
                                        But on the nearer horizon are more fundamental matters of social cohesion,
                                         
                                        especially when we talk about, for example, hate speech or, you know,
                                         
                                        like the protection of children online.
                                         
                                        I think those are very important topics, not connected at all to artificial intelligence.
                                         
                                        It's more nitty-gritty; it involves humans much more, and their disregard for each other.
                                         
                                        And we work on these topics, but we are also critical of employing technology to solve them.
                                         
                                        So we think more about, again, if you want to have a future in which we all can thrive online
                                         
                                        with the technology that surrounds us, what would we have to put in place, not, you know,
                                         
    
                                        just to, whenever anything bad happens, then be able to, you know, send in the police, and

                                        then nothing ever happens again, you know. Because deterrence does not work,

                                        we know that. But what kind of safeguards can we come up with, what kind of designs can we

                                        come up with for digital systems, online platforms, all the technology that surrounds us,
                                         
                                        to make it actually work in our interest. So it's very much a low-tech question, I'd say,
                                         
                                        but honestly, I think low-tech has a charm in itself.
                                         
                                        I do, I do agree with you. And thanks a lot for actually

                                        bringing to the public at large an example of an organization that does not lose sight of its
                                         
    
                                        core mission for things that kind of get in the way, even though the piece, as you say,
                                         
                                        in a way, backfired. Because it sounds like your intention was more like, okay, so this is
                                         
                                        what we do about AI. Please don't bother us. I guess people in a way kind of bothered you, but I
                                         
                                        think you can serve as a good example for others to follow. So I think, you know, all things
                                         
                                        considered it was a good kind of backfiring. And bothering in the best way possible. So thank
                                         
                                        you so much for inviting me. Thanks for sticking around. For more stories like this,
                                         
                                        check the link in bio and follow Linked Data Orchestration.
                                         
