Tech Over Tea - Filling An Important Void On Linux | Cuperino
Episode Date: December 6, 2024

Today we have Cuperino on the show, the developer of QPrompt, an open source teleprompter written in Qt and available on Linux and other platforms; beforehand there weren't really any options available.

==========Support The Channel==========
► Patreon: https://www.patreon.com/brodierobertson
► Paypal: https://www.paypal.me/BrodieRobertsonVideo
► Amazon USA: https://amzn.to/3d5gykF
► Other Methods: https://cointr.ee/brodierobertson

==========Guest Links==========
QPrompt: https://qprompt.app/
Github: https://github.com/Cuperino/QPrompt-Teleprompter
Patreon: https://www.patreon.com/qpromptapp

==========Support The Show==========
► Patreon: https://www.patreon.com/brodierobertson
► Paypal: https://www.paypal.me/BrodieRobertsonVideo
► Amazon USA: https://amzn.to/3d5gykF
► Other Methods: https://cointr.ee/brodierobertson

=========Video Platforms==========
🎥 YouTube: https://www.youtube.com/channel/UCBq5p-xOla8xhnrbhu8AIAg

=========Audio Release=========
🎵 RSS: https://anchor.fm/s/149fd51c/podcast/rss
🎵 Apple Podcast: https://podcasts.apple.com/us/podcast/tech-over-tea/id1501727953
🎵 Spotify: https://open.spotify.com/show/3IfFpfzlLo7OPsEnl4gbdM
🎵 Google Podcast: https://www.google.com/podcasts?feed=aHR0cHM6Ly9hbmNob3IuZm0vcy8xNDlmZDUxYy9wb2RjYXN0L3Jzcw==
🎵 Anchor: https://anchor.fm/tech-over-tea

==========Social Media==========
🎤 Discord: https://discord.gg/PkMRVn9
🐦 Twitter: https://twitter.com/TechOverTeaShow
📷 Instagram: https://www.instagram.com/techovertea/
🌐 Mastodon: https://mastodon.social/web/accounts/1093345

==========Credits==========
🎨 Channel Art: All my art was created by Supercozman
https://twitter.com/Supercozman
https://www.instagram.com/supercozman_draws/

DISCLOSURE: Wherever possible I use referral links, which means if you click one of the links in this video or description and make a purchase we may receive a small commission or other compensation.
Transcript
Good morning, good day, and good evening.
Okay, you didn't say you were going to do this.
Welcome back to the show.
I am, as always, your host, Brodie Robertson,
and today we have the developer of QPrompt on.
I don't exactly know what he's doing.
Introduce yourself.
Yes.
So you have managed to hijack the channel. Is that correct?
Good.
This is the choice of Steins;Gate.
Yes!
Hello, Brody.
That would have worked much better if Discord didn't cut out
that entire last bit where you were making noise.
The Discord audio suppression just completely killed that.
Oh.
It happens.
Yeah, it is what it is.
Well, let people know who you are, what you do,
and then we can get into the software, I guess.
All right.
Well, I'm Javier. I go online
as Cuperino.
I'm the developer of QPrompt
and Imaginary Teleprompter.
Yeah.
Software developer,
work for KDAB, mostly
developing Qt software
and also doing trainings.
Mm-hmm.
So, I guess we can just get right into the software itself.
So I'm sure a lot of people have probably heard about QPrompt,
because I know there's been, like...
It's FOSS has done an article at one point,
DT's done a video,
there's a couple of other videos out there.
So even if a lot of people haven't used it,
I'm sure they've at least like come across the name.
But what is the software?
We're going to talk about why you have the two different ones as well and how that came about.
All right.
So in the media production space, there are various specialty kind of programs, applications that people use.
And one of them is the teleprompter.
The teleprompter is a device that is typically used to show text on screen.
And that text will be displayed in front of a video camera, like this one, and the idea is for people to be able to look at the camera and, at the same time, be able to see the text that they're supposed to be reading.
Right now I'm using a teleprompter not for the purpose of reading text, but for the purpose of looking at you while I'm looking at the camera. So yeah, multiple purposes.
So what, before you made QPrompt,
what was actually like available in the FOSS world,
if there was anything available?
And I guess with making QPrompt,
I'm sure you didn't just make it in like a vacuum without knowing what any other teleprompter software did.
So if there was something you maybe took inspiration from, I would like to hear that as well.
Sure. QPrompt, to get the full story, we have to go back to the first teleprompter software that I wrote.
And that one's called Imaginary Teleprompter.
Back when I was in university, I was doing a combination of two degrees,
mass media communications and computer science. While I was in one of those degrees,
a friend and I wanted to go to a hackathon and we were looking for project ideas, things that we could do. And during my television production class, I realized that there was a need in the
open source space for a teleprompter app. All the free applications that were available were quite
bad. This was around 2015. And the one that we were using at university, it cost $300
at the time.
Do you recall what one that was?
US dollars.
No, do you remember what the software was called?
Oh, that one, I think it was called FlipQ. And it did serve as an inspiration. Particularly, there is one feature that all of my teleprompter programs have,
which is that the text, it doesn't increment the speed linearly.
As you increase the velocity, it follows a curve,
and that curve grows exponentially.
And the reason for that is that the faster the text is going,
the less susceptible you are to the speed increments.
So for you to feel like it's actually going faster,
I have to increase it by twice the speed, much more than what
I would have at the start of the scroll.
So I learned that by using that other software.
And I tried to copy its formula for acceleration
as close as possible.
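For readers curious what that kind of curve looks like in code, here is a minimal illustrative sketch of an exponential speed mapping. The function name and constants are made up for the example; they are not QPrompt's or FlipQ's actual formula.

```cpp
#include <cmath>
#include <cstdlib>

// Map a linear velocity control (e.g. arrow-key increments) to a scroll speed.
// The curve grows exponentially so that each increment still feels like a
// noticeable change even when the text is already scrolling fast.
// 'base' and 'scale' are illustrative constants only.
double scrollSpeedForStep(int velocityStep, double base = 1.25, double scale = 2.0)
{
    if (velocityStep == 0)
        return 0.0;
    const double magnitude = scale * (std::pow(base, std::abs(velocityStep)) - 1.0);
    return velocityStep > 0 ? magnitude : -magnitude;  // negative steps scroll backwards
}
```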
Other than that, it was pretty much original.
I try not to take inspiration from other programs
when people go into GitHub and submit feature requests.
I'd rather have them explain to me their problems and what are they trying
to achieve to find a solution from there.
Because what I realized, and also there's a bit more history to this, but early, close to QPrompt's release, I was taking also a course called iCorps, which teaches people how to do customer research and essentially find what are the problems that people need to be solved.
And so by understanding the problems first,
you are actually able to come up with better solutions than by simply copying what another software is doing.
Right.
So rather than just taking what something else does
because that other piece of software does it,
you want to know,
okay,
there's a reason for why you want this
because you can look at things like,
obviously I'll talk about Xorg and things like that,
but Xorg,
you can use a printer
as a display.
Like, why do you need that? Is there a reason for this feature?
Are you just doing it because it's something that other things have done?
Right.
In the case of Xorg, you know, there's a lot of historical context to it,
which brought about all of that flexibility.
But is it really necessary to have a printer display? Is it really
necessary to have a math equations editor inside the teleprompter?
Yeah. Sometimes we do things because we can, but if we want to
develop software that is really well aimed towards people's needs, it's best to start from the need itself.
Right, right.
And so, actually, before we get any further, I kind of want to explain my jank teleprompter setup.
Because I don't use QPrompt myself.
But it is a really cool program.
And I probably should use it.
Um, so the way that I
have things set up right now is
I have a vertical monitor
behind my main
monitor. So you can only see the top half
of the monitor. Um,
and I have my Vim
wiki open and then
I just occasionally scroll
through lines manually in that
and that does the job.
Like, I probably should put it into QPrompt and then just, you know, have the scrolling be done
like automatically for me, but I don't script things out fully. I just have
like basically
dot points I want to hit on. So sometimes I rearrange them, sometimes I expand on them more than I think, sometimes I
throw in like extra points in the middle if I think of something on the fly. So at least for me
having
like, a teleprompter in the... like, using it in the way that most people would traditionally use it doesn't really fit with the way that I'm trying to do things. But I can definitely see the value in there. Even for my use case, I think there are possibly ways I could make it work, albeit it might not be the most sensible way to use it.
Right. Um, one of the things that I try to achieve with QPrompt is to make it flexible, so that people can adapt it to their needs. I incorporate a lot of features from professional teleprompting environments, and that has attracted an audience that is very much aimed towards live event production.
But when designing it,
and the reason why I, for example,
chose the Kirigami framework,
which is a KDE framework to make QPrompt,
is because I wanted to target the individual YouTubers,
the people who essentially are using the prompter on themselves, not
having a dedicated prompter operator.
So it's a bit of a mix, like trying to bring those features to the masses and making it
more flexible that way.
There's a feature that is an opacity slider. If you move that opacity
slider, you will see that it makes the background semi-transparent, but the text is still fully
solid. So that allows you to use it, for example, in a video conference, which is super convenient.
And in your case, I can imagine setting up a bunch of...
This will be more like in the future
because I haven't finished adding this feature.
But a feature that I want to release at some point,
hopefully next year,
is some level of automation control
so that you can script your automations in QPrompt itself.
Say you add a bookmark anywhere along the script, and then, when that line is crossed in the section that you're supposed to be reading, you will trigger something in OBS, and maybe have a camera angle switch at the moment that you are hitting the line where you're supposed to be saying something very specific.
Stuff like that.
It's meant to be as flexible as it can be, but at the same time be fairly simple to use, which is a hard combination to achieve.
But that's like the goal.
But not everyone needs a teleprompter.
In doing that early research to understand the needs,
one of the things that I realized is that a lot of people,
in a way, I'm competing against memorization.
And I'm competing against simply reading off-camera.
A lot of YouTube channels, they don't display their faces, people talking.
They just do a bunch of visuals.
You can read everything from a piece of paper there.
So overall, I do think that you should use whatever best fits your use case, and the tools should just be available for anyone who wants them.
You mentioned scripting there.
What language would that be being done in?
Well, so far, as far as I'm aware, with OBS it will have to be WebSockets. I was thinking of also doing, like, HTTP requests. As for programming it, it will be the very language that QPrompt is developed in, which is a combination of C++ and QML.
Okay, okay, fair enough.
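For a rough idea of what that could look like, here is a hedged sketch of sending a scene-switch request over an obs-websocket (v5) connection from Qt/C++. The Hello/Identify handshake is omitted, and the request ID and scene name are made up; this is illustrative, not QPrompt code.

```cpp
#include <QJsonDocument>
#include <QJsonObject>
#include <QWebSocket>

// Assumes 'socket' is already connected and has completed the obs-websocket
// Hello/Identify handshake. Sends a request to switch the active program scene.
void switchScene(QWebSocket &socket, const QString &sceneName)
{
    QJsonObject request{
        {"op", 6},  // op 6 = Request in the obs-websocket v5 protocol
        {"d", QJsonObject{
            {"requestType", "SetCurrentProgramScene"},
            {"requestId", "qprompt-marker-1"},        // made-up request ID
            {"requestData", QJsonObject{{"sceneName", sceneName}}}
        }}
    };
    socket.sendTextMessage(
        QString::fromUtf8(QJsonDocument(request).toJson(QJsonDocument::Compact)));
}
```

In practice a bookmark crossing in the script would call something like this with the scene configured by the user, e.g. `switchScene(obsSocket, "Camera 2")`.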
Yeah, with OBS, because there's WebSockets, basically anything that's making web requests, you're pretty much good for.
Yeah.
It's just how easy do you want it to be to actually work with, so that people can actually effectively use the feature, I guess.
Right, right. There's still a bit of design to be done to figure that part out, but yes.
So why is it that there are these two different teleprompters? Why is Imaginary Teleprompter not, like, the project that is still the main thing?
Yeah, that's a long story.
But essentially,
imaginary teleprompter came first because I saw this need
for a teleprompter software
that was open source.
And then eventually,
as we kept adding features,
we started hitting blockers, things that were preventing us from adding the features we wanted, due to limitations with the technology. Imaginary Teleprompter is a web app. It's a simple web page that runs in a browser, made to run locally with Electron. It's as simple as that. The code is very spaghetti-like, to be honest.
It was my second public release.
The thing is that
eventually
it became clear that we needed to switch
technologies. We needed to move to something else.
And the person who created Imaginary Teleprompter with me,
10X developer Victor Ortiz, a great person,
he and I started a company after Imaginary Teleprompter
and after we graduated.
And in this company we
offered services, a consultancy, developed a lot of stuff. But eventually... we weren't doing very well when it came to economics, and eventually, when the pandemic hit, we essentially went on hiatus and then eventually decided to close. So during that time, because of the economic difficulties,
our relationship wasn't at its best.
And the future of Imaginary Teleprompter actually became uncertain.
I can honestly say this.
The only reason Imaginary Teleprompter isn't closed source today
is because we chose the GPL and because we chose CKEditor,
which is also GPL, as our editor inside Imaginary Teleprompter.
So that acted as something that would prevent us from taking a different direction.
And it was intentional. Yeah, so it's a good thing to sometimes pick a GPL library.
Anyways, eventually, each one of us went our own way.
And I had a lot of free time during the pandemic.
And in that time, I thought, you know, this is a good time to start writing that software.
And coincidentally, at the same time that I'm having that idea and that I have decided that I'm going to essentially move forward into a different direction,
this company called KDAB started releasing a series of videos on QML trainings and they start posting
all of these videos in their YouTube channel and they go from the very basics...
Oh yeah, I think I've seen one of these, yeah yeah yeah.
They're really good, but disclaimer, I work for KDAB nowadays, so I'm biased.
I don't, so I think they're good videos.
But back then I didn't.
Back then I was just learning QML.
I had used Qt before, I liked it a lot,
but only the widget side of things.
And suddenly I see this great resource
on how to develop with QML, and I realize, you know, this is exactly the kind of technology that I need to develop the teleprompter app.
And then the next thing I did was start to make QPrompt.
So you had some experience with Qt before that. It wasn't like you were just learning it for QPrompt.
Right. I was only learning the QML language for QPrompt. Qt I had used previously in two applications. One was a personal project, a very simple thing called Dual Print, which is essentially: you give it a set of parameters and it generates two lists that you can copy-paste into your printer app's configuration settings, and then have it print all the pages that belong on one side and then all the pages that go on the other, even if this is an ancient printer that does not support duplex printing. So that was the first app that I did with Qt.
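As a rough illustration of the idea (not Dual Print's actual code or parameters), generating the two page lists for manual duplex printing could look something like this:

```cpp
#include <QString>
#include <QStringList>
#include <iostream>

// Given a total page count, produce the two page lists for manual duplex
// printing: print one list, flip the stack, then print the other.
// Whether the back sides need reversed order depends on the printer;
// this sketch keeps both lists in ascending order for simplicity.
void manualDuplexLists(int pageCount)
{
    QStringList front, back;
    for (int page = 1; page <= pageCount; ++page)
        (page % 2 ? front : back).append(QString::number(page));

    // Paste these into the "print pages" field of the print dialog, one run each.
    std::cout << "Front sides: " << front.join(",").toStdString() << "\n";
    std::cout << "Back sides:  " << back.join(",").toStdString() << "\n";
}
```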
And then the second one was meant to be a follow-up to that, very close to what BooMaga ended up doing,
which is a really cool application also made with Qt
that it's meant to generate booklets.
It acts as a virtual printer, and you send your print to it,
it generates the booklets, and then you send it to the actual printer.
So I was doing something like that,
but then instead of focusing solely on Linux like they did,
I wanted to make it a cross-platform.
And that was what essentially killed it
because I wasn't ready at that point to do that
with that level of complexity, drivers and stuff. But, uh, yeah, those were my first two Qt projects.
So it sounds like you've always had sort of an interest in, like, the media side of things, right? You did mention you were doing, what was it, computer science and what was the other degree?
Mass media communications.
Right.
How did that come about?
Like, that's a weird combination of degrees.
Yeah, it is.
It comes from the fact that I was struggling very hard to decide what I wanted to do with my life.
I'm the kind of person who likes many things. I enjoy doing many things.
And eventually it boiled down to two things that I'm really, really good at and that I really, really enjoy, which are video editing and software development.
But that aside, I was having a really hard time choosing what to major in or what to actually study.
And in the end, when it came to choosing,
I ended up choosing computer engineering,
and I hated it.
But I hated it because the university where I was studying that
was very focused on the hardware side of things,
and I wasn't really into electronics. I played with them, I dabbled in it, and I do enjoy doing that, but it's not something that I'm very passionate about. And, um, so I then, in those university years, decided to switch to mass media communications, because it was the closest thing to filmmaking.
And while I was there at this new university studying mass media communications,
suddenly somebody tells me, but we have a computer science degree here.
And I was like, what?
And suddenly it was like, well, I need to try it out.
I need to try this because otherwise I will never know if this is something for me, if programming is something that I would like.
And fortunately, by that point, I already had an idea that I would, because I had started programming at a very high level with an open source application. It wasn't open source at first, but it was an application for visual software development. QPrompt, my career, and all of these things are due to content creators like you, because back then, this was like back in 2010, 2008, I was listening to the podcasts from Jupiter Broadcasting.
And they made the Linux Action Show and other various shows.
And in around 2012, Bryan Lunduke, who was on the Linux Action Show at the time, released this app called
Illumination Software Creator.
It's long gone, no longer developed.
He sold the rights, but the open source code is still GPL, so everyone can fork it if they
want.
But it's a very old codebase at this point, so it's not really worth the trouble in my
opinion. But it was very helpful to me
as someone who wanted to dabble into programming to see what could I do, and
that's how I ended up making the first version of Dual Print, which I later redid in Qt.
And thanks
to that, I realized, you know,
I like programming. I like
computer science.
Engineering, maybe not for me, but
definitely the
programming aspect, it absolutely
is. So
that led me to pursuing a double degree.
Why a double degree and
not just switch from one major to the other?
Um, I was determined to make sure that my family knew that if I finished the communications degree, I wasn't going to die of hunger. Um, yeah.
Yeah.
Yeah.
A communications degree does, uh, you know, there are people that use it, but, like, you know, a computer science degree, you know, that sounds a lot more reassuring.
Yeah, and it is, really, in practice. Um, a lot of my peers, I later found out... while I was doing a bit of research with I-Corps to create QPrompt, or to release QPrompt, I reached out to a lot of them, and I realized that many of my peers had, by this point, moved away from mass media communications and communications overall, and were focusing on other fields. And some of that has to do with the economics.
And that's the main factor.
And the other reason was because sometimes people found a job that they really liked while they were still in college.
And once they graduated, they ended up deciding,
you know what, this is pretty good, I'm going to stay here.
And it worked for them. So, yeah, a lot of people end up going in crazy different directions.
How was your computer science degree actually handled?
Because computer science can, like, mean a couple of different things. Like, it can be very theoretical, or, like, I did a software engineering degree and that wasn't at all hardware related. Like, in the entry-level courses, the first-year stuff, there was some generic stuff like networking, a bit of hardware, but later on it specialized purely into programming. So how exactly was yours done?
Um, it was mostly software, very theoretical. Um, we learned C++ early on, which ended up helping me a lot nowadays, but back then, you know, I thought it was outdated and that we should have been learning Python. I ended up learning that on my own; I absolutely loved the language. But, uh, yes, it was very focused on the fundamentals, which is very good for understanding how computers work.
But when it came to real-world preparation, I think we all felt it was a little bit lacking.
I had to complement that with courses from Coursera primarily in order to have more real world skills that I could use at my company
Imaginary Sense.
I've told this story on the podcast before, but when I was going through my classes, a lot of them... it's now part of LinkedIn Learning, at the time it was Lynda.com. A lot of my classes, they were just like, watch the Lynda.com videos and do what it says in that. Like, why am I paying you then? Why didn't I just get a Lynda subscription?
Yeah, it's so frustrating when they do that. It's like, okay, you are giving me a really good video,
but I would like to hear it from you, because then I can maybe interrupt you, ask a question, and get a relevant response to what I'm thinking right now.
For context, this is like... I was in university before COVID; my final year was COVID. So for the first three years they were doing this.
Oh wow. Oh wow, that's a lot.
Yeah. I had some good classes, but, I know it's weird, right? Like, when it's up to the lecturers to decide what the course content is going to be, you can get some incredible classes, and then there's some other ones where you're like, I don't entirely know, like, why it's like this. I feel like
my web development class, even though the tech we used was relatively outdated... we were using, at the time, it was jQuery, Angular, not, like, not AngularJS but Angular, and .NET. What is their, like, what's their, um, their web... .NET Core? No, it was before .NET Core. I don't... it was some .NET framework for doing stuff. And it's like, even though I feel like the content itself was, like, not... or, like, the tech we used was not great, I feel like at least we learned some important fundamentals
on how the web functions.
And then there were other classes where I'm just like,
what are we doing here?
What is happening?
I absolutely had that feeling with my...
I don't even remember the full name of the class,
but they were teaching us essentially
how to write the byte arrays
into floppies and hard disks.
Okay.
Oh, yeah.
It was very focused on
this is how you store on these ancient media formats.
And I'm like, okay, why do we need this?
That year was the last year that they gave that class, fortunately.
It was removed after that.
But yeah, university can be a little bit slow to catch up.
At the same time, as I was studying this degree,
my best friend was in a different university
and they were still teaching like Visual Basic 6 over there.
So yeah, it can always get worse.
Yeah, my degree, when I was going through it,
it was, you had a first year Python class
that was your introduction to programming.
And then after that, the majority of it was Java.
Because that university has a lot of military contractors.
So they love Java.
And also, what is it?
DXC?
Something.
There's a big company in Australia that hires tons of university graduates,
and they love Java as well. So that's what they were doing at the time. They've now migrated
most of the degree over to Python, which I think makes more sense when you're teaching
core concepts, right? If you're trying to teach how data structures work, like linked lists, binary trees, things like that, I think getting the language out of the way and just focusing on the core concept of what you're trying to teach makes more sense then.
Uh, I don't know anyone going through the degree now, but I'm, I would sort of worry that
going from Python, and in like third year,
there's the C++ course, like making that jump,
it was bad enough going from Java to C++,
but going from Python, two years of that,
straight to C++, like, I feel like a lot of people
are going to struggle with that course
even more than they already did.
Yeah, it can be quite a big jump,
especially coming from Python.
In that sense, I do nowadays, in retrospect, you know, hindsight is 20/20.
I do end up agreeing with my professors
at the time that C++ was a good language
to start with.
I am still not 100% decided, I will admit,
because I do think that there's value in understanding logic first before you start
turning that logic into machine code. But yeah, regardless, it's a matter of how you ease people into the, um, machine code eventually, you know, like, not take too long for that, but make sure that anything that is related to logic is well covered, because, in the end, that's what they're going to use the most.
It's not the mathematics, it's the logic.
Yeah, once you have those core fundamentals down,
yes, there are different languages you can use
and obviously if you want to start talking about
functional programming with Haskell and things like that,
your brain breaks then. But normally, if you're not talking about those languages, once you've learned those core foundational skills, it's just a matter of learning some syntactic differences between languages. Like, if you know C++ and you want to learn Rust, for example, yes, the borrow checker does weird things and it's a weird thing to wrap your head around, but most of the rest of the language just makes sense once you've got that syntactic knowledge down.
Absolutely, absolutely. That is definitely key. Um, I've been, not lately,
but I was learning last year Rust as well.
And I realized that my colleagues
who had more experience with C++
versus me who, you know,
did a bunch of Python after university
and a lot of JavaScript,
they were having an easier time
understanding the more complex stuff about the language.
But yeah, it is something that it is quite...
The more exposure you get to different programming languages,
the easier it becomes to switch from one to another.
I remember what language it was. I think it was when I started with Python.
I literally watched a one-hour video on how to Python and just hyper-focused
on all of those essentials and that's how I got started with it.
And it was quite easy for me
at that point, because I, by this point, had a lot of JavaScript experience, and C++, not a lot, but I had some, and all the other languages that they do teach in university, like a bit of Prolog and SQL and all the other stuff.
Yeah, yeah. That Prolog, I don't see it used anywhere,
but I think it's a powerful system,
a powerful thing that should be used a bit more.
It's just not really relevant nowadays,
but when you think about the kind of problems it can solve, like recommendation systems, they can be easily developed with that. It's something that we
can still borrow from old technologies and bring it maybe in combination with the new
AI stuff, machine learning that is being developed.
We're, like, way off topic now, so let's just keep going down that route. Um, what is your thought on, like, this general direction we're going with, like, LLMs and AI, whatever term you want to use for it, whatever you call it, this is the route we're going down. There are clearly these big data systems that are sort of displacing a lot of...
We're not even just going to talk about jobs,
but displacing a lot of internet content, right?
Have you ever seen the chart of Stack Overflow usage
as ChatGPT came in?
Because it's going like...
It just plummets into the ground, and Stack Overflow is basically dead at this point. If you go to the site, a lot of the answers are just AI generated,
and most people aren't even asking questions there
because you can now get
basically the level of response you would get
from one of these systems
Yeah, I will admit that for several reasons,
and one of them being that I tend to be the kind of person who likes to get really good knowledge and control over what I'm doing.
I haven't ended up using LLMs very much in anything that I do.
But I do see the usefulness of them,
and I do think that it's something that should be embraced
as a tool that you can have in your tool set
and make the best out of.
The reason I don't use them as much
is because I think that if I'm able to
solve these complex problems myself
without relying too much on them,
or at all right now,
then I should be able to help my customers better at KDAB, where I work at.
But yeah, overall thoughts on LLMs, machine learning, and these things.
I'm more concerned about the economic problems that arise by having these technologies around, and the things that they're being used for. Essentially, they're being trained to generate anything,
images and videos and stuff.
And all of this is done based on online content.
And I'm a bit concerned that by making some of these resources very accessible to everyone, at the same time we might be devaluing the worth of the people who make the art and the people who are gathering all of this content.
So, a lot of times in the tech industry we like to focus on using tech to solve problems, but sometimes there's a better way to tackle the problem. And in this case, I think that to provide better solutions, we have to also think about economics and try to see in what areas there is, like, an actual need for something to exist, and then try to focus on that when it comes to developing these models.
For example, this is one that I have in mind, and one that I've had in my board of ideas
and crazy things to make for quite some time, and I just never get around to doing it.
But I would like to see, for example, a series of models that can combine a translation between speech and sign
language, so I could suddenly, um, be looking at you, and you are talking to me in sign language, and we are, I don't know, saying some things. My sign language is super rusty, my American Sign Language is super rusty right now, but I could see you and then hear you in my ears, and then I can reply to you in speech. And this model could, using one of the fancy glasses, like the Snap one and the Meta one that they just showcased, and they probably won't be released for a few years...
Using one of those, the other person could actually see
an overlay of your arms in sign language doing the translation.
And that is a use case that I would love to see.
And yeah, while I was doing the I-Corps program for QPrompt, I also learned that one of the tasks, probably the only task that has to do with software to any extent
that people are really frustrated with is video editing.
Unlike me, most people don't like video editing.
It's tedious work.
And if there were more AI assisting tools that could make that job a little bit easier,
I think that's something that would be very welcome.
And I think Adobe knows that.
So they're aiming in that direction.
Oh, I have no doubt they're trying to do that
with like Premiere and things like that.
Because I know they're already adopting a lot of things
with the Photoshop side, right?
But images are an easier problem to solve
because it's easy to understand what people might want from an AI-assisted tool, you know,
like AI background removal, AI, uh, color balance. Like, these things make sense. When it comes to video, obviously, like, AI background removal also makes sense in a video context as well, but I'm sure there's a lot of things that people have thought about but maybe are not, I guess, computationally reasonable to do at this point, at least doing it, like, locally on your system. I'd be curious to see what people can come up with in this space, because I've not really thought deeply enough about the problem to
really even consider what I might
want to have outside of, you know, the basic stuff like background noise removal and things like that,
but I don't do a ton of complex editing anyway, so for me, it's probably not something I would heavily consider.
I think spaces where LLMs are doing, like, really
good work are in, like, in medical digital imaging. I'm seeing a lot of interest in using
this tooling for, like, cancer detection, or not just detection by itself, as, like, an assistant tool with a doctor. So the doctor can analyze an X-ray or some CT they have, and you can then also have this system that does a secondary analysis that might spot something that maybe the doctor had missed, or maybe, you know, gives them an idea that they can further analyze and have other people look at as well.
In this space, I think it's pretty hard to argue against it being a useful tool. And once it gets better and better and better, I don't think there's anyone that would say, oh, I don't want the AI tool to analyze my X-ray to see if there's something there, if it has maybe a higher detection rate than, like, an actual doctor would have. And there's also cases in, um, like, with, um, legal cases as well, where there is just a lot of legal code to go through. Obviously lawyers manage to do it, and that's why they get paid so much money to do so, but there might be, like, weird esoteric cases and interpretations of law that you might not initially consider, and in cases like this I also see a lot of value. Where I don't see
as much value coming from is a lot of what you're seeing in, like, interest in the programming space and, like, the marketing space and things like this. It is basically just taking the human work out of the equation, right? It's replacing the junior developers, replacing, like, the copy-paste marketing where, you know, you have these marketing pieces... like, I can see that your marketing department are not the most creative people, right? And in cases like this, like, you know, there's that corporate art style where they have, like, the really long bodies and the tiny heads. Like, replacing art like that, that's where I'm seeing a lot of the interest right now in it being used. But I am very much in agreement with a lot of people that we are in an AI bubble right now, and it's kind of like the dot-com bubble, where you had this time where people were just putting dot-com in everything. You know, you have, just, let's name a town.com, let's talk about how we're going to use the internet with every little thing. And you're seeing that right now with AI as well, where it's like, Taco Bell puts AI in their financial report. It's like, what are you doing with it? I don't know,
just put it in there. And for a while, that was just being used as a way to raise up company values.
And I think what we're going to see happen is like what you saw happen with the dot-com bubble,
where companies like Amazon lost like 90% of their value, but Amazon is a real business. Amazon has a purpose. It just at the
time was severely overvalued. And I feel like a lot of what we're seeing now is going to happen.
Nvidia right now, I believe is the most valuable company in the world, which is crazy. And most of
it's being driven by AI. But Nvidia is a real company, right? They make a real product that people want. It's just, I don't think they're gonna stay there much longer if we keep going down this route of putting AI into everything. There's gonna come a point where that investor interest dries up and the next big exciting thing comes along.
Right. There's a few things that, like, for example, um,
I do think the investor interest is going to dry up in the sense of, they're going to become more savvy when it comes to what to invest in. The AI isn't going to go anywhere, because it is something that people will continue to improve upon.
So it has like a real measurable use case, which is different to like, you know.
Right.
And it did before this hype cycle started happening.
For many years, since like 2012, people, you know, could use predictive text in their phones and stuff.
And so forth.
AI is imbued.
It's embedded into so many aspects of our life.
And we just don't notice it.
So investors are going to stop thinking, worrying, being concerned with AI being used in the names of things and products.
And they're going to probably focus a bit more on the actual solutions. Of course, investors come and go, new generations come in,
and they aren't as aware of the importance of value propositions,
and they will fall into the trap of the next fad.
Um, but, um, yeah, as far as AI goes, you know, let's go with machine learning, because that's what's really behind the scenes.
AI is a weird term, because depending on who you ask, people will consider classification systems AI, but that wouldn't be what the general
public considers AI.
Right, right. And yeah, classification systems are the base of all of those examples
that you gave in the medical space. Almost all of that is through classification. So it might not be what people think about, but in practice, when it comes to development, it is a great deal of what's being used. It's a bit older than the newer LLMs, for example. Transformer-based models have become a lot more popular, and it's because they are good at solving more complex tasks.
I think the way forward for AI might be
finding more types of solutions to complex tasks
and then applying that to a greater set of things.
Yeah, I think one of the things that made me realize that this definitely has a lot more value than I initially thought is when I
realized how normal people use a search engine. Like, I see why companies like Google are so invested in these systems, because they know it's going to kill search. I, you know, I use a search engine and I'll be like, I want to look up C++, so I'll be like, C++ docs. Or I want to know something about the Prolog language, I'll look up Prolog language. I see the way that normal people use a search engine, and they just write, like, giant sentence questions in it. And I get it, I understand, like, it's hard to know how to use a search engine, but this makes sense, why tools like ChatGPT and all that gained so much popularity so quickly. Because if you're asking questions in that fashion, you're gonna get a much better answer, because you just don't know how to use a search engine.
That's true, that's true. And the very fact that they're using a question means that the generative model will predict an answer. So it's more likely to get things right simply because you formatted it as a question.
Or, at the very least, I should say, it's very likely to sound like it's getting things right.
That's... yes, that's a very good point. They are very good at... they're very good at, how would you say it, like, being convincing, I guess.
Yes, absolutely.
That's one of these problems that I know there's a lot of interest in resolving: how to embed an underlying system of logic and a system of truth into these systems. Because as it currently stands, even if it's gonna give you the right answer, you can gaslight any of these systems into telling you something that's not true. Like, a basic one is, how many E's are in the word strawberry, for example, or how many R's are in strawberry. And it'll, like, it might get the answer right, and then you just tell it, no, that's wrong, and it'll just come up with some other answer. Because, especially with, um, well, the counting letters one is an interesting one, because it's just, it's a thing that makes sense to you. If I asked you how many R's are in the word strawberry, you're going to think about the word, like,
okay, yeah, I understand how many letters there are.
But most people don't seem to understand how these models actually function.
And they're not actually dealing with the raw text that you're sending them.
It's a token-based system.
These phrases you're putting in have been tokenized and have been dealt with magically in the system, which, there's no point getting into how these systems work right now, it'll take too long. But basically, the point I'm getting at is, what you're writing isn't what the model is seeing, or what the system is seeing. So it makes sense why it just cannot do questions like that. Or another one is when these systems started adding in, um, the ability to have them draw based on what you write. You know, generate a circle. And sometimes they just refuse to make a circle. They'll make a sphere, or a, um, like, a spherical-like shape, an oval, or anything close to a circle, and they'll never actually generate a circle, when that seems like such a basic thing. But, I don't know, I think the important thing is,
anyone who thinks these systems aren't going to get better, I think, has just not been paying attention. You know, you'll see people say, oh, AI art, it's not going to happen, it can't do hands,
like, that's been resolved for a while now, and you always have these, like, little issues that
people point out, like, oh, it can't do this, it can't do that. But I think what's important to remember with any of these systems is, the way you're seeing it right now is the worst it's ever going to be.
Yeah, that is true. I mean, the tech generally progresses. It ends up getting better as a result of all the improvements, discoveries that accumulate over time.
Right now, I was recently watching some videos from a lab at MIT, and they were discussing intelligence.
Like, how do we know the system is intelligent?
And how do we make sure that we understand and that there are steps of reasoning in the system?
And that's where all of the math is currently focused. All of the engineering is gearing towards that; all of the research efforts are gearing towards that.
Let's try to understand where the knowledge is encoded,
where everything is happening, so that then we can make the most out of that technology and
apply it in other ways. So as that is better understood, we are bound to get better at all
the other tasks that we're giving to machine learning models.
Maybe this is kind of out there, but what are your thoughts on...
Remember, like, maybe six months or a year back,
you had, I think it was like an engineer from Google or one of the other companies coming out to try to argue
that the model they had was sentient.
It was displaying what seemed like something... that the lights were on. Now, my thoughts on that was, it passed the Turing test. That's my thoughts, but I don't know if you have any different thoughts on that.
Well, I felt sorry for the guy, to be honest.
I thought he was seeing something,
and he was too quick to make an assessment of what it was.
Wait, it was that long ago?
Okay. Jeez, time's gone by.
Time passes quickly.
Yeah, okay.
Yeah, sorry, I cut you off there.
Oh, don't worry.
And so, yeah, I felt sorry for the guy because I thought, okay, he is probably assessing
that there is knowledge, which is true, you know, these systems do contain knowledge,
it's encoded into the statistical patterns that are used to model the model, but
but making the leap from "this has knowledge" to "this is sentient" is something that, you know, you'd have to go into a greater level of understanding of the system itself to maybe identify what the measurements are for the emotions in this system. Let's go fully hypothetical
here. If a system was trained to, in its training, develop some kind of sentience, there would be a rational process that would result in the equivalent of perhaps various different emotions being encoded. And that's not something that, unless you're able to measure it, you will be able to make the assessment: yes, this system is actually thinking in a way that is similar or close enough to what we could consider sentience.
So for that, I think we're pretty far away still.
But when it comes to the developments,
we are seeing a lot of interesting stuff
in how the math can be employed.
We were talking about the universities
and what they're teaching.
I think the way forward,
if some university wants to make the most
out of this vertical,
the way forward will be,
how do we understand the actual systems?
What is it that's going on? What's the math, and why do these statistical patterns create something that we can employ in a way that seems intelligent or specialized, that encodes this knowledge and that we can apply correctly? But yeah, the key is in the math. Nevertheless, over at the engineering side, you know, we'll focus on the applications of it, and over at the marketing and the business side,
And they don't always understand the true value that needs to be satisfied before they are already releasing a product.
So that's the part that ends up frustrating all of us
when we see a program that makes you think
why?
So yeah.
You're always going to have this
mismatch between what the engineers know
the system can do and
what the marketers
will sell the
system as. An example
I've brought up a couple of times before is
Chevrolet?
Ford. One of them.
Where they were using a chatbot
for selling their cars or as part of like their system to sell cars and
the chatbot agreed
that the price for the car was a dollar.
And I believe they had to honor that.
Like, the guy took them to court, and they actually had to honor that price.
Yeah, yeah.
And there was a similar case as well.
I don't recall what it was right now.
But yes, if you're going to put one of these systems into production,
like Deja Vu, this is from last episode,
then you have to make sure that it has been battle tested,
because otherwise the trolls are going to come around and mess it up for you.
Yeah, yeah. I think, again, it goes back to what I was saying before.
There's a lot of companies who are just deploying this because it's the
exciting thing.
It's Hey,
let's,
let's just get AI out there as quickly as possible.
And I,
it seems a lot of that stuff has been sort of reeled back in once companies have realized
maybe there are things we can't actually do with this yet.
As much as they would like to fire every single drive-thru worker
and replace them with an AI system.
Like I've brought up before,
if you want to speak to a person
and you're at an AI
drive-thru, just order
100,000 straws
yeah
I... as always, there's multiple solutions to a problem. I personally like the way that a local restaurant chain has done it here where I live. They are called El Mesón Sandwiches, and their approach is, there's an app, you can go ahead and make your order there, and then you just walk into the restaurant and pick it up when it's ready. And a while back,
another YouTube channel, Food Theory,
made a video on the chances of getting orders wrong
when you order in person inside the store,
when you order through the drive-thru,
and when you use a menu system of
some kind, whether it's an app or a big kiosk that you press things on. And whenever people made the order themselves on the kiosk, the mistakes were lower. So, um, in a way, if you want to make sure that you cut some costs,
just have people make the order themselves.
They will
take longer to make that order. That's a fact.
There's studies
on the...
These are quite old,
but there are some studies on
the kiosks for supermarkets to pay and go out.
And what they found is that people are about one and a half to two and a half times slower,
maybe three, I don't quite remember the real number right now. They are a lot slower than the actual worker at processing the order, so the lines end up being longer. But yeah, when it comes to an order that has to be customized and it's very specific, then you do benefit from having the customer be the one who creates the order
themselves.
When it comes to checkout at a cashier, then you're just taking away jobs then.
I always like to bring this up whenever the idea of, um, these order systems comes up, because if you are going to have customer ordering systems, you need to make sure you actually write the software correctly. Um, there was this really, really famous video from McDonald's, I think like seven years ago now. Um, someone realized that through the addition and removal of certain items, it wasn't adding up the cost of the burger correctly, so you could get it to the point where it was offering you free cheese, so you could just order hundreds of slices of cheese just entirely for free.
That's crazy.
You know, I wonder...
That sounds like a video game that I played a while back.
I love point-and-click adventure games.
And in one of them, it's a German game,
I forget its name,
something Deponia.
It's from a series called Deponia.
And in one of the entries,
there is this
little puzzle where you have to get them to give you a free cookie. But in order to do that, you have to make this order and customize it, and the way that you do it is that you take the most expensive item... no, I think you take the cheapest item, you make it as expensive as it can be, and then you remove all the items that you had, and you're left with the free cookie. So yeah, that's a pretty fun game.
I highly recommend it.
I don't usually play point-and-click games, but I'll have to have a look at that one.
Yeah, it's a three-parter, but there's one version that consolidates all the games into one and you can get that one for cheaper. It's also the most bug-ridden version, so make sure to save often.
So let's get somewhere back towards the main topic of why you're here.
Um, right. Somebody asked me to include AI in QPrompt.
Did they?
Yes. Okay.
Um, apparently the audience is divided into, like, let's say 20/40/40. The first 40 percent is individual users, people like you who don't have a dedicated prompter operator controlling the prompter. Then you have dedicated prompter operators controlling the prompter for other people, their customers. And then the remaining 20 percent is probably distributors who package up QPrompt and then just sell it or include it with whatever they
sell. Which is great and definitely a thing under the GPL, so I'm happy with
that. And the people who aren't prompter operators, the people who do it themselves,
they are seeing that other applications
are starting to offer that text scrolling
happens automatically based on what they're reading.
And that makes them want that.
But by wanting that, they're ignoring the fact
that one of the purposes
of the teleprompter is to control the speed at which you're speaking. If you go very fast,
the text will kind of force you to slow down into a more intelligible speed.
And the same goes if you're too slow, then it kind of forces you to read a little bit faster.
So whatever I end up implementing, I wanted it to satisfy that need of matching a desired words-per-minute or lines-per-minute measurement. And for that, what I figured so far is that this system needs to be divided into a few parts, all processed in parallel via multi-threading. There will first be a speech-to-text model, probably based on Whisper. That's going to be generating all the text as you're talking.
And that's going to be in one thread.
Then on another thread, you are going to have a comparison
happening between the actual script and that generated text.
And then it's going to loosely match things
to give you a rough
estimation of what have you read so far.
And then
that result will be used
to highlight or color
text as you are speaking.
And once... and then enters the user. If the model hangs and it isn't updating fast enough,
then on the user UI thread,
the user will be able to manually move the cursor
a few words forward or a few words back,
depending on where they are actually at.
And then by forcing the cursor into that position,
that makes the model that takes
care of matching the generated text with the actual script, it makes them resync at that
point. And then once we have all of that, we end up with a position that the user is reading, and then it's a matter
of moving the text to reach that position in their reading area towards the center of the screen.
So then for that, there will be a separate model, this will be a second or third model,
probably a second one because the comparison
part will probably just be heuristics.
But the other model will
control the speed based on
the cursor placement and based on
other parameters such as text
font size and the dimensions of the
screen overall.
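As a very rough sketch of the matching step described here (hypothetical names and a simplified exact-match heuristic, not QPrompt's planned implementation), the script-position estimate that would be fed to the scroll-speed controller could look something like this:

```cpp
#include <algorithm>
#include <cctype>
#include <sstream>
#include <string>
#include <vector>

// Lowercase a word and strip punctuation so script and transcription compare loosely.
static std::string normalize(const std::string &w)
{
    std::string out;
    for (char c : w)
        if (std::isalnum(static_cast<unsigned char>(c)))
            out.push_back(static_cast<char>(std::tolower(static_cast<unsigned char>(c))));
    return out;
}

std::vector<std::string> splitWords(const std::string &text)
{
    std::istringstream in(text);
    std::vector<std::string> words;
    for (std::string w; in >> w; )
        words.push_back(normalize(w));
    return words;
}

// Estimate which word of 'script' the reader has reached, given the latest
// chunk of transcribed speech. Only a small window ahead of the previous
// position is searched, so skipped or misrecognized words don't derail things.
// A real matcher would be fuzzier; this uses exact word matches for brevity.
size_t estimateReadingPosition(const std::vector<std::string> &script,
                               const std::vector<std::string> &spokenChunk,
                               size_t previousPosition, size_t window = 30)
{
    size_t position = previousPosition;
    for (const std::string &word : spokenChunk) {
        const size_t end = std::min(script.size(), position + window);
        for (size_t i = position; i < end; ++i) {
            if (script[i] == word) {
                position = i + 1;
                break;
            }
        }
    }
    return position;  // hand this to the scroll-speed controller on the UI thread
}
```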
QPrompt has a lot of parameters when it comes to how you can make that text appear on screen.
You can adjust the size to be very tiny or humongous.
You can adjust the corners to essentially make it towards the center of the screen
and then you can move that to anywhere on the screen
and that allows you to have blank space around the corners and stuff,
which is really good if you have the teleprompter way close to you.
If you push that prompter back,
then you typically want larger text
and use the full width of the screen.
And if you're an Xorg user, you might
want to rotate that display 45 degrees,
and then you have more horizontal space
on the diagonals.
So yeah.
Easter egg.
Anyways, hopefully.
I've yet to make that one.
But yes, and the idea is to have...
The idea is that in the end, okay, we have those two models.
And if the user is going too fast, the prompter will go at the necessary speed to make sure that they are reading where they're supposed to be reading.
But at the same time, we are going to be counting how many words they are speaking
or how many lines are they scrolling.
And based on that, we compare that to the desired amount.
And then we can essentially give them some graphical feedback,
a border around the screen and have that border be a different color and gradually intensify or reduce in intensity
to essentially provide that feedback telling them you're going too fast or you should probably go a
little bit faster. And yeah, so that's where I want to go next with QPrompt basically.
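As a small illustrative sketch of that feedback idea (made-up thresholds and colors, not QPrompt's actual design), the deviation from the target pace could be mapped to a border color like this:

```cpp
#include <algorithm>
#include <cmath>
#include <QColor>

// Map the reader's measured pace against the desired pace onto a feedback border.
// The further off-pace the reader is, the more intense the border:
// red when too fast, blue when too slow, invisible when within tolerance.
QColor paceFeedbackColor(double measuredWpm, double targetWpm, double tolerance = 0.10)
{
    const double deviation = (measuredWpm - targetWpm) / targetWpm;  // signed, relative
    if (std::abs(deviation) <= tolerance)
        return QColor(0, 0, 0, 0);                                   // on pace: no border

    // Intensity ramps from 0 at the tolerance edge to full at 50% off-pace.
    const double excess = std::min(1.0, (std::abs(deviation) - tolerance) / 0.5);
    const int alpha = static_cast<int>(excess * 255);
    return deviation > 0 ? QColor(255, 64, 64, alpha)                // too fast
                         : QColor(64, 64, 255, alpha);               // too slow
}
```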
That and the automation features, like you press a button and... I mean, not you press a button, you scroll through an area,
and then suddenly that sends a signal over to OBS to trigger a different camera.
So, yeah, stuff like that.
But in the end, where we'll actually end up going will depend on where the Patreons want me to go,
because I do a poll,
and everyone who's a Patreon gets to vote.
You don't even need to be a paid subscriber.
You just have to be in the feed,
and you get to vote.
And yeah, I have a few features from a previous poll in the backlog,
so I probably won't start working on this
until I've worked through at least a couple more of those from the backlog.
But that's the goal.
Hopefully, we can get more convenience
for everyone who wants to use QPrompt.
Here's a question when it comes to feature additions.
I'm sure you've had people ask you
just for things that are kind of crazy.
Where you're like, yes, I could do this,
but this would take a lot of work to do.
Or this would be cool,
but it's, you know,
it's just not viable, at least right now.
If there's anything, like, fun and notable
that you can think of,
I definitely would like to hear about that.
Okay, well,
there have been a few things like that,
but I think almost everything stems from a need at some point.
I'm trying to... let's... we'll probably go and take a quick look at our issues, that
can take a moment. But almost everything that people do want to see
is because of some personal need that they've had.
But I think some of the most specific things have to do with support for very specific hardware.
And some people want maybe a particular device to control the prompter.
And I'm a bit hesitant when it comes to adding support for specific devices.
Because not everyone's going to
have the same device and I don't want to promote
one device over another.
Let's see over here.
Oops.
Is everything okay?
Okay.
There's a slight green flash for a second, but no, it's fine.
Okay.
Yeah.
Let's see. Okay. Yeah. And let's see.
I think
this one's a little bit
sad. There is
one feature that I do have
in the backlog that I
know that
I have to work on at some point.
And I want it to be in the program,
but I honestly don't want to develop it.
And that is...
And I'm sorry because I know that this guy...
I have a supporter who very much wants this feature,
but that feature is MOS support.
MOS is a protocol
used in
studio environments, and
the protocol specification
is open source.
Media object servers.
Yes.
But the actual
implementation
is something that I would have to develop from scratch,
and
honestly, I'm not much of a network protocols guy, so that
is something that I'm a bit hesitant to add.
The way that I'm tackling it, because I am eventually, hopefully, going
to add it, that's my goal. The way I'm tackling this is I'm splitting some of the
layers of the specification into actual problems that need
to be solved. And for some of those, I'm
just solving them in a more
generic way. I don't have to be MOS compliant, but I
can still solve the problem.
And once I have that,
then I try to adapt
the solution to make it fully compliant.
So that's
how I'm going
with that solution.
And the other
things I have
are really just a few things that are very complicated to develop.
Right now, we have one feature that about 50% of people are using.
It's a really rough estimate.
I don't really have statistics for this.
I've yet to add telemetry.
I'm going to add it, but I've yet to add it.
Anyways, of course, it's going to be opt-out.
It's not going to send any data until you've been prompted, and you will have the option
to opt out right when you start the program, before anything actually gets collected. So that's the
way I'm doing this. But since I don't have telemetry now, I rely on direct conversations.
And a lot of people use this feature, which is, I call it screen projections.
Basically, you have QPrompt and the text scrolls, but sometimes you want it to scroll on a separate display, and have that separate display scroll a flipped image, an image that has been flipped horizontally. The purpose is that when that image goes off the screen into the teleprompter,
the teleprompter has a beam splitter glass that reflects it 90 degrees, and then it goes towards
the viewer. And when it does that, it's inverting everything that is being presented on that axis,
which is equivalent to flipping the image horizontally.
Okay.
Which makes it impossible to read.
If you are controlling someone else's prompter,
you are essentially looking at everything in reverse while they're looking at it
in the right direction.
Because the video output from the computer is actually reversed horizontally.
So the workaround to that is to have multiple displays,
have some be reversed and others not.
And the best solution for this problem is to use dedicated hardware.
There is hardware that will take your monitor output
and invert that image for you
and send it to three, four prompters at the same time.
And you can combine hardware.
You can have the thing that inverts it and then the
duplicator to have the multiple outputs.
But if you don't have access to the hardware,
or you don't want to spend the money on the hardware,
then you can do this feature in software
by rendering the same text contents to multiple windows
and inverting it in some, but not in others.
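For anyone curious what the software route can look like, here is a minimal, hypothetical Qt Widgets sketch. It is not how QPrompt implements screen projections (QPrompt is QML-based), just an illustration of showing the same content mirrored in a second window:

```cpp
// Hypothetical Qt Widgets sketch (QPrompt itself is QML; these names are made up).
// A second window that shows another window's contents flipped horizontally,
// the way a beam-splitter prompter expects its input.
#include <QPainter>
#include <QPixmap>
#include <QPointer>
#include <QWidget>

class MirroredView : public QWidget {
public:
    explicit MirroredView(QWidget *source, QWidget *parent = nullptr)
        : QWidget(parent), m_source(source) {}

protected:
    void paintEvent(QPaintEvent *) override {
        if (!m_source)
            return;
        // Grab the source window's current frame and flip it around the vertical axis.
        const QImage frame   = m_source->grab().toImage();
        const QImage flipped = frame.mirrored(/*horizontally*/ true, /*vertically*/ false);
        QPainter painter(this);
        painter.drawImage(rect(), flipped);
        // A real implementation would render the shared text scene directly in each
        // window (flipped or not) instead of grabbing pixels every frame, and would
        // call update() whenever the prompter scrolls.
    }

private:
    QPointer<QWidget> m_source; // the un-mirrored prompter window being duplicated
};
```

A non-mirrored copy of the same content would simply skip the flip, which is how one output can face the operator while another feeds the beam splitter.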
The move from Qt 5 to Qt 6 broke this feature in QPrompt.
And we are very close to a Qt 6 release,
but I'm forced to
not include the feature
because
I have no idea how long it will take me
to finish porting that feature because I
basically have to not just rewrite
the feature but rewrite pretty much
big chunks of the entire
program to get that working
and
that takes a lot of time and I sometimes just don't have that
much time. So the solution that I ended up taking, thanks to Neal Gompa, who I met at Akademy, and
he just told me, release it. You're doing a disservice by not releasing it. You can go add the feature later.
What I'm doing because of that is I'm just going to call it 2.0.
So then it's a 2.0 release, you know, this feature was removed from 2.0, but it will be added afterwards. And at the same time as I release 2.0,
I'm also going to be releasing a more recent version of the 1.x, but that one won't be supported.
And the idea with that release is to release some binaries that I already have made from several
months ago, which are the latest binaries
made with Qt 5.
They contain most of the features and bug fixes from 2.0.
Not all of them, especially the bug fixes,
but they are very close.
So those people who are still in need of that feature,
they can at least get most of the functionality through this more recent build
that isn't the latest one. But yeah, at this point I'm gonna essentially diverge and all the focus
will go on 2.0. That release is just there to satisfy the needs of those who need the feature
until I finally figure out how to do it
I have a solution in theory, it just doesn't work when I program it.
I see, I see. Yeah, it's complicated, but yeah, I'll get to work on it at some point.
It sounds like there's a lot of cool stuff that you want to do with this. It's just,
once you've done, like, the core fundamental stuff, now you're on to
the complex stuff, right? Like now that you have a working teleprompter that
does the job as a teleprompter,
now you have these additional use cases that come along where you're like,
that's cool, I don't know how to do it.
Yeah, there's a few of those. And in other cases, I have this really cool feature that I found a library to use and implement,
which is global hotkeys. As you're aware, Wayland doesn't really support them. They do now, but
the apps need to add support for that. Oops. Okay, resetting. Can you see me?
Yep, yep.
Okay, cool. Anyways.
And the
global hotkeys, a feature I'm adding through a library. But that library, it doesn't have an active developer. It has an active maintainer who will look at your merge requests, suggest fixes, and then once
it's good, it makes it into the library. But there's no one working on it to add the Wayland
support. So I don't like the idea of adding a feature that isn't available on all systems
equally, unless it's something very specific, like, for example, global menu support on macOS
and Linux, and the Touch Bar support on macOS. Those are very system-specific features
that the program has. But when it comes to a feature that should be available in all platforms the same,
then I feel bad putting the feature in some platforms but not on others.
And that's what prevents me from adding,
that's another reason that prevents me from adding support for specific devices
that control the prompter.
And in this case, it also prevents me from
finalizing the hotkeys.
I will
likely add the hotkeys at this point,
using this library that I found that is
really good, and
that will add Windows, Mac, and
X11 support.
As for Wayland,
I'll probably
put it on a poll for the patrons to decide.
For Wayland, you're still waiting on the desktops to implement their side of it.
That's the problem that we have,
because the portal exists in the API,
but I don't think...
maybe KDE has an implementation of the hotkey portal,
but I don't think GNOME has one,
Cosmic doesn't have one,
wlroots definitely doesn't have one.
So there's not a ton that you can really do from your side to really resolve this problem.
I think honestly the easiest way to do it is just offer WebSockets and let someone write
a controller application for it. Like, that's honestly the easiest solution. Yeah, I will probably do that, you know, like in addition to implementing the library, because
the library is pretty straightforward. But when it comes to adding this library, while I was
planning out how it's going to be done, I realized I should abstract this a little and then have interfaces that can be derived from the same class.
And from there, you could have the remote control.
You could have the hotkeys.
And yeah, so it will share some of the code, which makes it scale easily and makes maintenance a lot easier.
So yeah, I'm looking forward to that.
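A minimal sketch of what that shared abstraction could look like, assuming nothing about QPrompt's real class names: one controller interface that both a hotkey backend and a WebSocket remote derive from, so the prompter only ever sees one set of commands.

```cpp
// Hypothetical sketch of a shared controller interface; the names are made up
// and do not reflect QPrompt's actual architecture.
#include <functional>
#include <utility>

// Commands the prompter understands, regardless of where they come from.
struct PrompterCommands {
    std::function<void()>       togglePlay;
    std::function<void(double)> setVelocity;  // lines per second, signed
    std::function<void(int)>    nudgeCursor;  // words forward (+) or back (-)
};

// Base class every input backend derives from.
class PrompterController {
public:
    explicit PrompterController(PrompterCommands commands)
        : m_commands(std::move(commands)) {}
    virtual ~PrompterController() = default;
    virtual void start() = 0; // begin listening (register hotkeys, open socket, ...)
    virtual void stop()  = 0;

protected:
    PrompterCommands m_commands;
};

// One backend binds global hotkeys through some third-party library...
class HotkeyController : public PrompterController {
public:
    using PrompterController::PrompterController;
    void start() override { /* register shortcuts, call m_commands on key press */ }
    void stop()  override { /* unregister shortcuts */ }
};

// ...while another exposes the same commands over a WebSocket for remote control.
class WebSocketController : public PrompterController {
public:
    using PrompterController::PrompterController;
    void start() override { /* listen on a port, translate messages into m_commands */ }
    void stop()  override { /* close the server */ }
};
```

Each backend can then be compiled in or left out per platform without the rest of the program noticing, which is where the easier scaling and maintenance come from.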
So a lot of people talk about the fact that there are a lot of developers who just,
you know, make things entirely...
Most people make things, like, you know, for their own use case,
but it sounds like a lot of the stuff that you do for this, it's just,
it sounds like you just want to make the application better. It's not necessarily even just something you would personally use yourself.
It's just, right, I guess, you know, you're in this deep, might as well keep going.
Yeah, it comes from the desire, you know, that I had a long time ago to fill the need for applications in the Linux space.
Because I've been using Linux for a very long time, probably.
My first experience was maybe 2005 with those Red Hat disks
that came in the little books and stuff, the For Dummies series.
Oh, yeah, I put it right here.
Oh!
Yeah, here we have it.
Hey!
And yeah, I no longer have the disc.
My grandpa has that one.
But yeah, I have some copies.
What version was it?
Was it Red Hat Linux 2?
Red Hat Linux 8.
8.0, if I'm not mistaken.
Not Red Hat Enterprise Linux.
Red Hat Linux, yeah.
Yeah, I love reminding people
that was the thing that existed.
Back when Red Hat didn't...
It was weird,
because their desktop offering
and their commercial offering
at the same time
were both the same thing.
And then early 2000s, they wanted to split them up, so
that's how we got Fedora Linux, which became the desktop community offering, and then RHEL became
purely the, like, commercial corporate offering. Right, and they saw the need for
that move because at that point, with the advent of broadband internet,
it was becoming far easier to share the media online and you didn't really have the need to purchase the CDs anymore.
Back then I wanted to use Linux.
I had my machine running XP and I dual booted to Linux at first.
And it was just...
I just loved the system a lot, but at the same time I missed my software.
And I wanted to do video editing on Linux.
And the best thing available at the time was Cinelerra.
It's a cool app, but it takes so much longer to do things there than it does on Adobe Premiere.
So yeah, that led me to want to make software for Linux, and it's essentially
the first thing that led me down this path, aside from the fact that I started working with computers
since I was very young, breaking them since I was three.
So yeah, I have to thank my grandfather for that.
Why did you actually start using Linux? How did that even become a thing that grabbed your attention?
The discs that my grandfather received, and my aunt, from different sources, primarily a computer technician that was friends with my grandfather and that had worked with him.
And he gifted these discs, he gave him copies, and he told them, these are legal, by the way.
And then my grandfather and I have always been very close.
And he put Linux on my computer with me.
We had a lot of fun installing many different operating systems, dual booting, triple booting. And one of the first ones that I
tried, I think it was
Mandrake,
followed by SUSE,
and then
came Red Hat 8.
And
I tried it.
I thought it was super cool,
but I couldn't really do much with them.
And what I liked the most about the Linux distros back then was the games that they had.
You know, I was just a kid.
And if you've ever read Little Brother by Cory Doctorow or heard the audiobook,
There's a chapter dedicated to when the protagonist discovers Linux and how he was at first enthralled by the fact
that there were all of these free video games
that he could play with it.
That chapter is very accurate,
a very accurate representation of what that experience
really was like back in 2005.
But you know, the games were only that many, and I didn't really know how to
connect Linux to the Internet back then because I didn't have broadband. So this
was all from the CDs and I ended up going back to Windows. And then
Windows got slow because I didn't have much RAM. And
XP has this issue
with paging. If
there is too much RAM consumption in XP,
the page file size will
increase, and it will double every
time. And if it doubles beyond a certain
point, and you just don't
have enough RAM on that system,
it will crawl to a
halt. And then at that point
you basically... you have two options. You can reinstall the system, or if you know
that that's what's causing this, you can manually reset the page file
size in the registry. And that's it. And I obviously didn't know the registry hack back then.
I didn't even know what the problem was.
I ended up resetting my XP system multiple times
every six months, then every four months.
And then at some point,
I had just finished setting it up,
and it was suddenly slow again
after I finished editing everything.
And at that point, I was so frustrated. And I had found this... this was before blogs, but this was like a blogger
online. His name is Daniel Clemente. He's from Spain, and his site is still up, danielclemente.com.
And he posted about Linux and how he was using it and how Windows was really bad.
And there was an entire section dedicated to Windows crashes and bugs.
Yeah, and that convinced me.
He convinced me through his website that I should give Linux a go and not keep trying to make Windows work for me because I was just sick
of it at that point. I jumped over to Linux and I will say I never looked back. But I'm the kind
of person who likes to keep a pulse on all tech. So every time there was a new Windows release and
eventually Mac as well, I would be running all systems.
Nowadays I still run all systems,
but Linux is my main one.
It's where I spend, like, over 95%
of my time.
But yeah,
I still run all systems,
and at work
I'm known for being that guy who
if you have some
problem for a specific system,
you know,
you can probably give that to me and I might be able to figure it
out because
I'm used to working with everything.
I probably should spend more time using other systems.
Like I've,
I've kind of gotten,
I've gotten really comfortable with the setup that I have on Arch.
It's been a while since I've sort of,
I guess, put myself out of my comfort zone.
And I used to do that a lot.
Like when I first started using Arch,
like I use i3 and BSPWM
and other like little window managers.
And I don't know.
You know, it's once you get to a point
where everything just kind of works,
there's not really a reason to stop using that.
As much as it would be cool to see, like,
what actually is the state of Windows today?
Or what is the state of macOS?
It's just...
I don't know.
I'm just happy where I am.
I would like to mess around with some more stuff, and Cosmic
sort of has reignited some of that flame, but besides that, I'm kind of just
chilling where I'm at. Yeah, I like to chill where I'm at, you know. It's a good thing to do, a good experience to have.
For that reason, you know, I no longer am distro hopping anymore.
I might check a distro, like, if it's something very, very, like, someone has talked to me about it, like, 500 times, then maybe.
But right now, like, my setup is very fixed when it comes to
how I work on Linux.
I use an Ubuntu
base. I also like my
Arch and my Jira bases, but
for commercial
work, Ubuntu is
the most practical for me.
So Ubuntu base,
and then on top of that,
I install the repos for KDE Neon. So it's
not like I install Neon from scratch. I go from an Ubuntu, which I have modified to have
something like Btrfs with full disk encryption, while booting with Windows with full disk
encryption as well. And then, well, the primary machine is just Linux. That's for the
other computer, the full disk encryption. And then I turn it into a
KDE Neon so that I can have the latest Plasma environment. Right, right, okay, that makes sense.
Actually, I just want to know about why Kirigami grabbed your attention,
because I know you're using Qt, but why did you want to actually, like, start messing around
with that? For anyone who doesn't know, Kirigami is basically KDE's answer to libadwaita, to that
effect. It's not exactly the same, but they serve a similar function. They're both UI toolkits.
Kirigami came first, but they absolutely, yeah, they are solving a very similar set
of needs. My reason to choose Kirigami is that... I wanted this to be a Linux app, and it being a Qt app was not enough. It needed to
use Linux libraries that could work on other systems as well, but really be
something that I can call, this is a Linux app. And that was just a caprice, something that I wanted.
It wasn't something that had a practical reason for being. And that's how I ended up choosing
Kirigami. I liked it because I thought, okay, this is designed in such a way that it can run on
mobile phones and it can run on the desktop and it adapts, because it's like a responsive design, but it's done the KDE
way. I like that. But in the end, most of the development for it really goes to
the desktop. There are some areas that you realize it lacks a
bit of polish when you use it on a phone. I have a PinePhone Pro up there somewhere,
and I was testing QPrompt there and I realized, okay, I need to tweak a few things here and there
because it doesn't just work out of the box. You really need to put some attention.
Otherwise, it's a bit frustrating to use.
But that aside, I thought it was a really great value proposition.
In practice, the mobile aspect is frustrating.
So I will say that as a warning to anyone wanting to use Kirigami to make their next iOS application or Android application.
Because it's C++, getting it to work on Android and getting it to work on iOS and iPadOS and all
of those other places, it's pretty tricky, and I haven't been able to figure it all out.
I'm still figuring it out, because I'm redoing the entire toolchain for the setup, installers and everything right now.
That's what's holding me back from releasing 2.0 right now.
But when it comes to Android, it's something that you kind of need to be very familiar with C++ and how apps are built there before you can actually distribute on any platform.
Because each platform has very specific requirements, and these are usually taken care of in other programming languages.
But in C++, it's a bit more manual. So if you want to make a mobile application that also runs on the desktop,
it's probably better to start with Flutter and then bring it to the desktop. If your application
needs to be a desktop application and then you port it to mobile, then you can also go with
Flutter. But if there are some desktop-specific requirements
that Flutter isn't able to meet,
such as having multiple windows,
which is the case for QPrompt,
then Qt is the best option.
I've heard there's some work
for adding multiple window support on Flutter.
It hasn't happened yet, as far as I'm aware.
But, yeah, at the time
that QPrompt was being developed, getting started with it, Qt ended up making the
most sense because of the performance and because it's Linux, and so that's why I ended up making
it with Qt and Kirigami. It is something I briefly looked at not too long ago.
I am really lazy when it comes to actually putting more effort into development.
I used to spend a lot of time writing code,
but nowadays I really don't do as much as I should.
I started learning Rust, but I kind of got lazy with it
and stopped doing that.
And I don't know.
Like, I really wanted to do more.
I think what I need to do
is just come up with an idea
for a project I want to work on.
So try to come up with an issue I have
and then work on that.
Because if I don't do that from the start,
I'm going to get distracted
and do something else.
Yeah.
I found that approach works.
That approach works really well.
And once you have found that project that scratches your itch,
the challenging part is to keep going with it.
I will not lie, you know, like QPrompt,
there are times that I just want to take a break from it.
Sometimes I do. I just disable the subscription for the patrons so that they're not charged
during that time. But overall, once the project grows to a certain level,
it stops being about you, which is frustrating, because then you kind of, like, have to keep
maintaining it or find someone else to maintain it, which is very difficult if you don't do that
from, like, early on. But, you know, right now, even though I'm at a state where I do want to
move on to other projects, I do also still want to continue working on QPrompt because there's
a lot of cool features that I want to add that I haven't gotten the chance to. And I'm holding myself from adding
them just so that I make sure that I add the features that are more frustrating to add
so that the entire user base needs are met.
Patrons I use for feedback. It's a good way to get an idea, for the people who are willing to support you morally and economically, of where their needs are, so that you can make sure to serve those needs.
But yeah, overall, I think the main reason why I tend to stick to maintenance on QPrompt and all of that is also the fact that I very much like the experience, the interactions
that come with being a project maintainer. A lot of developers, they don't like
the social aspect of it so much. I'm the kind of person that if you reach out to me
asking a question, I will give you full support as if you were, like, paying a monthly subscription for
lots of money for that support. You know, I don't care about that.
Let's just make sure that this is something that you can use
and that it helps everyone as much as possible.
I did get tired of answering the same questions,
so I stopped taking them via email,
and now I make sure that everything's public.
Because by making it public,
now other people can find the answers via their search engines,
and then I get new questions.
Yeah, that
that is
pretty important.
Even then,
you're still going to have people that just don't
search for something, and you're still
going to have people asking the same question, but at least
if it is public, you can direct
them to the answer you've already made
before.
That too. I end up not doing that because I do like giving personalized answers. But I do realize that if QPrompt's user base ever grows past a certain point, that it's too big
to be manageable, I'm probably gonna end up doing something like that, have a more organized knowledge base and then just
direct people there. Though sometimes I do realize, if you need to answer the same questions
a lot, it's probably a design problem, and it should be addressed in the software. Right.
Yeah, right. No, that's actually a good point. Yeah.
There's, how would I say?
There are some issues that are definitely going to be from your side,
and there's going to be other things where people just,
no matter how much you put it in front of them,
are still going to get confused.
But I think if you attempt to mitigate people legitimately getting confused by things that
are actual problems, it's probably for the best.
Probably, unless it takes a real long time to redesign the entire UI to make it so it fits a
whole new system. Yeah, right. Sometimes it's a design problem that could have been addressed
from the start. I have one of those in QPrompt. And I realized pretty late in
its development that a lot of people have the need to be able to edit the scripts
as they are being prompted.
This is very specific to those professional prompter operators, not so much the solo users.
The professional ones tend to do multitasking and go ahead and prepare what's going to be prompted next while the current
script is going. And in order to do that, you need to decouple the editor from the prompter.
That's not the case in QPrompt right now. I added something that no other teleprompter
application has, which is a Star Wars mode.
I'm sorry, that's an Easter egg.
OK, I added the ability to double click anywhere
on the prompter, and then you can start editing right there
the text that's on screen.
But that's a bit inconvenient, because it
can be useful for editing small things in what you're
looking at right now.
But if you have
to keep prompting and then add something at the end of the script, you can't really do that. And
for that, I need to have that decoupled interface, which is something that right now is just not
feasible with the current way that things are organized in QPrompt. So for that, I'm thinking of developing an alternative user interface, and then have
these two user interfaces come with very different defaults.
One of them, the current one, focused more on the solo prompters, and then the other
interface focused more
on the needs of in-studio operations.
That interface will have things like you
can move tabs around the screen to organize your layout
a little bit better.
And to do that, there's a library that KDAB develops.
It's called KDDockWidgets.
Again, reminder, I work for KDAB.
But KDDockWidgets is this really cool
tabbing configuration library
for docks, essentially windows
that can be docked into the main window
and into other windows.
And it's super flexible. And the advantage that KDDockWidgets has
over all of the other alternatives that are out there,
as far as I'm aware, is that it has QML support.
All the other ones are for widget applications.
But QPrompt is a QML application first, and it does use widgets in one spot and that's it. Everything else is done with QML. So because it's a
QML application, it makes sense to use KDDockWidgets for this. So that might be the path that I
end up taking: make essentially two different applications aimed at different audiences.
I don't like the idea of
maintaining two different applications
but if I
were to take that route
because this one will be aimed towards
the
studio environment
I'm probably gonna
sell that one. It's still going to be GPL,
but I'm going to sell it
so it's going to be part of
this experiment
that I would like to make which is
can free software
be self-sustaining
and so far
QPrompt I think will be a
terrible example for this because I've invested already so much time in it that I don't think I'll ever be able to recuperate back if I cared about the money.
But it's an open source project meant to satisfy a need that wasn't being satisfied a few years ago. Now there are other applications as well. Like, if you need a
very simple teleprompter that's minimalistic and just prompts for yourself, and you know you don't
care about all the other features, I forgot their name, Nosca 32 made a teleprompter in
GTK, and that one looks really cool. And I gave him my equation that I derived from QFlip a while back.
And I told him,
hey, if these programs use this,
it's very useful.
Go ahead and incorporate it.
And they did.
So each program has their own variation of the function.
They're not exactly the same,
but they do provide that smooth acceleration curve.
So yeah, there's that one.
And there are other open source applications
that didn't exist when QPrompt
and Imaginary Teleprompter were created.
But some of those are like web apps
that don't run locally.
Locally for Linux,
it's QPrompt, Imaginary Teleprompter
and the GTK app that I just mentioned.
And if you need one for the terminal, you can check out my Teleprompter for Terminals video on YouTube.
That was an April Fool's joke. Oh, is it? Okay. It's real.
Is it?
Yes.
Yes? Okay.
I said I just use VimWiki.
You could use it via Telnet.
I disabled that because it was...
Yeah, all the bots online were targeting my server
because they saw...
Oh, open Telnet connection.
Let me try to hack that.
Yeah, bad idea. Don't do that.
No.
Yeah, that's gone.
One last thing I want to ask you about
is why...
You've mentioned using GPLv3
a couple of times. Why
that license? Why specifically that?
Okay.
Teleprompters
in the traditional sense, they are hardware.
And when you plug a computer monitor or a tablet to provide the screen functionality, you're using the software there.
But nothing is preventing anyone from embedding everything into a package that is very tight and hermetic
so that the software is part of the prompter itself and that you can't really repurpose it
or reprogram it. And because of that, although it's not a very likely thing to happen, but if
that were to happen and
somebody decided, hey, there's this open source application, let me use that to provide
the prompting features.
And they used my open source application, I would want people to be able to tinker
with that hardware.
And the Tivoization problem that happened with GPLv2 would allow them to get away with that and make it hermetic if they use GPLv2 software as the base.
But GPLv3, because of its anti-Tivoization clauses, it's better for this particular purpose because it will protect the users better.
So that's why I
chose it. And the other reason why I chose it is that it's much easier to read. Yeah, fair enough.
Fair enough. Totally understandable. I thought maybe there was some more thought to it there,
but that's pretty much just the standard reason for using GPL, or specifically GPLv3 over GPLv2. Right, right. It's
those two essential clauses, the very reason why it was created in the first place. Yeah.
The Tivoization clause is interesting, because, like, Linus Torvalds has specifically commented
on that and how he doesn't actually care about this issue, and that's the reason why he never even considered it being a problem and why he didn't want to move the project
over to GPLv3. But I guess it just depends on what you're trying to do with
the project and what your goals are, right? Like, if your plan is to monetize it at any
point that probably makes more sense going down
that route, but, you know,
obviously, the thing that would make the
most sense is to not open source it at all
and just make it proprietary, but
I guess it's the
it's the
best of both worlds in this case.
Right, yeah,
I think so, and
yeah, you want to prevent people from
indiscriminately
using it in ways that
will go
against that freedom, which
is helpful for
certain paid
and commercial applications as well.
Because a lot of people, you know
this, and a lot of people in this channel
know this, but the GPL is a commercial license.
It's not prohibiting that use.
And it's really about defending freedoms and freedom to sell.
It's not really a freedom explicitly, but it's one of the things that you can do because it's not prohibiting what you can do in that regard.
All the prohibitions the GPL has
are entirely about protecting the rights of the people.
And that's what I love about it.
Well, on that note, it's basically time to wrap things up here. So I guess let people know
where they can find the project, get involved, support the project, things like that. Absolutely.
So qprompt.app is where you can download QPrompt. From there, I have a bunch of links that take you pretty much everywhere
on the QPrompt space.
We have a place for translators
to contribute their translations.
There's a place where we are writing
the documentation for it.
We're just getting started with that,
so it's a really good time to contribute.
Better if you're familiarized with it,
but you know,
it's a really good time to work on that. And we have a Telegram chat and a Discord chat
and a forum. The Telegram chat is the most popular one, for those interested.
The Discord, it's not that popular, but you know, I'm there always anyways. And the forum, same goes for that.
That's the least popular of the three.
But yeah, if you like a project that is kind of easygoing
and that a lot of people use in a professional sense
and those kinds of things, and you like Qt overall, then
consider contributing to QPrompt.
Is that all you want to mention?
Nothing else?
Yeah so far
I mean thanks to everyone involved
and who has helped over
the years
I think this is a good time to give tribute
to Victor Ortiz and Rafael, who helped
create Imaginary Teleprompter, and K-Van as well. He also contributed to Imaginary Teleprompter
early on, doing R&D. QPrompt, you know, I mostly did that myself, but I have a whole community of people backing it.
Not developers, but a lot of translators, documentation writers, active beta testers,
Videosmith, very, very helpful.
You've probably written more issues on GitHub than I ever have.
So yeah, that has been extremely helpful
for keeping the project organized
and directed in a way that it can be helpful
to other professional users.
So thank you.
Awesome.
So my main channel is Brody Robertson.
I do Linux videos there six-ish days a week.
I've got the gaming channel, Brody on Games.
I stream there twice a week.
Kingdom Hearts and Black Myth Wukong is probably still going.
It might be pretty close to finishing.
I have no idea what's going to replace it.
Check it out.
See what's over there.
I've got the React channel where I upload clips from the stream.
So, if you just want to see clips, go check that out. If you're watching the video version of this,
you can find the audio version on YouTube at Tech Over Tea, if you'd like to find the video version,
um, wait, if you'd like to find the, oh, I forgot my outro, what, let me try that again,
if you're watching the,
if you listen to the audio version of this,
you can find the video version on YouTube at Tech Over Tea.
Yeah.
Yeah.
Okay.
Now,
now we're back on track.
If you want to find the audio version,
it is available on basically every podcast platform.
There is an RSS feed and I've done this outro like 200 times and I still can't get it right any single time.
Um,
I'll give you the final word.
What do you want to say?
Anyone sign us off?
This is the choice
of Steins Gate.