Software at Scale - Software at Scale 45 - Q/A with Jon Skeet
Episode Date: April 20, 2022
Jon Skeet is a Staff Developer Platform Engineer at Google, working on Google Cloud Platform client libraries for .NET. He's best known for contributions to Stack Overflow as well as his book, C# in Depth. Additionally he is the primary maintainer of the Noda Time date/time library for .NET. You may also be interested in Jon Skeet Facts.
Apple Podcasts | Spotify | Google Podcasts
We discuss the intricacies of timezones, how to attempt to store time correctly, how storing UTC is not a silver bullet, asynchronous help on the internet, the implications of new tools like GitHub Copilot, remote work, Jon’s upcoming book on software diagnostics, and more.
Highlights
[01:00] - What exactly is a Developer Platform Engineer?
[05:00] - Why is date and time management so tricky?
[13:00] - How should I store my timestamps? We discuss reservation systems, leap seconds, timezone changes, and more.
[21:00] - StackOverflow, software development, and more.
[27:00] - Software diagnostics
[32:00] - The evolution of StackOverflow
[34:00] - Remote work for software developers
[41:00] - Github Copilot and the future of software development tools
[44:00] - What’s your most controversial programming opinion?
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.softwareatscale.dev
Transcript
Welcome to Software at Scale, a podcast where we discuss the technical stories behind large software applications.
I'm your host, Utsav Shah, and thank you for listening.
Hey, welcome to another episode of the Software at Scale podcast.
Joining me today is Jon Skeet, a staff developer platform engineer at Google,
working on the Google Cloud Platform client libraries for .NET.
He's also been called the Chuck Norris of programming,
according to a BBC article that I've seen,
because of his 11 million point reputation on Stack Overflow.
As I saw...
1.3 million.
1.3.
We're not up to 11 million.
That would be a while.
Yeah, I messed up the comments.
Yeah.
Thank you for joining me.
My pleasure.
Good to be here.
Yeah.
So maybe we can start with what do you do at Google?
Like what is a developer platform engineer?
So to sort of make it almost tautological, I am an engineer working on a developer platform.
So in particular, Google Cloud Platform.
At the moment, I'm actually also within the DevRel developer relations organization and
things change a bit. I might end up being reclassified as a straight software engineer,
as it were. But fundamentally, I work on the developer platform and developer relations engineers, developer platform engineers tend to have a little bit more contact with customers, whether that's some people being directly involved with specific customers, maybe providing prototypes or working with them on requirements and things.
I don't tend to do so much of that, but I'm active on the GitHub repositories where I'm providing client libraries.
I speak at developer conferences. I obviously chat to people at developer conferences.
I'm active on Stack Overflow, as we'll talk about later on.
But apart from all my personal interests on Stack Overflow, I do answer some Google Cloud Platform questions as well. So I'm a regular software
engineer in that I write software, but it's also a little bit more sort of customer facing than
at least some software engineers. There are plenty of software engineers who would not see themselves
particularly in a DPE role, but still talk to customers all the time as well.
So there's some fuzziness there, but it sort of gives a little bit more nuance
around the customer facing.
I try to really understand what makes for a good client library
for the APIs that we're supporting.
Yeah, and maybe this is going to be a slightly naive question, but I can see that Google's APIs are defined generally through protocol buffers, like a gRPC spec, similar to an OpenAPI spec.
And you can probably auto-generate client libraries based on that spec.
And I interned at Google like five years ago, so I think that's somewhat how it works. Why do we need to have
additional client development or like client libraries on top of that auto-generated code?
So largely we don't. Most of the client libraries that I'm responsible for are directly
auto-generated. Some have additional manual code to just make things that little bit easier. A protobuf can't say anything about, well, "this represents the content of a file on disk", for example. So where we've got the Cloud Vision API, often you will want to populate a protobuf with the bytes from a file on disk. So it makes sense for our libraries to have just a little helper method to load the content of the file, put it into what's called a ByteString in protobuf, and it just makes things that much simpler.
So a large part of it is auto-generated, and I work on the generator as well.
So if there are changes for generation, then I can work on those. And there's
also an underlying support library for, well, how do you create one of these clients? How do you
populate credentials? How do you decide (this is what I'm working on at the moment) whether to use Grpc.Core, the Google-provided gRPC transport which uses a native binary, or Grpc.Net.Client, the Microsoft (well, Microsoft and Google collaboration) package, as the basis for the messaging? And how you configure that, what that looks like in ASP.NET Core. That sort of thing makes all the difference between "okay, I shall just dump a library on you with no documentation, and good luck" and "yeah, I feel that this has been thought through".
We're really trying to help our customers be productive
as quickly as possible, save them from breaking changes.
So I spend quite a lot of my time not just in the .NET client libraries, but thinking about
versioning across Google Cloud Platform as a whole. So there's a lot involved beyond running
the generator, if you see what I mean. And there's day-to-day stuff in terms of reviewing changes,
pushing them out as new NuGet packages, et cetera.
So there's a lot of stuff that is automated,
but quite a lot is automated with some human intervention where necessary.
And judging where that intervention is necessary is a significant part of my
job, really.
So in a nutshell,
you're basically trying to improve the experience of any developer using C# to interact with GCP.
Yeah, absolutely.
Yes.
Maybe a slightly related question.
So you're the maintainer of Noda Time, which is a date and time library in C#.
That's right.
Why is managing date and time so hard?
What's your perspective?
Because humans suck, fundamentally. I would say if time had been designed by software engineers, it would be much simpler. Although, looking at the designs for almost anything that software engineers have designed, I'm not sure that's actually the case. But humans make life complicated, and we do this for language things, you know, character sets and words and language in general, and dates and times. They have a very rich cultural history. I can't think of anything else in software engineering which is quite so religiously based, if you see what I mean.
So the language, sorry, the calendar system that a user uses may well depend on their country,
but also potentially which religion they follow or the religion that is dominant in their country,
if any.
Most countries do use the Gregorian calendar,
at least for business purposes, but many will use other more religiously oriented calendars
for personal life and interactions.
And even the Gregorian calendar is named after Pope Gregory.
And we have things like GUIDs.
Who would have thought that GUIDs would be religiously based?
Well, they have a timestamp that is based on the cutover from the Julian to the Gregorian
calendar as declared by Pope Gregory.
So it's sort of, it weaves its way into everything.
So there's a lot of cultural baggage there, even in something as simple as, if we're using the Gregorian calendar, whether to format your date as year, month, day, like ISO 8601, or day, month, year, as most people in most countries do, or month, day, year, which the US and probably Canada does. I'm not sure where else
on the planet does. It seems like a very odd choice to me, but I'm sure listeners in the US
will think the rest of the world is a bit strange. So as simple as that. And then you've got things
like time zones, which are simultaneously really complicated and not that bad.
I suspect that most of the ways in which many software engineers go,
I can't cope with time zones, they're too complicated.
Actually, that can be managed, but there are probably other complexities
that most software engineers have never even considered,
which we can come on to in a bit. But then there are just simple things like calendar arithmetic being really weird. What is January the 30th plus one month? Well, it can't be February the 30th, unless it's 1712 in Sweden, or one year in Russia, because these things happen.
So, you know, do you say, oh, well, that goes to February 29th or February 28th, depending on whether or not it's a leap year,
or do you say it goes to March 1st?
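Those two conventions can be sketched in a few lines of Python; the function and policy names here are mine for illustration, not from Noda Time or any particular library:

```python
from datetime import date
import calendar

def add_months(d: date, months: int, policy: str = "clamp") -> date:
    """Add months to a date, making the end-of-month policy explicit."""
    total = d.month - 1 + months
    year, month = d.year + total // 12, total % 12 + 1
    last_day = calendar.monthrange(year, month)[1]
    if d.day <= last_day:
        return date(year, month, d.day)
    if policy == "clamp":   # snap back to the last valid day of the month
        return date(year, month, last_day)
    if policy == "next":    # overflow to the 1st of the following month
        total += 1
        return date(d.year + total // 12, total % 12 + 1, 1)
    raise ValueError(f"unknown policy: {policy}")

print(add_months(date(2023, 1, 30), 1))                 # 2023-02-28
print(add_months(date(2024, 1, 30), 1))                 # 2024-02-29 (leap year)
print(add_months(date(2023, 1, 30), 1, policy="next"))  # 2023-03-01
```

Both answers are defensible; the point is that the choice should be explicit rather than whatever a library happens to do by default.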
Weird questions like how can you tell whether someone can vote
or not in a particular election?
Well, you say, well, it's if they're 18 years old. Right, well, what exactly do you mean by that? If they were born on a leap day, and the election is on March the 1st, then you could say, well, potentially adding 18 years to their birth date gets to March the 1st, so maybe they're 18. You get different answers
depending on how you do arithmetic. And that ends up being very counterintuitive for software
engineers, particularly. And I would say for product owners and stakeholders, it requires
more significant precision than many people are used to providing, partly because everyone has
their own natural idea of what the answer would be to any given question. And there's a sort of
blank spot of, well, I don't understand that there could be any other answer. So, you know,
in a book that I've been working on recently, I have this example of a requirement for a product, saying "customers can return a product within three months". That sounds fairly straightforward. So the product manager gives that to the engineers and says, right, go and implement that. And they get back something that may not be what they anticipated at all. So just in that example, and this is off the top of my head,
if I had the text of the book to remind me everything that it needs to go through,
there might be more.
But it's like, what do you mean by within three months of purchase?
Is that three months of placing the order or the order shipping or the order being delivered?
What do you mean by three months in terms of this calendar arithmetic that we talked about before,
where if it's three months after November the 30th, does that mean I can return it on March the 1st or not?
Because three months after November 30th might be March 1st, or it might be February 28th or 29th,
depending on what your answer is to calendar arithmetic.
Which time zone is that in?
Are we okay to just use the Gregorian calendar?
There are just so many of these things.
So I tend to find that really good product managers
will understand that there are always extra questions to ask, and they'll really
welcome that and buy into it and say, okay, let's write all these different test cases and
conformance tests, whatever we want, acceptance tests, whatever we want to call them. And that's
great. Not so good product managers will say, no, you're making this too complicated. It's really simple,
which I tend to find is code for, I don't understand where the complexity is, therefore, it must not exist. So yeah, there's just all kinds of bits of complexity, all wrapped up into
date and time. And most of them are actually manageable if you can separate them out from each other. But because you tend to encounter them all in one go, there's this great temptation to just throw up your hands, particularly if you can get to an answer that's probably right most of the time.
For example, an answer that is fine so long as you're nowhere near a daylight saving time change. Then, yeah, that'll be fine: it's only two hours a year that it'll be wrong, and I can't be bothered to do it properly. Whereas it doesn't take that long if you can set aside some time to think: right, I'm going to really make sure I can separate out all the different concepts involved in date and time, and make sure at any point all the concepts within my application are clearly mapped to those date and time concepts. And then I can work with them properly.
That takes a fair amount of time. But you end up with a much clearer code base,
and one that you can test and work with more easily. So that was a very long answer to
why is it so complicated, but it just is.
I think it exactly explains why it's so complicated. There's so many nuances. Maybe the one takeaway
for software engineers, should we always be storing, or how should we be storing timestamps?
And I know you have a blog post on this, like maybe just storing them a certain way is not enough. Yeah, what's your take on that?
So the blog post you're referring to is around storing date and time always as UTC, and that being regarded as a silver bullet even by very experienced engineers. There's this attitude of just store it in UTC, and then you can deal with any time zone stuff later on when you display the value.
And in many cases, that's fine, but you need to know about the cases where it's not fine.
So anything which is a machine generated timestamp, so something like a commit time in a
database or a log time, something like that, that's fine to store as UTC.
And usually that's the right answer.
You might want to store it as a date/time with an offset if it matters where the log entry was generated, for example.
But UTC is fine.
You won't lose any data.
But let me give you a different example. Suppose we were going to meet, and we were planning
to meet a long time in the future, say in 2025. And for some reason, I always give an example of
December the 1st in Paris in 2025. So we say we're going to meet at nine o'clock in the morning, Paris, 2025, December the 1st. And that's fine. So we can now predict
what UTC instant in time that nine o'clock in the morning in Paris, December the 1st,
2025 would be, but we might be wrong. So we would probably guess that it would be at UTC plus one. So that Paris will be
one hour ahead of UTC at that point in time. So we'd say it's 8am UTC, store that in the database.
And if nothing changes between now and then, all is fine. You know, as in if the government
doesn't decide to change the rules. However, the European
Union has been looking at, with things like the pandemic and war in Ukraine getting in the way,
potentially getting rid of daylight saving time and probably each country moving to
sort of permanent daylight saving time. So what they observe in the summer at the moment
being their permanent UTC offset.
So at that point, Paris would be at UTC plus two
the whole year round.
So if you then say, okay, well, in my database,
it says that we're meeting at 8 a.m. UTC
and you display that to the user,
you tell them that you're now meeting at 10 o'clock in the morning.
That's not what the user said.
The user said we were meeting at 9 o'clock.
So there are lots of ways around this.
My favorite sort of easy-to-remember thing is
if you always remember everything that the user told you,
you can't go wrong, or at least you can only go wrong in particularly extreme circumstances where you would have asked for more
information so there are bizarre circumstances where you know a particular date and time wasn't
ambiguous but because the daylight saving time change rules have changed suddenly you know you're you're actually meeting at half past one in the morning.
It wasn't going to be the day that the clocks went back, but it's become the day that the clocks went back.
So 1.30 in the morning occurs twice. And because you weren't aware of that at the time, you didn't think to ask, is it the first or the second time that it's 1.30 in the morning?
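That "which of the two occurrences do you mean?" question is exactly what the `fold` attribute in Python's datetime/zoneinfo models. A small sketch, using a real fall-back transition (Paris, 31 October 2021, when 02:30 happened twice) rather than the hypothetical rule change:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

paris = ZoneInfo("Europe/Paris")
# Clocks in Paris went back on 2021-10-31: 03:00 CEST -> 02:00 CET,
# so the local time 02:30 happened twice. `fold` picks the occurrence.
first = datetime(2021, 10, 31, 2, 30, tzinfo=paris, fold=0)   # the CEST one
second = datetime(2021, 10, 31, 2, 30, tzinfo=paris, fold=1)  # the CET one

print(first.utcoffset())                 # 2:00:00
print(second.utcoffset())                # 1:00:00
print(first.astimezone(timezone.utc))    # 2021-10-31 00:30:00+00:00
print(second.astimezone(timezone.utc))   # 2021-10-31 01:30:00+00:00
```

The same wall-clock string maps to two different instants an hour apart, which is why "just store the local time" is also not sufficient on its own for ambiguous cases.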
So that's an edge case on an edge case on an edge case. But the more
general, well, okay, if it becomes ambiguous, we can take a decision one way or another.
At least we know what the user told us, which is nine o'clock in the morning, December the 1st,
2025 in Paris. And with that information, we can still store the UTC instant that we think it's going to be and use that to, for example, sort things, give people an in-order agenda.
And they may be going across multiple time zones.
So it's really helpful to have a timestamp, a UTC timestamp for things, but you can also recompute things
if you know that the time zone rules have changed
because of the European Union
and the French government passing a law
that changes those rules.
If you have just stored the UTC timestamp,
even if you, well, if you know that it's in Paris,
then you could say, okay, well,
what must the user have said for me to have thought
it was eight o'clock using the old rules and then reinterpret that with the new rules. But if you're
going to do that, it's much easier just to store what they said originally. But then you do probably
also want to say when you store the UTC timestamp for ease of processing things,
well, when did I calculate this?
As in which version of the time zone rules did I calculate this against?
So that you can always recalculate it later with newer rules.
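One way to sketch that storage scheme, keeping what the user said as the source of truth, plus the computed UTC instant and the rules version it was computed against. The field names, and passing the tzdb version in explicitly, are illustrative choices of mine, not a prescribed schema:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

@dataclass
class FutureEvent:
    local_iso: str        # exactly what the user said, e.g. "2025-12-01T09:00"
    tz_id: str            # IANA zone, e.g. "Europe/Paris"
    utc_cached: datetime  # derived value, safe to throw away and recompute
    tzdb_version: str     # which rules produced utc_cached, e.g. "2022a"

def plan(local_iso: str, tz_id: str, tzdb_version: str) -> FutureEvent:
    local = datetime.fromisoformat(local_iso).replace(tzinfo=ZoneInfo(tz_id))
    return FutureEvent(local_iso, tz_id, local.astimezone(timezone.utc), tzdb_version)

def recompute(ev: FutureEvent, new_tzdb_version: str) -> FutureEvent:
    # If the time zone rules change, rebuild the cached UTC instant from
    # the authoritative user-entered local time, never the other way round.
    return plan(ev.local_iso, ev.tz_id, new_tzdb_version)

ev = plan("2025-12-01T09:00", "Europe/Paris", "2022a")
print(ev.utc_cached)  # 2025-12-01 08:00:00+00:00 under the current rules
```

The cached UTC instant is still there for sorting and cross-zone agendas; it is just treated as a cache, not as the record.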
And this kind of stuff is tricky.
If you're building any kind of reservation system, like a train ticket system or a restaurant reservation system,
or even an airline ticketing system,
this is the kind of stuff that will bite you if you don't think about this.
As you mentioned, maybe a machine-generated time is fine,
but when a user expects something to happen at a certain time,
time zone rules can change.
Exactly.
Partly because machine-generated times are usually recording something that has happened.
And if something happened at an instant in time, it doesn't matter how you later view
that instant in time, the instant isn't changing.
Whereas the thing that isn't changing for a reservation is probably at least the local date and time at the place where it is.
Now, there's the further complexity of, well, what do you store?
Do you store the location or the time zone?
And most of the time, either will be fine.
But suppose that we're recording this in the middle of the war in Ukraine.
Suppose Ukraine splits into multiple countries with different time zones, where at the moment,
I believe it's just one time zone.
If you recorded just the time zone, you wouldn't know which of those new time zones was actually
going to apply.
Whereas if you store, this is the cafe that we're meeting at with an address, then certainly if you
have appropriately rich sources of data, you can find out later, right, well, what time zone is
that cafe in? So that's one extra level of complexity. And it seems that time is one of these areas that just keeps giving in terms of
you never get to the bottom of the rabbit hole.
There's always more you can dig and don't even get me started on things like
leap seconds and relativity and things that I really don't understand.
Leap seconds are, well, one of those topics that I understand for about a minute at a time, and then I need to go and have a lie down. Fortunately, I think most software probably doesn't need to care about leap seconds, and it's a really good job.
Yeah, I remember Google had to do so much work to work around the leap second.
They just slowed down all of their systems.
Yeah.
Google smears the leap seconds, which I believe there are at least some bodies that think
that's not a good idea.
And there are different ways of smearing as well.
There are lots of different ways of dealing with this.
But smearing is certainly useful
in terms of software doesn't need to worry about it, but it does mean that a second isn't
always exactly a second, which sounds bizarre.
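Arithmetically, a linear smear looks something like the sketch below. Google has described smearing the extra second over a 24-hour window; this toy function is my illustration of the idea, not Google's implementation:

```python
SMEAR_WINDOW = 86400.0  # spread the extra second over 24 hours

def smear_offset(seconds_into_window: float) -> float:
    """Fraction of the leap second a smeared clock has absorbed so far."""
    return seconds_into_window / SMEAR_WINDOW  # linear ramp from 0 to 1

# During the smear, each "second" on the smeared clock is slightly longer
# than an SI second (by 1/86400), which is why a second isn't exactly a second.
print(smear_offset(0.0))      # 0.0 at the start of the window
print(smear_offset(43200.0))  # 0.5 at the midpoint
print(smear_offset(86400.0))  # 1.0 by the end of the window
```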
Yeah.
I mean, more broadly, is software development just tricky, right?
And this comes back to the Stack Overflow stuff.
The reason why there are so many questions
on Stack Overflow every day,
of course, there are so many new programmers,
but just software development is just tricky.
What is your perspective on the whole field?
I think software development is difficult
in ways that people don't expect it to be. Or at least the ways in which you become
successful aren't necessarily the ways that people might expect, at least in my experience. So
there's a sort of expectation, I think, that you have to be brilliant at computer science and that
if you know all the algorithms, you'll be brilliant and everything will be fantastic for you.
And there may well be plenty of people for whom knowing algorithms is really the core of their
work. And for them, anything else is second place at best. In my experience, it's far more about communicating.
So trying to understand someone else's requirements, someone else's perspective, someone else's preferences for how they do things, being able to explain your own preferences and sort of analyze yourself, be reflective in your practice. So if you write
some code for a project, and it goes really well, see if you can remember what aspects went well.
And if things go badly, how do you avoid making the same mistakes again? There's also an aspect
of humility in software engineering that I think is really important.
And there's this odd sort of tension.
To some extent, you want software engineers to be really bold and say, I'm going to change the world.
But you also want them to say, OK, if there's a problem, the chances are very, very, very high that the bug is in my code, not in the compiler, et cetera.
Just this afternoon, I was looking at a Stack Overflow question, which said, why am I getting
unnecessary errors?
I'm getting errors and I shouldn't be getting any.
And we looked at their code, which they provided as a screenshot (please don't do that). And they'd got a class declaration with a bunch of statements just straight in the class declaration. They weren't unnecessary errors. It was that the programmer didn't understand, in this case, C#, as well as they potentially thought they did.
And having the mindset of, when something goes wrong, I will first assume it's my fault, makes all the difference in terms of being able to
analyze things. Because if you always assume, no, my code is perfect, it must be someone else's
fault, you will look in the wrong place most of the time. I'm a reasonable software engineer,
I'm pretty good. But 99% of the time, if something is behaving incorrectly, that's because I have a bug, not someone else's libraries, the platform, or the compiler. It's my fault. So if 99 percent of the time for me that's the case, I suspect it's going to be, you know, 90-plus percent for pretty much everyone. So look into your code, assume the mistake is with you.
And that's difficult in a way that no one comes into software engineering saying,
do you know what my core strength is?
I'm humble.
I don't think I've ever heard anyone say that in an interview or whatever,
but I think it's really important.
I'm hoping to start writing a book
quite soon on diagnostics, which is very closely related to all of this. So when something goes wrong, what's your process? Before you reach for Stack Overflow, what have you done to investigate what's wrong? Have you checked: are you comfortable with the language, so it's not likely to be a language problem?
Might the problem be how you're using libraries?
How do you go about sort of doing the scientific method on that
and analysing it and applying divide and conquer?
So if you've got a problem in a huge system,
clearly you're not going to be able to post the huge system on Stack Overflow.
So how do you go about isolating it to as small a bit of code as you can?
And that's really, really, really important for particularly for junior software engineers.
But I think it's sort of how senior engineers have got to be senior is they've had to go through this a lot.
So that's an area of skill that I think is kind of missing from the industry.
Obviously, many people do know how to do it, but we don't seem to have a good way of teaching people to do it.
And I'm hoping to be able to move the needle very slightly on that front.
Yeah. Maybe, yeah, on that note, how do you know that you're missing the right tools, right? Like,
is it just that somebody, you have to work with someone senior who will tell you, you know, you should just learn how to use a debugger? What do you think?
Well, I think we're, I think we're missing the book that I'm going to write.
And in fact, what I'm hoping to do,
so to reveal my grand plan, it's not really a secret, I'm hoping to write a book that is C#-based, but also make this into a series and say, anyone else can take what I've written
and apply it to Java: keep however much of my text you want, and replace however much of the text you need to make it Java-specific. Because if we're particularly aiming at relatively new developers, it's unrealistic to say
to someone, okay, well, you don't know C# at all, and you're quite new to Python, for example. So read this book that tells you how to do diagnostics in C# and apply it to Python. That's just, that's too much to ask of folks.
So what I want is there to be a dedicated book
for every popular programming language
and potentially different platforms within that.
So I think it would be entirely reasonable
for there to be one sort of general Java one
and another Kotlin one, for example,
and then an Android one, or, for C#, a Unity one,
because they're all going to have
slightly different diagnostic processes.
For me, a console app is great,
but if you're trying to diagnose something
that's Android specific, that's probably not going to help you.
So I really think that books and if people want to then create videos about all this,
that's brilliant because people learn in different ways.
I think we are missing ways of getting there faster.
At the moment, if you're lucky, you are in a company with a senior engineer
who will take you through the process and help you and be patient with you, et cetera. Maybe if
you're lucky, you post on Stack Overflow and people ask you, I spend most of my time on Stack
Overflow these days, adding comments to questions saying, okay, please, could you try to take this huge mountain
of code you've got and reduce it to a complete example? Because the problem you've got is dealing
with this string that you got from Azure, for example, and how you got the string probably
isn't important. You've posted what the string is. You seem to be comfortable with the content
of that string. You're just trying to process it. So we don't need to see the Azure bit. We just need to see here's the string and here's how I'm
trying to process it and what went wrong. And I'm hoping that comments like that, some people
take my suggestions very, very well. Other people, I don't manage to get my points across
as well as I would like to, and they'll take umbrage and would prefer to delete the question rather than provide the extra information or cut things down, and may not have the experience to cut it down. But I think some people will be learning through that process, obviously not just through me but through many people asking for clarifications on Stack Overflow. And just through the process of writing Stack Overflow questions time and time again, I'm sure you will get better at it over time. So some people will learn through online resources like, but not solely, Stack Overflow. Some people will learn through in-person mentoring, pairing,
just bugging the senior engineer on the team and saying,
I can't understand what's wrong here.
Okay, well, let's go through it.
I'll show you how I would do it.
And a lot of this mentoring may not be very deliberate.
This is mentoring on diagnostics.
It's just, as you watch someone else
trying to diagnose a problem,
if you're paying attention, you will learn techniques as you go along.
And chances are most people will learn through a variety of these things.
But it feels to me like somewhere that we could be more deliberate, and just absolutely blatant in saying: this book isn't about trying to teach you how to write C#. It's not trying to teach you the C# language. It's not trying to teach you how to write better C#. It's trying to
say, you've got a problem. And maybe that's, I don't know what to do next, but more often than
not, it's, I've got some code. It doesn't work properly. What's your next step? And that's the
real focus of the book. I really have hopes that this could be transformative. Now, going back to what I was saying before about being humble, we'll see; reality will probably bite and it won't do quite as much as I would like it to. But I would really, really like it to transform, to just make those first few
years of software engineering a lot easier. Many of us have the battle scars of spending hours and
days and weeks debugging something only to find a stupid mistake earlier on. Well, we can help other
people to avoid that kind of problem. I have a ton of follow-up questions from that,
but like to start with,
what has your experience been like?
So you mentioned that now you spend a lot of time commenting on Stack
Overflow posts,
right?
How has Stack Overflow evolved from when you first started contributing in, like, 2008?
Like, are the problems roughly similar that people are facing, or is the experience just kind of cyclical, or has there been a shift in what you've seen?
It's very hard to tell. It's hard to tell whether my expectations have changed or whether the quality of questions has changed. And if the quality of questions has changed, is that because individuals are themselves changing, or there is a different
population on Stack Overflow? It used to be that I would answer 20 questions in a day. And these
days, it's very rare that I would find 20 questions that I want to answer, but I'll definitely find 20 questions which could be improved. And then maybe someone else can answer them, or I can answer them, or in the process of improving the question, the questioner can answer it for themselves. And that's very often the case: just by going through the diagnostic process to come up with a good question, you suddenly say, oh, I see what I'd done wrong; yes, I was passing in the wrong file name, or whatever it was. So I don't find as many questions that I want to answer. It's possible that my standards, my bar for how good a question is, have gone up.
Maybe it's gone up too far.
But I think we can objectively say I do not answer as many questions as I used to.
The question of why that is, I think there are lots of contributing factors.
And it's definitely not that I have gone away from Stack Overflow as a platform.
I still think it's an incredibly useful platform
and I still spend a lot of time on there every day.
Yeah, and the second question that I had
was around that whole idea of osmosis, right?
Like learning from a senior engineer,
just seeing how they debug things
or how they diagnose what the problem is. Clearly, a couple of years ago with the pandemic, a lot of the world had to just start doing more remote development; it was already on the rise. And you've been helping people, you could say, remotely and asynchronously for a long time. Do you think it is still a good idea for most developers
to be fine to be completely remote from day one?
I think all developers should expect that some of their learning,
some of their development will be via remote help.
Some of their development should certainly be on their own.
People can improve by practice, by writing, by reading, all kinds of things, without necessarily any interaction with actual individuals. It's really hard to judge, though. I believe that there have been studies showing that during the course of the pandemic, senior engineers tended to become more productive and junior engineers tended to become less productive.
Not individuals becoming less productive, but a junior engineer who started in mid-2020 would be less productive than a junior engineer who started in 2018.
And the reasons for those are two sides of the same thing: the senior engineers aren't spending time mentoring, and the junior engineers are missing out on that mentoring. In the long run the senior engineers lose out due to this as well, because mentoring is also a good way of shoring up your own understanding of different concepts.
If you can explain it to someone else, you discover areas that you're not so sure on, and you can also gain other perspectives as well.
And I do want to sort of qualify.
We've been talking a lot about senior and junior engineers,
and I'm regarding that as shorthand for people who've spent a long time
in the industry or people who haven't spent a long time in the industry.
And there will be some people who are in senior roles very,
very quickly, and some people who are in junior roles
after a very long time, for various different reasons. What I want to make clear is I don't
want to be associated with any kind of snobbiness of saying, oh, you're only a junior engineer,
therefore not worth my time. That's completely not the case. There's no value judgment when I'm talking senior or junior.
Every individual matters and has an interesting perspective. Someone who
has just started programming and weighs in on a discussion around testing practices may
not have quite as useful opinions as people who've been doing
testing for years and years and years, but we still shouldn't be dismissive of what they say.
Anyway, back to remote working, the pandemic and things. I'd actually been working primarily
remotely for a few years before the pandemic struck, and I have personally found it very productive.
Partly, I have the privilege of space: I'm currently sitting in my shed, and "shed" is a bit of a euphemism.
It's an outbuilding at the bottom of our garden,
but I'm sitting here in a comfortable space.
It's large enough for me.
It would be a small bedroom, but it's large enough.
It's very comfortable, nice heating, nice sound system.
It's set up for work.
So front and center is a nice monitor, et cetera.
A lot of people in the pandemic have not been in that sort of situation.
If you are just starting out and you're renting a one-bed flat or apartment, and all you've got is a bedroom and a kitchen and a bathroom, with no other spaces,
well, where are you going to do remote working?
That puts you at a massive disadvantage. So there are definitely downsides in terms of effectively accessibility of work
and of technical work when it comes to remote working,
as well as missing out on interaction.
Now, I hope that a lot of good engineers, whether senior or junior,
will recognise that they need interaction and always be
ready to jump on a call. And I'm hoping next week to do, for the first time in at least a long time,
some remote pair programming with a colleague, using what I think is called Visual Studio Live Share.
It's been a while since I've used it; I know the feature's there, I just haven't used it for a while. We'll be looking at how to take some tests that she's got and
parameterizing them, and hopefully that will be instructive for both of us, and certainly much
more productive than me effectively trying to direct from a call while she types on the keyboard, with me providing all the actual "right, now move your mouse; no, three lines down; no, one line up", et cetera.
I think doing the remote sharing thing is going to be much more productive than that.
Hopefully she can be in the driving seat, as it were, for 90% of
the time. But if I do need to just drop in and say, "No, this is what I mean, and here's
some code", it will be much more productive that way. So I think it's about recognizing the need for
tools like that and for being available, you know, we're all much more flexible when we're working remotely. I can go and pick my kids up,
go shopping, whatever it is, get my work done, but in a way that the rest of my life beyond work
can fit in with easily. That's great. And I'm not spending three hours commuting every day.
That's great as well. But we do need to make time to be with our
colleagues whether that's over chat or over video calls or with remote pairing whatever works but I
think it's again a matter of reflecting on it and being deliberate, rather than just assuming,
"Well, okay, we're going remote; everyone can just kind of work it out for themselves."
Have you made sure that all of your colleagues
have the appropriate equipment
and have somewhere that they can go and work?
And if they don't have space in their home,
are there offices that you can hire near their homes,
hire by the day?
Yeah, and in general, you can't just drop people into remote work and expect everything to be the same. There is a different set of
tools that you need, and you certainly have to be really intentional about it, is what I've seen.
Yeah. So on the whole idea of tools: there are tools that make remote development easier.
There's also more specific programming tools.
There's things like GitHub Copilot and GPT-3 and stuff.
What do you think of these things?
Do you think they're helping or they're making enough of a difference?
Or just in general, what's your perspective?
So I haven't used Copilot.
I've sort of seen videos of it, but I think I've seen
some of the results. So Visual Studio now will suggest to you lines of code every so often.
And I'm in two minds about it. I wonder whether it might lead people into just following what
Visual Studio suggests.
I'm not saying that anyone's going to be just hitting tab
or whatever it is that fills in the suggestion
and trying to do whole programs that way.
But if you are constantly being given a suggestion
and thinking, "Oh, maybe that's the right way to do it", then even if half
the time you don't take those suggestions, it takes you a little bit away from
working out what you want to do and then how to do it, which I tend to think is a really important
part of programming. So we'll see. It's very hard to predict what's actually going to become productive.
I love IntelliSense.
I'm not saying I would struggle without it,
but it definitely makes life easier being able to see easily what's available.
But going from that to an actual suggestion,
yeah, sometimes it's fine, sometimes less so.
I'm a fast typist,
so the bottleneck in coding for me
is never how quickly I can write the code,
as in physically type it.
It's the thinking process that goes on behind the typing.
And Visual Studio suggesting things
might be a shortcut in that sense,
but it might be a shortcut to the wrong place.
And I don't think it's smart enough yet
to really read your mind
as to what the product requirements are
and make a decision based on that.
That said, sometimes it is absolutely suggesting
completely the right thing. And that's
lovely. We'll see. I'm kind of glad that we're experimenting with it. I hope people don't get
too sucked into it and expect too much of it. As you said before about remote working, it's about
being intentional, working out when it's appropriate to follow the
suggestion and when it's not. As a wrap up, there's a question that you asked on Stack Overflow
a while back, what's your most controversial opinion on programming? I'm going to re-ask you that.
So beyond "storing UTC is not a silver bullet", I tend to have somewhat controversial opinions. There are some purely technical things, like I wish that in C# classes were sealed by default. I wish that floating point literals had no default type, so that you had to specify a suffix of D for double,
F for float, or M for decimal, for example. A lot of the time, I think people use
double because that's the default for a literal: if you put 1.5, that means 1.5 is a double.
If we were more agnostic about that, then maybe people would use decimal more when they
should use decimal. So there are simple, language-nerdy things like that.
Other controversial opinions: some people say that we shouldn't put any comments in code; that every time you put a comment,
other than documentation comments, that's effectively a failure and you should rewrite your code to be clearer.
And while I can see the reasoning behind that, I tend to find that comments can provide a narrative
for anyone reading the code for where things are unexpected. And they shouldn't say what the code
is doing because that should be clear.
They should explain why it's doing it, if it's unusual, or help with working out the impact:
yes, I can see what each of those three lines does, but what's the result of that,
and why do we want that to be the result?
That's where a comment is useful.
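The distinction Jon draws, comments that explain why rather than what, might look like this small Python sketch. The function and its numbers are invented for illustration; they are not from the episode:

```python
def retry_delays(attempts: int) -> list[float]:
    """Return the delay (in seconds) to wait before each retry attempt."""
    delays = []
    for attempt in range(attempts):
        # Why, not what: exponential backoff stops clients from hammering a
        # struggling server, and the 30-second cap keeps the worst-case wait
        # short enough that users don't assume the program has hung.
        # (A "what" comment here would merely restate the next line.)
        delays.append(min(float(2 ** attempt), 30.0))
    return delays

print(retry_delays(6))  # [1.0, 2.0, 4.0, 8.0, 16.0, 30.0]
```

The code itself says what happens; the comment carries the narrative a reader could not recover from the three tokens `min`, `2 ** attempt`, and `30.0` alone.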
I tend to be not very dogmatic when it comes to testing.
I don't know why testing in particular seems to attract a lot of dogma: that if you're not writing tests in this particular
arrange-act-assert way,
then you must be doing everything wrong, or if you don't write tests
for all code, then it's terrible. I do write unit tests where appropriate, or rather where I can.
There are some code bases that I work on, usually personal projects. The professional projects that I work on
all do have tests, but there are personal projects I work on that are primarily integration,
and when there are issues, they tend to be issues that wouldn't be found by tests anyway.
So if people want to look down on me for not embracing test-driven development in every possible scenario, then I guess that could be controversial.
I don't know.
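For readers unfamiliar with the "arrange-act-assert" style mentioned above: it structures each test into three labelled phases. A minimal Python sketch, with a toy class invented purely for this illustration:

```python
class ShoppingCart:
    """Toy class under test, invented for this illustration."""

    def __init__(self):
        self._items = []

    def add(self, name: str, price: float) -> None:
        self._items.append((name, price))

    def total(self) -> float:
        return sum(price for _, price in self._items)


def test_total_sums_item_prices():
    # Arrange: set up the object and data the test needs.
    cart = ShoppingCart()
    cart.add("tea", 2.50)
    cart.add("biscuits", 1.25)

    # Act: perform the single behaviour being tested.
    result = cart.total()

    # Assert: check the outcome.
    assert result == 3.75


test_total_sums_item_prices()
print("ok")
```

The pattern is a useful default, which is Jon's point: useful, but not something to enforce dogmatically on every test in every code base.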
I think it's less controversial now than it used to be that I'm a firm believer in statically typed languages.
I think type systems are great. Now, obviously, you've got
some type systems that are really, really powerful. I don't know F#
particularly well; I can read bits of it, but I certainly don't regard myself as an
F# developer. But I'm aware that its type system is more powerful than, for example, the C# type system.
But I definitely appreciate what C-sharp does give me.
And maybe I should give dynamic languages, or dynamically typed languages, more of a go.
But it feels like the industry has been recognizing, "Hey, this static typing thing
is really useful, therefore we will add types to JavaScript", and you end up with TypeScript,
et cetera. So yeah, I'm a big fan of static typing, and that used to be controversial.
There used to be a certain amount of,
oh, well, if you're stuck in a statically typed language,
you belong to the 1960s or whatever.
I think that's kind of gone away.
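The benefit Jon describes, catching a whole class of mistakes before the program ever runs, can be sketched even in Python with optional type hints, which a static checker such as mypy verifies. The `Invoice` class and `total_pence` function are invented for this illustration:

```python
from dataclasses import dataclass


@dataclass
class Invoice:
    customer: str
    pence: int  # money as integer pence, sidestepping binary-float surprises


def total_pence(invoices: list[Invoice]) -> int:
    """The annotations document the contract and let a checker enforce it."""
    return sum(inv.pence for inv in invoices)


bills = [Invoice("Ada", 1250), Invoice("Grace", 799)]
print(total_pence(bills))  # 2049

# A static checker such as mypy rejects the call below before the program
# ever runs, which is exactly the benefit being described:
#   total_pence("not a list")  # error: incompatible argument type "str"
```

TypeScript plays the same role for JavaScript, as mentioned above: the types are optional annotations checked ahead of time rather than a change to the runtime.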
Yeah, I don't think I'm massively controversial other than that.
Just kind of wanting to keep my head down and help people become better developers in little ways, really.
And it looks like you've been passionate about that for a really
long time, and that's really great. Making other software engineers more productive is
kind of my job as well. Thank you so much for being a guest on the show.
My pleasure.