Algorithms + Data Structures = Programs - Episode 166: Top 20 GPU SDKs, Libraries and Tools!

Episode Date: January 26, 2024

In this episode, Conor quizzes Bryce about the top 20 GPU SDKs, libraries and tools.

Link to Episode 166 on Website
Discuss this episode, leave a comment, or ask a question (on GitHub)
Twitter
ADSP: The Podcast
Conor Hoekstra
Bryce Adelstein Lelbach

Show Notes
Date Recorded: 2024-01-24
Date Released: 2024-01-26
ChatGPT Question Link
NVIDIA CUDA
OpenGL
Vulkan
DirectX
OpenCL
WebGL
AMD ROCm
NVIDIA cuDNN
NVIDIA TensorRT
PyCUDA
TensorFlow
PyTorch
OpenAI GPT
Microsoft CNTK
Theano
AMD FirePro
NVIDIA PhysX
Apple Metal
Intel oneAPI
NVIDIA RAPIDS.ai
CUDA vs ROCm: The Ongoing Battle for GPU Computing Supremacy

Intro Song Info
Miss You by Sarah Jansen https://soundcloud.com/sarahjansenmusic
Creative Commons — Attribution 3.0 Unported — CC BY 3.0
Free Download / Stream: http://bit.ly/l-miss-you
Music promoted by Audio Library https://youtu.be/iYYxnasvfx8

Transcript
Starting point is 00:00:00 Okay, but buddy, but what trained ChatGPT? The compute that trained it. NVIDIA GPUs. Yeah, but that doesn't affect the answers. I'm telling you, that large language model almost certainly knows how it was trained. Do you know the address? I can't get the address right now because Connor and I are in the middle of recording. Welcome to ADSP: The Podcast, episode 166, recorded on January 24, 2024. My name is Connor, and today I quiz my co-host Bryce about what the top 20 GPU SDKs, libraries, and tools are. Yeah, yeah, he said that she must be a very, very smart dog. Yep. Oh, I have some bad news. My little thing that my microphones go in.
Starting point is 00:00:55 Look at this. Come on, Rode. What's this quality control? This is starting to fall apart. It's not acceptable. Yeah. Oh, I want to... Speaking of quality control, speaking of quality control.
Starting point is 00:01:06 This is potentially relevant content for the episode. So you know how that door plug fell off of that Alaska Airlines flight a couple weeks ago? Oh, yeah, I heard about that. Yeah. So there are some reports out now that indicate what the root of the problem was. And the root of the problem seems to be that the four bolts that were supposed to hold the door in place were just not installed at all. And because I'm very tuned into the aviation world, I've been reading some things about this. And the process failure that led to this issue is actually kind of
Starting point is 00:01:54 interesting. So they thought that there was something defective with the door plug, because there are a lot of issues from the people who provided the door plug. And so when they were doing their QA on the plane before they shipped it out of the factory, they noted that, hey, we got to investigate this door plug. We think the door plug's a problem. So they made a note in the record to go check the door plug. And then they went and checked the door plug. And there's a note on the record that said, all right, we've taken the door out and we've investigated it. And now we're deciding whether or not we're going to replace the door or not. And then the note says, and if we do replace the door with a new one, if we get rid of this one and put in a new one, we will put a note in the system for the door to be rechecked, to do QA on the new door again.
Starting point is 00:02:52 And see, that's where the mistake was, because they had already taken the door off of the plane. And so because they'd taken the door off of the plane, they needed to redo the QA anyways, regardless of whether they replaced the door. They needed to check that the door was properly reinstalled. But they didn't end up replacing the door. And because they didn't end up replacing the door, they didn't mark that the door needed to be rechecked by QA. And because of that, the door wasn't rechecked by QA, and the plane shipped out of the factory without any bolts to hold the door in place. And, you know, my partner being a process engineer, I now have some interest in the field of process and quality control and what leads to the root cause of failures. And I just thought this was quite interesting. You know, I'd say that our listeners might not find it interesting, but, you know, we've
Starting point is 00:03:48 recently learned that I don't have my finger on the pulse of what our listeners do and do not find interesting. Well, yeah, hopefully the planes that I get on don't have a door taken off and then put back on. If it gets taken off, let's hope it gets replaced. Well, I mean, no, I think there's some deep lesson to learn here. Like, you know, if you fix some problem, or if you decide there wasn't a problem at all, like if you fix some problem in your code, you need to do something to test that the fix is good, and you also need to add some sort of regression test
Starting point is 00:04:25 to make sure that you don't run into the problem again. I could imagine ways in which there is a lesson from this that could be learned by the software industry. Yeah, don't work on planes. All right, here's my episode topic for the day. We were recently chatting. Was it a week ago? Was it pre- or post-recording? I can't remember. About the landscape of SDKs, libraries, and languages that are used to target GPUs. Specifically, in our case, we care about NVIDIA GPUs. And I showed you some diagram that listed
Starting point is 00:05:06 off a few of these things. And I think it was just today I decided to ask ChatGPT, to be specific, ChatGPT-4, which I signed up for in the last couple weeks for a specific task, which it was not able to complete successfully, but now I've paid for a month. So we may or may not keep it. And the question that I asked was... So now we're going to do a little Family Feud, a little quiz time. But I think if you can get five out of the top 20 answers that ChatGPT gave, I'll be impressed. And don't worry. I think you're going to do well. But, you know.
Starting point is 00:05:42 Wait, wait, wait. Hang on, hang on. I just need to wait long enough to log into ChatGPT. Don't! Be honest, be honest, Bryce. I can see the light changing on your face. I know you're navigating to different websites. Do not. I want to see your hands above your head for the next five minutes. You can rest them on your head if you want. So the question I asked was, hey, hey. Hang on. I got to get back to seeing your beautiful face.
Starting point is 00:06:09 I want hands above your head for the next five minutes while we play this little game. And listener, you can play along as well. And then after we're done this little top 20, you know, trying to get five out of 20, I'm curious to see how many. Okay, Google, open ChatGPT. This question comes in two parts, because my first question, ChatGPT-4 was going on ad nauseam in detail, and I was like... I had to correct it. Anyway, I'll give you the beginning of ChatGPT's answer, which is before the list starts. So
Starting point is 00:06:38 what are the main SDKs, libraries, and ways developers utilize GPUs? Then it started spitting out a bunch of garbage. And I said, stop, stop, stop. And I said, don't give me a description, just list me 20. And then ChatGPT responded, sure, here is a list of 20 SDKs, libraries, and tools for GPU programming. And then it goes one through 20. So the question is, how many can Bryce get? And I will give you a hint. The first one is extremely obvious. It might be so obvious that you might not guess it. But it's like the most obvious one when you ask, how do you target a GPU? I'm going to go with OpenGL. All right. So actually, wait, wait, wait. Let me get a little notepad out. So your first answer was
Starting point is 00:07:26 OpenGL. That is the number five answer on the board. So you're one for one. Yeah. What was your second one? CUDA. CUDA is the number one answer on the board. So you're two for two. Vulkan. Vulkan is the number three answer on the board. You're three for three. You're crushing it. So you're definitely... Did I say OpenCL? OpenCL is number two on the board. Four for four. Not a single miss yet.
Starting point is 00:07:55 So I've got number one, number two, number three. And number five. And number five. Which one was number five? So the order was CUDA, OpenCL, Vulkan, and then we skipped number four, and then OpenGL. Gosh, I don't know what could be number four. Well, see, so the way that you program GPUs on... This is not my answer, but I'm going to explain why I'm not going to give it as an answer. I'm not going to say Metal, which is the API for programming GPUs on
Starting point is 00:08:26 Apple's platform. The reason I'm not going to say it is because Metal is newer, and so I think it's going to appear later on the list. Oh, wait, hang on. DirectX. Number four is DirectX. Crushing it. So not a single miss yet, folks. And I was going to say, well, go till you get like three or five misses. But at this rate, you know, you're going to run out of things to guess because you're crushing it so much. Yeah. And I think I'm going to be generous. Metal is number 18 on the list.
Starting point is 00:08:58 And that's because ChatGPT-4, I think, got updated to like June 2023, and it also has access... I see you typing, buddy. I see you typing. No, no. It's Ramona calling me. Hang on. Pause. And, you know, the thing about that is—
Starting point is 00:09:15 Hands above your head. Hands above your head. The thing about that is that she knew that I was recording. You'd think that as time goes on, like the professional quality of this would like just steadily increase, but it seems like the opposite. That one was not my fault. I can't ignore a phone call from the girlfriend because it could be something urgent. And I would assume that if she was making a phone, if she was calling me on the phone
Starting point is 00:09:42 while she knew I was recording the podcast, that it was something like, oh, I got hit by a bus, or something like that. Not, oh, I can't find the Petco order that I'm supposed to pick up. All right, so to recap: Bryce has got his hands above his head. Now he's gone five for five, technically six for six, because we're giving him Metal, which is number 18 on the list. Not a single miss yet, and we still got 14 to guess. Well, I'm going to say Thrust. Thrust is not on the list. Your first miss. That is the parallel algorithms library that we have talked about
Starting point is 00:10:12 from time to time on this podcast. How about HLSL? HLSL. That is not on the list. Yeah, that's like a... It's the High-Level Shader Language. I think it's like a part of... Hands above your head. Hands above your head. Hang on, I'm looking at... No, it's like a DirectX thing. Let's see. You want a hint? Think about libraries, because remember, ChatGPT
Starting point is 00:10:40 responded, sure, here is a list of 20 SDKs, libraries, and tools. And a couple of these were actually on the diagram that I showed you, if you recall. The diagram? When did you show me a diagram? Remember a week ago, I ended up showing you the stuff I was working on and then mentioned that I was having a conversation. And my question was, you know, what's the most... PyTorch. PyTorch is number 12 on the list. What is that?
Starting point is 00:11:05 So now seven for nine. If PyTorch is on the list, that means that... Numba? What? Numba? Numba is a great guess. I was surprised to not see Numba on this list.
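A quick aside on that guess for listeners who haven't used it: Numba is a third-party JIT compiler for numeric Python, and its numba.cuda module compiles the same style of kernel for NVIDIA GPUs. Below is a minimal sketch of the CPU-side @njit path; the fallback decorator and the saxpy function name are our own, not Numba APIs, and are there only so the snippet runs even where Numba isn't installed:

```python
# Numba JIT-compiles numeric Python functions to machine code; the same
# kernel-style function could also be compiled for the GPU via numba.cuda.
try:
    from numba import njit
except ImportError:
    def njit(func):
        # Fallback when Numba isn't installed: run as plain Python.
        return func

@njit
def saxpy(a, x, y):
    # One element of the classic "a*x plus y" operation.
    return a * x + y

result = [saxpy(2.0, x, y) for x, y in zip([1.0, 2.0, 3.0], [10.0, 20.0, 30.0])]
print(result)  # [12.0, 24.0, 36.0]
```

The point of the decorator style is that the function body stays ordinary Python; Numba decides at call time how to compile it for the types it sees.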
Starting point is 00:11:18 But Numba is not on this list. Are you suggesting that Torch is on the list because PyTorch is on the list? That is not what I was suggesting. Okay. Then continue with your interpretation. There is a massive competitor that used to be more popular than PyTorch. Oh, TensorFlow?
Starting point is 00:11:33 That is correct. Number 11 on the list. Sorry to everyone I know who works on TensorFlow. So I still haven't gotten 6, 7, 8? You haven't gotten six through ten. At this point, you've got the top five, number 11, TensorFlow, number 12, PyTorch, and number 18, Metal, which is a grand total of eight. You've had three misses, so we'll give you two more misses. And definitely, oh, here's another hint.
Starting point is 00:11:58 So number eight, number nine, and number 17 are all by NVIDIA. Are all by NVIDIA. And actually number 20 as well, although it doesn't have the name NVIDIA in the title. And actually number nine doesn't as well. But anyways: number eight, number nine, number 17, and number 20. Is one of them something that you've worked on? One of them is something that I've worked on, yes. Rapids?
Starting point is 00:12:23 Rapids comes in at number 20. And actually, it's interesting. It says Rapids. That's surprising. Yeah, it says Rapid AI. And then in parentheses, formerly known as Rapids, a suite of libraries for data science on GPUs. I checked. I don't think it's Rapid AI.
Starting point is 00:12:36 I'm pretty sure the URL is rapids.ai. And at least the marketing stuff on the main landing page still says RAPIDS. So RAPIDS, which is the GPU-accelerated version of the popular Python library pandas. So now we've got nine. We're nine for 12. What else do we have that programs GPUs? So number eight, number nine, and number 17 are all NVIDIA-made. I know two of them are...
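For listeners: the pandas-like piece of RAPIDS is specifically cuDF, and its pitch is that it mirrors the pandas API closely enough that dataframe code moves to the GPU mostly by swapping the import. A hedged sketch of that idea (the df_lib alias is ours; the fallback means this also runs on machines without RAPIDS or a GPU):

```python
# cuDF (part of the RAPIDS suite) mirrors the pandas API, so the same
# dataframe code can target GPU or CPU depending on which import succeeds.
try:
    import cudf as df_lib      # GPU dataframes (RAPIDS)
except ImportError:
    import pandas as df_lib    # CPU fallback with the same API

df = df_lib.DataFrame({
    "lib":  ["CUDA", "ROCm", "CUDA", "ROCm"],
    "rank": [1, 7, 1, 7],
})
# Identical groupby code either way; only the backing memory differs.
counts = df.groupby("lib")["rank"].count()
print(counts.to_dict())  # {'CUDA': 2, 'ROCm': 2}
```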
Starting point is 00:13:00 Are they computer graphics? One of them is compute. Two of them are compute. And I think the third one leans more towards graphics. Yeah. Oh, God. What's the name of the RTX thingy? The library.
Starting point is 00:13:19 But I don't think it could be that one. What's the name of the RTX library? I don't know. And I don't think it's one of these three. So, DLSS? It is not DLSS. And that's not the RTX thing, is it? That's the, uh... No, that's the supersampling thing that upscales your 1080p to 4K. Yeah. One wrong guess left. Hang on. Okay, okay, I'm going to take another... Is Unreal Engine on there? Unreal is not on there. So in total, you got... That's pretty good, man.
Starting point is 00:13:48 I thought you'd get roughly five because that's what I thought I would get just because some of these are off in the weeds. But you got nine correct, five incorrect guesses. But Numba, Thrust, I think are fantastic guesses. I actually don't know much about HLSL. But here, let's go through. So to recap what Bryce got,
Starting point is 00:14:05 number one, CUDA, number two, OpenCL, number three, Vulkan, number four, DirectX, number five, OpenGL, number 11, TensorFlow, number 12, PyTorch, number 18, Apple Metal,
Starting point is 00:14:15 and number 20, RAPIDS. We'll go through now the ones that you missed. And this is going to be entertaining. Number six was WebGL. I definitely wouldn't have gotten that just because it's not top of mind. I swear to God, if CUB's on this list, I'm going to be pissed. CUB is not on the list.
Starting point is 00:14:30 CUB's the, you know, sibling library to Thrust. Number seven is from our competitor. Can you guess based on that? AMD. SYCL? Nope. HIP? Nope.
Starting point is 00:14:40 I actually only learned about this... oneAPI? Like a few months ago. oneAPI is number 19, but that's by Intel, not AMD. CUDA's number one and oneAPI is number 19. That's the world I want to live in. Sorry to all of our listeners who work at Intel. But number seven is AMD's... What's their version of CUDA? Do you know? ROCm? Yeah, ROCm. Okay, but it's called HIP now. But, fun fact, I was actually at the launch event for ROCm. Wait, ROCm is called HIP? Since when? HIP would be the more proper thing to call it. ROCm's like the name for the
Starting point is 00:15:15 like, overarching suite. But back when I worked at Berkeley Lab, I think his name was Greg, he was like the product manager for ROCm. And I got invited to go down to Santa Clara. This is when I lived in Berkeley. He invited me to come down to Santa Clara for their launch of ROCm. And he was really into rock music. And I don't recall if he played a guitar there, but there was definitely a rock music theme to the launch event. But yeah, I was there at the launch event for ROCm. And that was where? What city?
Starting point is 00:15:53 It was in San Jose, I think. Okay. Yeah. Interesting. So wait. Okay, so... Just to pause on this, you know, I think ROCm is the name for the overall software stack, but HIP is specifically their CUDA-like clone. I see, I see. Yeah. So the
Starting point is 00:16:14 Wikipedia page says ROCm is an AMD software stack for GPU programming, and it spans several different domains, blah blah blah. But then it says it offers several programming models, and then proceeds to list three or four of them, one of which is HIP. Like, HIP is... Yeah, yeah. I'm sorry, I was confused. Really, HIP is one. But I think when ROCm launched, it was supposed to be like their CUDA, like CUDA for their platform. And I think later the HIP branding got introduced. Interesting. So I think that's why I thought that ROCm had been renamed to HIP, because I think originally ROCm was supposed to be like their CUDA. But I could be wrong about that. Anyways. But yeah, ROCm is like the overall platform, in the same sense that CUDA is the overall platform but CUDA C++ is one way to program it. ROCm is the overall platform, and HIP is their CUDA C++
Starting point is 00:17:05 equivalent. Anyways, give me the rest of the list. All right, so that was number seven. Number eight and number nine are both from NVIDIA. It starts with cu and ends in three letters. Underpins both PyTorch and TensorFlow. cuDNN. cuDNN is correct, yes. I expected you to get that one as well. Number nine, a little bit more esoteric, but we've been getting emails, I think every week,
Starting point is 00:17:32 and it's been called Insert Tuesday. Have you been reading your NVIDIA emails closely enough? And that being said, we get literally like thousands of emails, and that's even after filtering them. Yeah. TensorRT. Yeah, NVIDIA.
Starting point is 00:17:46 Oh, TensorRT. Although actually... Yes, I forgot about that. I forgot about TensorRT. Yeah. Now that I'm actually thinking about this, though, is it TensorRT Tuesdays? Now that I think about it, I don't think it's TensorRT Tuesdays. No, it is TensorRT Tuesdays. Hopefully this isn't like proprietary stuff. It's all right to mention that. I'm in public, right? The name of an NVIDIA internal email. I mean, I think this whole podcast episode is sketchy because we're talking about competitors' SDKs. And I seem to recall earlier I made a crude joke at Intel's expense.
Starting point is 00:18:22 But you know what? Hey. It is what it is. This is not the opinions of NVIDIA. It's the opinions of us. And this is just what ChatGPT lists. We didn't put them number seven. This is what ChatGPT lists.
Starting point is 00:18:34 Buddy, okay, but buddy, but what trained ChatGPT? The compute that trained it. NVIDIA GPUs. Yeah, but that doesn't affect the answers. I'm telling you, that large language model almost certainly knows how it was trained. Right? I don't think so. I'm sure that in the input data there's a lot of information that says that OpenAI uses NVIDIA GPUs. And so of course, when you ask it, you're asking... That's basically asking
Starting point is 00:19:07 it, like, how were you born? It is funny, actually, because number 13 on the list is OpenAI GPT. Oh, that's interesting. That's interesting. Which, I don't know, I didn't expect it to call OpenAI GPT an SDK. I guess maybe because there are SDKs attached to it. Yeah. I mean, I've used the OpenAI API. So, yeah. Not shocked by that one. Continuing down the list.
Starting point is 00:19:32 Number 10 is PyCUDA, which I am actually not familiar with. Number 11, we already covered. So we'll skip to number 14. Oh, yeah. I think PyCUDA is not actually a thing that we officially support. I think PyCUDA is like the Python CUDA interface thingy. Okay, number 14, Microsoft CNTK, the Cognitive Toolkit. Never heard of it. Number 15, Theano with GPU support, which I'm surprised is up there, because from what I know, Caffe and Theano are pretty old technologies. But hey, probably there's a lot of stuff out there.
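Since PyCUDA only gets a passing mention here: it's a third-party library that exposes the CUDA driver API to Python, and its gpuarray type gives NumPy-style arithmetic on device memory. A minimal sketch, with a plain NumPy fallback so it also runs where PyCUDA or a CUDA device is absent:

```python
import numpy as np

data = np.arange(4, dtype=np.float32)

try:
    # GPU path: importing pycuda.autoinit creates a CUDA context, and
    # GPUArray supports NumPy-style elementwise math on device memory.
    import pycuda.autoinit  # noqa: F401
    import pycuda.gpuarray as gpuarray
    doubled = (2 * gpuarray.to_gpu(data)).get()
except Exception:
    # Fallback when PyCUDA isn't installed or no CUDA device exists.
    doubled = 2 * data

print(doubled)  # [0. 2. 4. 6.]
```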
Starting point is 00:20:12 Number 16, AMD FirePro. Never heard of it. Have you heard of it? A long, long time ago. And number 17 is NVIDIA PhysX, which I assume is a physics SDK or library. Yeah. But okay, so AMD FirePro, I don't think that was the name of their software. I think it was just the name for their brand of graphics cards. It was like for the... It says in parentheses next to AMD FirePro that it was an SDK for AMD GPUs.
Starting point is 00:20:50 Maybe it was. But I think it was... I'm pretty sure it was old. I'm pretty sure it was like the 2000s. Yeah, well, I'm not surprised. I mean, Theano, let's check in on Theano. Ooh, that's not what I meant to do. Just closed all my tabs. Nice, closing all your tabs. Well, Ctrl-Shift-T, baby. Theano is a Python library and optimizing compiler for manipulating... It opens up your most recently
Starting point is 00:21:18 closed tabs. And if you close the whole window, it'll reopen everything. So whenever your computer restarts... I mean, it always asks if you want to restore, but if you need to, and you accidentally clicked no to that, you can just hit Ctrl-Shift-T a bunch of times. I mean, yeah, Theano came out in 2007. Yeah, so it's been around for a while. Anyway, so I guess to wrap up this little... I mean, first of all, you did fantastic. Definitely better than I would have done. Hey, you know what? It's great to have a quiz for once
Starting point is 00:21:50 that I did good on. Babe, I did good on the quiz. And what's even more important is that you crushed top five, you know, right off the bat without even getting a single one wrong, which is pretty impressive. And
Starting point is 00:22:04 I guess, you know, to put a little bow on this little 30-minute episode: the question that Bryce and I were talking about a week ago off pod was, you know, what is the main way that developers target GPUs? Like, what path are they taking? And that includes things like PyTorch and TensorFlow, where even though, potentially, it's not like you're doing, quote unquote, GPU programming, you're just doing, you know, machine learning stuff that happens to, at the end of the day, target accelerated compute, which could be in the form of a GPU.
Starting point is 00:22:52 Like, is that where most of, like, if you were to create a pie chart of developers that target GPU compute, and for our concern, it's, you know, mostly, like I said before, NVIDIA GPUs, what does that pie chart look like? And that's something that I don't really have the answer to. I don't know. I haven't seen any GTC talks or keynotes that talk about that because I think most investors and people that care about NVIDIA, it's not something they care about. But as like a developer and someone that works on libraries, it's something that I've been thinking about recently. Anyways, thoughts, comments? I know your initial response was, yeah, I haven't heard of a slide that shows that data. Well, yeah. I feel like maybe this is just my corporate experience, but I feel like so often data like this that would totally help us make better decisions.
Starting point is 00:23:38 Well, yeah. I feel like maybe this is just my corporate experience, but I feel like so often data like this would totally help us make better decisions, and yet there are so many software organizations that don't really know the answers to these things. And then, because you don't actually have data on how your thing is used, you just make decisions based on, not even how people think things are used, but how people feel things are used. And I make the distinction because thinking is a logical process, whereas feeling is an emotional process. And that's how I think we often approach this question. You know, I'm sure that there is some data that somebody has, but the fact that it's not something that is in front of me is a bad sign, right? Like, that's something that every senior engineer at the company should have a good sense of, right? But it's also, when you have a product and SDKs
Starting point is 00:24:41 as widely used as ours, it is difficult to truly survey and understand all of your users, you know? And also, like, how do you assign importance? Is it by the number of developers that work on the thing? Is it by the amount of revenue that they generate? Is it by, like, you know, the amount of code that's written in a particular domain? Or is it by how often that code is run? I think it's very difficult to actually quantify how your software is used.
Starting point is 00:25:17 Maybe there's one 10-line program somewhere that's run on every computer in the world, and that's how your software is most frequently used, versus all the other thousands of lines of code that are written that use your library but are barely used. It's just that one place where it's used a bunch of times. I mean, I completely agree that this is an extremely hard thing to measure. Just while you were chatting, I googled, you know, CUDA developer survey, because I'm pretty sure that doesn't exist. And sure enough, it does not. We have, I think, some byproduct surveys. Like, I think the fourth
Starting point is 00:25:57 or third top thing was a cuDNN stakeholder survey. But I mean, if there's a C++... what do they call it? Community survey. There's a couple of them, right? One's done by the ISO Foundation. One's done by JetBrains. I think we should try to collect some of that data. I mean, it's kind of sad that, like...
Starting point is 00:26:19 I've been saying that for years. I've been trying to put together a CUDA survey for years. Yeah, I mean, why can't we? I think we should commit. I mean, you know. Lazy list. What? Laziness mostly.
Starting point is 00:26:30 Yeah, I mean, we can make it happen if we wanted to. And I think, actually, like, you know... Honestly, if you wanted to ask me the reason why I've never completed plans to put that through, and I've written it on my to-do list like four or five times over the six years I've been here at NVIDIA, it's just, it's not as exciting a thing to do. It's important, but it's not as exciting or cool. And also, it always seems like it's a lot of work. There's a lot of privacy concerns and whatnot like that. So see, if we were going to do this, I would want to do it correctly. And the way that I would want to do it...
Starting point is 00:27:07 this brings up something that's actually been on a recent episode unless Connor's gotten rid of it. The JetBrains developer ecosystem survey. You know what JetBrains did? One, they didn't try to go and run the survey themselves. They went and hired people
Starting point is 00:27:22 who are professional survey runners, and they're like, tell us how to do this. And two, they do the survey, and then they publish the results. And not only do they publish the results... They don't publish all the data, but they publish the results
Starting point is 00:27:33 and they publish a description of those statistical methods and they 100% do this the correct way. So if I was going to do this sort of survey at NVIDIA, I would not get together a group of people to come up with a set of questions and make a Google Drive form. I would want to get a pile of money and go to some company who are professionals in doing surveys and be like, you go figure this out. Here's my requirements.
Starting point is 00:27:57 You go figure this out for me. I mean, I completely agree. And I think whatever piles of money it costs to make that happen probably is worth it. I mean, like, it's sad. You Google CUDA developer survey, and the 1, 2, 3, 4, 5, 6, 7th result is CUDA versus ROCm: The Ongoing Battle for GPU Computing, dot, dot, dot. And it's a Medium article by... I don't know who this is. It doesn't actually say. We'll leave it. I mean, I haven't even read it. I just glanced at it. But the point is, like, we should be collecting this kind of data. Like, we have a GTC, you know, we have more than one of them, but the main one happens, whatever, March, April. How hard would it be? Like, if we contact one of these surveying companies a couple months out, say here's our criteria, and we want to, like, you know, provide this for every single person that either virtually attends or in-person attends... Plus, we'll also, like, put the link online for anyone that just wants to go and fill it out.
Starting point is 00:28:49 And I think we would get thousands of responses. Like, supposedly, there are, like, four to five million registered CUDA developers or something like that in the world. Like, it'd be pretty easy to get, I think, several thousand, you know. Anyways, we'll put it on our list, folks. I mean, look, and if it's too expensive, we can always just, you know, take that out of your pay, right? I mean, not my pay. Take it out of your pay. Anyways, that'll wrap.
Starting point is 00:29:14 What are we on? ADSP episode 166. Be sure to check the show notes either in your podcast app or at adspthepodcast.com for links to anything we mentioned in today's episode, as well as a link to a GitHub discussion where you can leave thoughts, comments, and questions. Thanks for listening. We hope you enjoyed and have a great day. Low quality, high quality. That is the tagline of our podcast. That's not the tagline.
Starting point is 00:29:41 Our tagline is chaos with sprinkles of information.
