The Changelog: Software Development, Open Source - Apple finally gets Siri-ous (News)

Episode Date: June 10, 2024

Apple announces its "new" style of AI, piku gives you "git push" deployment on your own servers, Dabo Chen rebuilds nanoGPT in a spreadsheet, Mark Seemann thinks you'll regret using natural keys in your database design & Glyph Lefkowitz describes his grand unified theory of the AI hype cycle.

Transcript
Starting point is 00:00:00 What up, nerds? I'm Jared, and this is Changelog News for the week of Monday, June 10th, 2024. The AI gold rush has NVIDIA breaking profit records left and right. Turns out at their current market cap, they are now valued at $102 million per employee. I think it's time for some serious raises. You gonna show me the money? You gonna show me the money? Okay, let's get into the news. Apple finally gets serious. See what I did there?
Starting point is 00:00:40 The seemingly sleepy tech giant in Cupertino woke up today at their annual WWDC event. Their response to the recent transformer-infused language model boom, AI, which is short for Apple Intelligence. See what they did there? This new AI will weave its way through the entire suite of Apple platforms and first-party apps, but the primary interface is still Siri. Yes, that Siri. This means Siri is getting a new look and feel, the ability to query it via typing text,
Starting point is 00:01:18 better natural language detection, on-screen awareness, app intents, personal context awareness, and a gazillion other things. The demo was quite impressive, but aren't they all? Oh, and Siri can also ask ChatGPT when it doesn't have an answer for you. If nothing else, this is a huge upgrade from...
Starting point is 00:01:41 Here's what I found on the web. Git push deployments to your own servers. Piku's creators say, quote, we wanted a Heroku slash Cloud Foundry-like way to deploy stuff on a few ARM boards. But since Dokku didn't work on ARM at the time, and even Docker can be overkill sometimes, a simpler solution was needed. Piku is currently able to deploy, manage, and independently scale multiple applications per host on both ARM and Intel architectures and works on any cloud provider, as well as bare metal, that can run Python, Nginx, and uwsgi. End quote. Heroku 100% changed the deployment game with its git push user experience. Ever since, people have
Starting point is 00:02:26 been trying to replicate that experience in different places with varying degrees of success and failure, but mostly failure. From my early reading, it appears that the Piku team has done it for the 80% of common use cases. The best part? It's not a proof of concept or early alpha software. Quote, Piku is considered stable. It is actively maintained, but actively here means the feature set is pretty much done, so it is only updated when new language runtimes are added or reproducible bugs crop up. A nano GPT pipeline packed in a spreadsheet. This spreadsheet is all you need repo,
Starting point is 00:03:03 which is a play on the attention is all-all-you-need paper that originally introduced the transformer architecture, is great if you are still trying to wrap your head around how GPTs actually work. It's also just an impressive feat. Debo Chen, its creator, says, quote, While reading about LLMs, I realized that the internal mechanisms of a transformer is basically a range of matrices calculations being connected in a certain order. I started to wonder if the whole process can be represented in a spreadsheet since all the calculations are fairly simple. I'm a visual thinker. I couldn't think of a better way to do it. Then, with some trial and errors, I wrote the
Starting point is 00:03:41 full inference pipeline of the nanoGPT architecture into a single spreadsheet. Follow the link in the newsletter to check out the spreadsheet, or hey, there's an image in your chapter data. Give it a look. It's now time for sponsored news. What's causing those poor Core Web Vitals? What are you up to on June 20th? Maybe you should join Salma Alamnaylor and Lazar Nikolov from Sentry in a free workshop so you can learn how to identify the issues causing your poor Core Web Vitals. Then, discover how to trace issues to slow database queries or the dreaded server-side request waterfall. You'll learn how to 1. Discover common sources for poor Web Vitals. 2. Set up tracing with Sentry,
Starting point is 00:04:26 and three, trace issues through your stack to the code level with Sentry. The workshop is on June 20th, and again, it's totally free. Don't miss out. Register now, following the link in your newsletter. And thanks again to Sentry for sponsoring Changelog News. You'll regret using natural keys. I've often talked about the nature of software development and its lack of truly generalizable rules, but experience does reveal anti-patterns
Starting point is 00:04:53 that we can pass down and around to save others the pain and suffering that we had to endure to uncover them. One such database design anti-pattern that Mark Seaman wants to save you from is using natural keys. Mark says, quote, Is it ever a good idea to use natural keys in a database design? My experience tells me that it's not.
Starting point is 00:05:15 Ultimately, regardless of how certain you can be that the natural key is stable and correctly tracks the entity that it's supposed to, data errors will occur. This includes errors in those natural keys. You should be able to correct such errors without losing track of the involved entities. You'll regret using natural keys. Use synthetic keys. End quote. Take his word for it.
Starting point is 00:05:37 He does explain why, of course, in the post. Or just learn the same lesson for yourself the hard way. Your call. A grand unified theory of the AI hype cycle. Glyph Lefkowitz describes a 13-phase AI hype cycle and then enumerates five cycles we've already been through. Number one, neural networks and symbolic reasoning in the 50s. Number two, theorem provers in the 60s. Number three, expert systems in the 50s. Number two, theorem provers in the 60s. Number three, expert systems in the 80s.
Starting point is 00:06:07 Number four, fuzzy logic and hidden Markov models in the 90s. And number five, deep learning in the 2010s. Glyph says, quote, each of these cycles has been larger and lasted longer than the last. And I want to be clear, each cycle has produced genuinely useful technology. It's just that each follows the progress of a sigmoid curve that everyone mistakes for an exponential one. This is an initial burst of rapid improvement, followed by gradual improvement, followed by a plateau. End quote. So, where are we now? Glyph provides some heuristics, but it's hard to say exactly when the current cycle will end. However, he does feel confident to say this, quote,
Starting point is 00:06:46 That is the news for now, but don't forget to scan the companion newsletter for more stories on managing motivation as a solo dev, DuckDB 1.0, and a big list of new dev tools that you should try. If you aren't a newsletter subscriber, get in on the double dip at changelog.com slash news. We have some great episodes coming up this week. Kelsey Hightower joins us on Wednesday and Justin Searles joins us for our WWDC reactions on Friday. Have a great week. Give us a five star review if you dig it. And I'll talk to you again on Wednesday.
