The Changelog: Software Development, Open Source - Stop uploading your data to Google (News)

Episode Date: June 16, 2025

Lukas Mathis tells us to stop uploading our data to Google, Robert Vitonsky wants web devs to not guess his language using his IP, Tom from GameTorch reminds us that software talent is gold right now, Austin Parker from Honeycomb describes how LLMs are upending the observability industry, and Vitess co-creator, Sugu Sougoumarane, joins Supabase to lead their Multigres effort to bring Vitess to Postgres.

Transcript
Starting point is 00:00:00 What's up nerds? I'm Jared and this is changelog news for the week of Monday, June 16th, 2025. Did you feel that Google Cloud outage last week? Turns out this was another instance of Tony Hoare's classic, billion-dollar mistake mixed with the also classic, distributed systems are hard. The outage began when blank fields in a new service policy triggered a null-pointer-induced crash loop that replicated almost instantly across their global fleet of servers. Props to Google for their transparent postmortem and props to the SRE team that triaged the issue starting just two minutes after that null pointer rolled out.
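For illustration only, here's a minimal Python sketch of that failure class: a blank (None) field in a policy reaching code that assumes the field is always populated. The names and data shape are hypothetical, not Google's actual code.

```python
# Hypothetical sketch of the outage's failure class: a policy record
# arrives with a blank field, and downstream code assumes it is set.

def apply_policy_unsafe(policy: dict) -> str:
    # Raises AttributeError when "quota" is None -- the null-pointer
    # analogue that, replicated to every server, becomes a global
    # crash loop.
    return policy["quota"].strip()

def apply_policy_safe(policy: dict, default: str = "unlimited") -> str:
    # Validate at the edge: fall back to a default instead of letting
    # a blank value propagate into every replica.
    quota = policy.get("quota")
    if isinstance(quota, str) and quota.strip():
        return quota.strip()
    return default

print(apply_policy_safe({"quota": None}))   # -> unlimited
print(apply_policy_safe({"quota": " 10 "})) # -> 10
```

The safe variant is the boring fix the postmortem implies: reject or default blank fields where the data enters the system, before it fans out.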
Starting point is 00:00:51 Okay, let's get into this week's news. Stop uploading your data to Google. A few years ago, Lukas Mathis realized that losing access to his Google account would have been devastating. Quote, I had photos and emails dating back to the mid-90s on my Google account. I had auto upload enabled on my phone's Google Photos account. What are the chances that one of these hundreds
Starting point is 00:01:15 of thousands of pieces of data would trigger some automatic action at some point? What are the chances that I could get in touch with somebody who could fix this for me? End quote. Those are good questions to ask and he isn't merely being paranoid. There are instances of this actually happening to real people
Starting point is 00:01:31 and it's never you until it's you. And then it's too late. Lukas set some rules for himself. One, do not upload any data to Google. My Google account is too important to risk it. Now, no services are tied to it except for those that must be tied to it. Two. Self-host as much as possible.
Starting point is 00:01:50 Three. If self-hosting is not possible, use end-to-end encrypted services whenever possible. And four. Use one service for one thing so that when it gets disabled, only that thing is affected. Click through to Lukas's blog post in the newsletter for his suggested replacement services. Don't guess my language. Here's Robert Vitonsky. If you're still using IP geolocation to decide what language to show, stop screwing around.
Starting point is 00:02:19 It's a broken assumption dressed up as a feature. IP tells you where the request came from. That's it. It doesn't tell you what language the user wants, speaks, or even understands. It fails all the time. VPNs, travel, people living abroad, countries with multiple official languages. This isn't cleverness.
Starting point is 00:02:39 It causes outright annoyance." End quote. I have to agree with him. That's why the Accept-Language header exists, which Robert advocates for in this post. It lets your user agent send your preferred language to the server. No guessing required. That's your signal. Use it.
Starting point is 00:02:56 It's accurate. It's free. It's already there. No licensing. No guesswork. No maintenance. You don't override screen resolution or color scheme with your own guess, so why do it with language?
Starting point is 00:03:09 Software talent is gold right now. Tom from GameTorch wrote us a good reminder about how amazingly privileged we are as software developers despite the not so great job market right now. Quote, if you have software engineering skills right now, you can take any really annoying problem that you know could be automated, but it's too painful to even start, and you can type up a few paragraphs in your favorite human text editor to describe your problem in a well-defined way, and then paste that into Cursor with o3 max pulled up and it will one-shot the automation script in about three minutes. This gives you superpowers. I'm not just a technical founder, now I'm also an entire marketing department.
Starting point is 00:03:48 That's pretty cool. What can you do? I bet you can do a lot." I take for granted just how much tedium I've automated away that non-software people just live with. AI agents bring this ability to many more people, which is awesome. But they also make us software people able to do so much more, with so much less effort. I can do a lot and you can do a lot too. So let's do cool stuff. It's now time for sponsored news. What are MCP servers? MCP is an open protocol that
Starting point is 00:04:19 standardizes how applications provide context to LLMs. Think of MCP like a USB-C port for AI applications. Just as USB-C provides a standardized way to connect your devices to various peripherals and accessories, MCP provides a standardized way to connect AI models to different data sources and tools. That paragraph I just read is, to Sam Ruby, both comforting, because USB-C for LLMs makes sense, and simultaneously vacuous, because what do I actually do with this? If you feel the same, then Sam has just the blog post for you. He's been digging deeper and has come up with a few analogies and comparisons that might
Starting point is 00:04:55 help you make sense of it. 1. MCPs are Alexa skills. 2. MCPs are API 2.0. 3. MCPs are APIs with introspection and reflection. 4. MCPs are not serverless
Starting point is 00:05:09 5. MCPs are not inherently secure or private 6. MCPs should be considered family If those 6 bullet points catch your interest, click on through to the other side and see what else Sam has to say. The links in the newsletter. Oh, and do check out Fly.io while you're there. You might love it like we do. The end of observability as we know it. Surprise surprise, LLMs are upending the observability industry too. Austin Parker from Honeycomb does a solid job laying out the history and how it's all going to change from here. Quote, in a really broad sense, the history of observability tools
Starting point is 00:05:46 over the past couple of decades has been about a pretty simple concept. How do we make terabytes of heterogeneous telemetry data comprehensible to human beings? We've seen different companies tackle this in different ways for technology like Ruby on Rails, AWS, Kubernetes, and now OpenTelemetry. In AI, I see the death of this paradigm.
Starting point is 00:06:07 It's already real, it's already here, and it's going to fundamentally change the way we approach systems design and operation in the future." Austin goes on to describe how Honeycomb's favorite demo for their favorite feature has been utterly disrupted by agentic AI. I ran a single prompt through an AI agent that read as follows. Please investigate the odd latency spikes in the front end service that happen every four hours or so and tell me why they're happening. It took 80 seconds, made 8 tool calls, and not only did it tell me why those spikes happened,
Starting point is 00:06:40 it figured it out in a pretty similar manner to how we tell you to do it with BubbleUp. This isn't a contrived example. I basically asked the agent the same question we'd ask you in a demo and the agent figured it out with no additional prompts, training or guidance. It effectively zero-shot a real world scenario and it did it for 60 cents. Multigres is Vitess for Postgres.
Starting point is 00:07:04 Supabase landed an epic hire bringing Vitess co-creator Sugu Sougoumarane, apologies for the pronunciation, on to lead their effort on a Vitess adaptation for Postgres. Here's what Sugu had to say about it. Quote, for some time I've been considering a Vitess adaptation for Postgres and this feeling had been gradually intensifying. The recent explosion in the popularity of Postgres has fueled this into a full-blown obsession. As these databases grow, users are going to face a hard limit once they max out the biggest
Starting point is 00:07:35 available machine. The project to address this problem must begin now and I'm convinced that Vitesse provides the most promising foundation. After exploring various environments, I found the best fit with SuperBase. I'm grateful for how they welcomed me. Furthermore, their open source mindset and fully remote work culture resonated with me."
Starting point is 00:07:55 Multigres will be open source, Apache 2, and they're assembling a team to build it. If you're a Go programmer, consider applying, link in the newsletter. That's the news for now. Have a great week. Like, subscribe, and leave us a 5 star review to help out the show and I'll talk to you again real soon.
