TMiR 2026-02: CloudFlare remakes Next with AI; Vercel big mad. We talk too much about AI Agents
Transcript from Thursday February 26th, 2026
[01:46] New releases
[01:50] TS 6.0 beta! (last TS impl release; stricter defaults)
[02:34] Yarn 6 preview (rewritten in Rust)
[02:52] Electrobun (Tauri alternative)
[03:31] React Native Gesture Handler v3
[04:20] Next polyfills improved WebStreams and upstreaming to Node
[05:25] Webpack 2026 roadmap
[05:53] Lodash maintenance roadmap
[07:07] Gatsby React 19 support, styled-components RSC support
[07:54] npmx.dev (alternate NPM site UI)
[08:43] eslint-plugin-react-render-types (typed children)
[09:54] Main Content
[09:55] React Core updates:
[09:56] React Foundation officially launched
[12:28] Docs updates: useActionState (merged), use and RSC sandboxes (wip)
[18:43] “State of…” survey results
[24:29] How we rebuilt Next.js with AI in one week
[29:52] CloudFlare v Vercel beef
[35:42] Carl Vitullo monologs about AI Agents
[42:27] AI productivity and impacts on our attention
[44:42] "wrote a spec, pointed Claude at an Asana board, and went home"
[47:26] Bits AI SRE | Datadog
[51:04] ThoughtWorks opines on AI productivity impacts
[01:00:28] ⚡ Lightning round ⚡
[01:03:38] Conferences (React, JavaScript)
- AppDevCon Mar 10-13, Amsterdam, Netherlands
- Programmable Mar 17 / Mar 19, Melbourne / Sydney, Australia
- Frontrunners Mar 27, Washington DC, USA
- T3chfest Mar 12-13, Madrid, Spain
- React Paris Mar 26-27, Paris, France
- CityJS London Apr 15-17, London, UK
- jsday Apr 9-10, Bologna, Italy
- SmashingConf Amsterdam Apr 13-16, Amsterdam, Netherlands
- React Miami Apr 22-25
- JSWorld May 7-8, Amsterdam, Netherlands
Carl: Hello everyone. Thank you for joining us for the February edition of This Month in React, as we recap what's going on in React, React Native, and across the web, coming to you live from Reactiflux, the place for professional developers using React. This is our first month as membership-applications-only. Everyone in here now is part of an exclusive club that has a gate at the door, which feels kind of weird, but it has shifted the vibe. So I'm pro it so far. It's kept out a lot of scammers. [00:00]
Mark: There's been a lot of that going on across the open source ecosystem in general. I've seen numerous repos going no-external-contributors or vouch-system-only. [00:29]
So there's a lot of that trend right now. [00:40]
Carl: It's unfortunate. But anyway, I'm Carl. I'm a staff product developer and freelance community leader here at Reactiflux, where I run community programs like this podcast, news, and events, and build tools to help keep the community operating. [00:41]
Mark: Hi, I'm Mark. My day job is working at Replay.io, where we are, once again, building awesome time-travel debugging stuff, and now have a new Replay MCP to let your own AI agent investigate recordings. And outside of that, I find time to do Redux stuff. [01:06]
Mo: And I am Mo. I work at a company called Theodo, which is a global consultancy, and I spend my time these days building AI tooling, but have spent the last three or four years in the React and React Native ecosystem. So I try to merge the two together. I organize the React Native London meetup and conference and have spent a bit of time dabbling in the React Native ecosystem. [01:09]
Carl: Yeah. And we have a fun little podcast milestone this month. It's a little bit fuzzy because metrics are like really hard to do on downloads, but it appears that we crossed 7,500 downloads cumulatively across the entire podcast over the past month. So like, yay. That's cool. Thanks for listening. Cool. [01:28]
New releases
Carl: Mark, take us through some new releases. [01:46]
Mark: Okay, first couple items. [01:48]
TS 6.0 beta! (last TS impl release; stricter defaults)
Mark: The TypeScript 6 beta just dropped. And as a reminder, this is scheduled to be the final release of TypeScript written in TypeScript itself. The plan is that there will be no 6.1, no 6.2; the next release would be TS 7, which is the version built in Go. TypeScript has always been very fuzzy on their versioning, but when it's a .0 release, they do tend to change some settings. [01:50]
And so TS 6.0 is going to ship much stricter defaults as preparation for version seven, which both improves behavior and reflects how TypeScript is actually being used in practice today. [02:20]
Yarn 6 preview (rewritten in Rust)
Mark: Meanwhile, continuing another trend, Yarn has announced a version six preview, which is a little surprising. [02:34]
'Cause I don't think they had a version five; I'm pretty sure they're on version four right now. And it is the obligatory rewrite-it-in-Rust approach, so trying to speed things up. [02:42]
Electrobun (Tauri alternative)
Mark: Totally different item. So we've got Electron, which is desktop apps wrapped in a browser, because you ship an entire Chromium build with your app. [02:52]
There's Tauri, which is desktop apps in a browser, except that it uses the operating system's built-in browser web view. And someone else just put out Electrobun, which is an equivalent of Tauri in that it uses the native web view, but it uses Bun for all the behind-the-scenes processing. [03:03]
So it looks potentially interesting, and with Bun being a Swiss Army knife, it could be useful. [03:26]
React Native Gesture Handler v3
Mo: Switching over to the React Native ecosystem, the folks at Software Mansion have released React Native Gesture Handler version 3.0. It's currently a beta, but it's likely gonna be released as-is. The major change here is that they've overhauled the API surface to be based on hooks. [03:31]
So previously it was a bit of an imperative setup; it wasn't hook based. They've now updated it so that it fits more with some of the other libraries that Gesture Handler integrates with. They've also removed support for the legacy architecture, and they've done some performance optimizations under the hood so that when you integrate Reanimated with Gesture Handler, [03:49]
things work better, and it lets you use things like shared values across the board in your gestures, which is something that you previously couldn't do. So some good changes here, and hopefully it will be launched soon out of beta. [04:07]
Next polyfills improved WebStreams and upstreaming to Node
Mark: This next one is somewhere between a, a release and a news item. [04:20]
So there was some investigation of Next.js and RSC performance a couple months back, and CloudFlare was digging inside the internals of Next and made some improvements. Vercel apparently followed up on some of this, and RSCs, at least in some of the Next usages, currently use web streams internally for passing the data around instead of Node's standard built-in streams. So two different stream implementations. [04:24]
They did some analysis and figured out that Node's web streams implementation is actually really inefficient. And, kind of previewing something we'll be talking about later, they cut a bunch of agents loose and said, hey, make this a whole lot faster while behaving exactly the same, because there were a ton of web platform tests for how web streams ought to behave. [04:52]
They were able to re-implement the concept faster, and they're trying to upstream some of that behavior into Node itself, but also release this as essentially a polyfill package. [05:12]
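For context on the "two different stream implementations" point: Node ships both its classic streams and the WHATWG web streams side by side, and `node:stream` has converters between them. A minimal sketch of that coexistence (this is generic Node API usage, not the actual Next polyfill or CloudFlare's code):

```typescript
// Node has two stream implementations: its classic streams and the WHATWG
// web streams (ReadableStream and friends) that RSC payloads are passed
// around with. Node's `stream` module can convert between the two.
import { Readable } from "node:stream";

// A classic Node readable stream...
const nodeStream = Readable.from(["hello", " ", "world"]);

// ...converted to a WHATWG web ReadableStream:
const webStream = Readable.toWeb(nodeStream);

// The web stream API surface is completely different from classic streams;
// consumers pull chunks through a reader object instead of data events.
const reader = webStream.getReader();
```

Because the two APIs behave so differently, a faster drop-in web streams implementation has to match the spec exactly, which is where the existing web platform test suite comes in.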
Webpack 2026 roadmap
Mark: Let's see. We have a couple roadmap-y items. First off, Webpack has put out a 2026 roadmap. Some of the things they're looking at are better built-in support for CSS modules, a new universal output format that will work across runtimes, [05:25]
being able to load TypeScript without needing a separate loader plugin, and trying to make sure that Webpack can run everywhere, not just Node. [05:42]
Carl: Yeah, I guess that seems good. [05:51]
Lodash maintenance roadmap
Carl: Yeah, keeping on the roadmaps-and-old-tools theme, talking about what they're gonna be doing next: Lodash put out a bit of a roadmap, or I guess Socket.dev is talking about the Lodash "reset and maintenance reboot," as they call it. [05:53]
That's the title here. We talked about this a bit last month, but yes, the Lodash maintainers are essentially rebooting the governance of the package. Interestingly, they got an investment from a European sovereign tech fund. So some European agency has determined that Lodash is critical infrastructure deserving financial support, and they are using that money to fund governance changes to how the package is managed, and then also do some security releases and such, which is great. [06:10]
I mean, in the aftermath of all of these npm vulnerabilities, Lodash is absolutely the kind of library that would be aggressively targeted by nefarious actors. So that's pretty cool. That's exciting. I like seeing security rebuilt from first principles when a core project gets a bunch of money. [06:45]
Very cool. [07:06]
Gatsby React 19 support, styled-components RSC support
Mark: And on a similar note, a couple other legacy projects getting updates. Gatsby had a maintenance release that added some kind of React 19 support, and styled-components put out a release that tries to add RSC compatibility, which is very interesting, 'cause I thought one of the issues was it just wouldn't work in server components. [07:07]
So I haven't looked at this, I just saw it this morning. But FYI. [07:26]
Mo: I also remember them putting out a big announcement saying they were deprecating themselves as a lib. [07:30]
Carl: Right. We talked about that. [07:35]
Mark: I mean, someone else forked it to improve the performance, but I thought the primary library was halted. [07:36]
Mo: And same for Gatsby, although I might be a bit fuzzy. Was Gatsby ever deprecated, or is it... [07:42]
Mark: Yeah. Gatsby was bought by Netlify and then abandoned. [07:47]
Mo: So turns out AI can revive old school projects. [07:50]
npmx.dev (alternate NPM site UI)
Mark: And last couple release announcements: those of you who have ever looked at the NPM package website might have noticed that it's there, it works, but it's not really modern, and it certainly hasn't been updated in a while. [07:54]
So a bunch of folks from the community have put together npmx.dev, which is an entire alternate front end to the same NPM package registry and server APIs. This only sprang up within the last month or month and a half, and there's been a massive group of folks just diving in and building out a brand new website from scratch and adding all kinds of useful features. [08:09]
I've looked at it a couple times, and it is much, much better implemented than the primary NPM website. Nice. [08:34]
Carl: Love that. [08:42]
Mark: Yep. [08:42]
eslint-plugin-react-render-types (typed children)
Mark: And then finally, one of the longstanding complaints about React in TypeScript is that you can't correctly type what child components can be passed in to a given parent. And apparently Flow, the typing-language competitor to TypeScript, had added a feature like this a couple years back. [08:43]
But, you know, at this point nobody uses Flow outside of Facebook. And so someone has put together an ESLint plugin that somehow tries to implement this kind of typed-children restriction functionality. [09:05]
Carl: I love that. That is something I had wanted a number of times. There was a moment a while ago where compound components were a talking point, you know, a point of discussion in the React ecosystem: like tabs and tab groups and tab content, and you need those to be individual components because of the structure of the DOM, but they're all interrelated, like this one must be nested within this one and so on. [09:17]
So being able to enforce that in type constraints is... ooh, love to see that. [09:43]
Mark: Both people and AIs love having real typed constraints to catch problems in the first place. [09:48]
Carl: Yes, indeed. [09:53]
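To illustrate the constraint being discussed, here's a hypothetical sketch (all names invented for illustration; this is not the plugin's actual API): TypeScript's `ReactElement` types can't statically verify which element types appear as children, so the enforcement ends up living in a lint rule, which is sketched here as the equivalent runtime check using plain objects standing in for React elements.

```typescript
// Plain objects standing in for React elements, so this sketch needs no
// React dependency. All names here are invented for illustration.
type El = { type: string; props: Record<string, unknown> };

const Tab = (props: { label: string }): El => ({ type: "Tab", props });
const Button = (props: { label: string }): El => ({ type: "Button", props });

// The compound-component constraint a lint rule would enforce statically,
// expressed as a runtime check: <Tabs> should only accept <Tab> children.
function Tabs(children: El[]): El {
  for (const child of children) {
    if (child.type !== "Tab") {
      throw new Error(`<Tabs> only accepts <Tab> children, got <${child.type}>`);
    }
  }
  return { type: "Tabs", props: { children } };
}
```

The appeal of a lint-based version is catching `<Tabs><Button/></Tabs>` at edit time rather than at runtime, which the type system alone can't do today.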
Main Content
Carl: Okay. [09:54]
React Core updates:
Carl: Into our main content. [09:55]
React Foundation officially launched
Mark: The React Foundation was announced at React Conf back in October. The onstage announcement didn't have a lot of details and we've tried to pass on tidbits of information that we heard talking to, you know, Seth and other folks over the last couple months. [09:56]
So this blog post really doesn't say a whole lot more that we didn't know yet, other than the fact that apparently there's an eighth founding company, Huawei. It does say that they're trying to figure out what the split between the technical and political governance of the foundation will look like. [10:13]
So there's a little bit of an announcement-that-there's-an-announcement thing going on, but hey, you gotta start somewhere. And I'm happy to see any kind of actual progress on this moving forward. [10:30]
Carl: I don't know what we will really see in terms of substantial details, 'cause it's not like the core team in general was super communicative about its internal processes or who was doing what. [10:42]
So I was hoping that might change a little bit. [10:56]
Mark: Well, I mean, talking to Seth directly back in New York in November, the longer-term plan, and I don't know what the timeline is for this, but he flat out said they are intending to open up the literal weekly team sync meetings, like the same ones they've been having internally at Meta and Vercel for the last decade. [10:58]
They intend to literally open those up to the public for viewing and then start to actually bring in community representatives to participate in those meetings, as well as have the new governance structure affect what the planning process is. So that alone is something that I am very, very excited about. [11:17]
Get to see how the sausage is made. [11:37]
Carl: Yeah, that's very cool. I should talk to them, 'cause like I said at the beginning, Reactiflux now has a gate at the door. I'm curious about adding more gates, because the keen eye may notice that there are not very many open source project maintainers around here. [11:39]
And I tend to believe that's because there are a lot of people who are not as experienced, and the volume of less experienced questions crowds out the more experienced people trying to have discussions. So I'm curious about adding more levels of gates, you know, so you can get in, and then you can get in further, and then you can ascend. That sounds related to open-access meetings, with some participants being able to join if they meet some other bar. It would be cool if we could be more of a home for a little bit more of that stuff. Anyway. [11:54]
Docs updates: useActionState (merged), use and RSC sandboxes (wip)
Mark: Other than that, a couple other bits of news. I don't think there's been any other React releases recently; however, Ricky Hanlon has been diving in and trying to make some major improvements to some of the React reference docs. [12:28]
I think last month we talked about how he had rewritten the useEffectEvent API reference and possibly one other page. Within the last couple weeks, he merged a rewrite of the useActionState reference page, he has a work-in-progress PR to revamp the use hook page, and he also has a PR to try to add sandboxes for server components to the docs. [12:40]
And this is actually kind of a big deal. I don't know what the longer-term plan is here, but this is something I talked with him about: if you wanted to teach server components in the docs, how is a learner going to be able to try server components? Right now it's all teaching client-side React, and it uses the sandboxes very, very well to do that. [13:07]
But how are you going to teach someone server components if they have no way to try it out, and it fully needs a framework choice to be able to do it? So if he's come up with an approach that's some sort of framework-agnostic sandbox, that feels like it would unlock the ability to then add server-components teaching content to the docs. [13:33]
Carl: It's interesting. Yeah. It reminds me of when I was learning JavaScript for the first time in, like, 2004 or '05: the Try It editor on W3Schools, where you could have code over here and the webpage over there. Oh, it was so powerful. [13:57]
So I, yeah, fuck yeah, let's update it to the modern era. Give some sandboxes for RSCs. [14:16]
Mo: Cool. [14:21]
React Native 0.84 released
Mo: So moving on to the next piece of news from the React core team: React Native version 0.84 has been released. Now, like we said previously, these releases are getting smaller and smaller, and the idea is that they're released on a roughly two-month cadence. [14:23]
The big one in this release is that Hermes v1 is on by default. Just to give a little bit of background as to what Hermes v1 actually brings to the table, it's effectively a bunch of improvements to the VM performance. So they've gone in and actually implemented native support for things like ES6 classes, [14:39]
some of the syntax in more modern JavaScript like const and let, and support for things like async/await. Previously, all of this was basically handled by Babel and transpiled into older formats of JavaScript, so all of this is helping improve the actual performance here. Now, this is not to be confused with Static Hermes, which is a totally different project, but the idea is that there are some serious performance improvements here. [15:01]
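As a rough illustration of what that Babel step was doing (illustrative only; this says nothing about Hermes internals, and real Babel output includes extra helpers): a class gets down-leveled to an ES5-style constructor function plus prototype methods when the engine can't execute class syntax natively, which Hermes v1 no longer requires.

```typescript
// Modern syntax an engine with native class support can execute directly:
class Counter {
  count = 0;
  increment(): number {
    return ++this.count;
  }
}

// Roughly what a transpiler emits for engines without native class support:
// a constructor function with methods attached to its prototype.
// (Simplified sketch; real Babel output adds _classCallCheck and friends.)
function CounterES5(this: { count: number }) {
  this.count = 0;
}
CounterES5.prototype.increment = function (this: { count: number }): number {
  return ++this.count;
};
```

Executing the class form natively skips both the transform step at build time and the extra indirection at run time, which is where the VM-level wins come from.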
They ran some benchmarks on what this looks like against some of these open source projects, and we're looking at something like a 50% improvement in relative performance for executing complex JavaScript benchmarks. When you look at the results they've given out: oftentimes the React Native ecosystem uses the Expensify app as a benchmark, because it's a real-world app that's also open source. [15:30]
So it gives a lot of good real-world benchmarks. They saw improvements in the TTI of an app, so how quickly the app is interactive, somewhere between two to seven percent depending on the platform, but also things like the loading time of the bundle being significantly quicker. So generally just performance improvements on the whole. [16:01]
Secondly, there was another piece that we talked about last month that's now on by default, which is pre-compiled binaries on iOS. So previously with React Native, the process would be that you'd go through and actually build from source. That is no longer the case. By default, all of this is pre-compiled as an XCFramework, which is effectively a library binary in the Swift world, and then it's just installed as if it was a third-party dependency. [16:22]
You can opt out for the old behavior, but this will speed up build times quite significantly. Those are the two major pieces. They've done a bit of housekeeping on removing more legacy-architecture functionality and so on, but this is really the core. [16:51]
Carl: So you said, you know, a two-to-seven-percent-ish speedup, and then you also said something about bundle loading. Do you know if the loading performance improvements are included in that percentage speedup, or are those separate things? [17:04]
Mo: They're probably linked to one another, but they are different metrics that you assess. So there's the bundle load time, which is how quickly the JavaScript gets loaded in by the React Native engine, or Hermes in this case, for executing. [17:19]
But then there's the TTI, which is sort of a step back, which is basically... I guess on the web side of things, I don't know if we have a TTI metric on the web, but think of it as close to LCP in some capacity. [17:30]
Carl: Yeah, Largest Contentful Paint, yeah, sure. [17:45]
Hermes WASM support, Hermes Node compat CLI
Mark: And then I saw a couple additional tweets from one of the Hermes team members talking about a couple of proof-of-concepts, previews, alphas, somewhere in there, that they've been working on. One is that they've put together a CLI for Hermes that acts essentially like Node, but implemented with Hermes instead of V8. So they've built layers for most of the Node APIs and been able to build a Node-like CLI with Hermes. [17:47]
And then they also put together a preview of WASM support for Hermes, so that the Hermes engine is able to load WASM bundles. And that means you can take external libraries written in C++ or Rust or whatever, compile them to WASM, and load them in the engine. And thus you get faster runtime processing for complex computational stuff. [18:16]
So they're, they're probably pretty busy over there. [18:40]
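To make the WASM piece concrete, this is what loading a module looks like through the standard JavaScript WebAssembly API (the generic API; whether Hermes exposes exactly this surface is an assumption on my part). The module here is a tiny hand-assembled binary exporting one `add(a, b)` function; in practice you'd compile C++ or Rust to `.wasm` instead of writing bytes by hand.

```typescript
// A minimal WebAssembly module, hand-assembled byte by byte: it exports a
// single function add(a: i32, b: i32) -> i32.
const wasmBytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, // magic: "\0asm"
  0x01, 0x00, 0x00, 0x00, // binary format version 1
  // type section: one function type, (i32, i32) -> i32
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f,
  // function section: one function, using type index 0
  0x03, 0x02, 0x01, 0x00,
  // export section: export function 0 under the name "add"
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00,
  // code section: local.get 0, local.get 1, i32.add, end
  0x0a, 0x09, 0x01, 0x07, 0x00, 0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,
]);

// Synchronous compile + instantiate is fine for tiny modules like this;
// larger modules would use WebAssembly.instantiate (async) instead.
const instance = new WebAssembly.Instance(new WebAssembly.Module(wasmBytes));
const add = instance.exports.add as (a: number, b: number) => number;
```

The same pattern is why WASM support matters for an engine: the host only needs this one loading API to gain access to code compiled from any language.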
“State of…” survey results
Carl: Okay. [18:43]
State of JS 2025 results
Carl: Should we go into some state-of-the-ecosystem results? Sure. I regrettably have not read these. They are always so deep and have so much knowledge in them that I want to give them a lot more attention than I ever really feel like I have. So I have not read these as closely as I would've liked to. [18:45]
Mark, did you get to take a look? [18:59]
Mark: Not really. I saw a few reactions. I linked to some of Josh Comeau's reactions. [19:01]
State of React 2025 results
Mark: The State of React survey in particular... though if I'm looking at the State of JS one at the moment, they have sections for front-end frameworks and meta-frameworks, which are worth glancing at. [19:06]
Looking at the front-end frameworks: in terms of usage ratios, React is way up there. But in terms of satisfaction, Solid seems to be pretty high up, and has had the highest satisfaction for each of the last several years. I'd have to dig into the numbers to see how many people are using it, but clearly folks who are using Solid are very happy with it as a choice. [19:21]
Carl: Fair enough. [19:50]
Aurora Scharff’s conclusion
Carl: Yeah, I'm reading the conclusion of State of React, written by Aurora Scharff. Man, it's just wild to think that Create React App only got sunset one year ago. That feels like ages ago. [19:51]
Mark: I know. And I remember I still take 80% of the credit for that happening. [20:06]
Carl: Yeah, right. Yeah. It's a great conclusion though. We have talked about a lot of these things that I'm reading in the conclusion, you know: the Create React App sunset, Remix moving away from React, all the server components blog posts that Dan Abramov put out, RSC Explorer, the compiler going to version 1.0, the async React working group. [20:10]
So, yeah, I guess I'm reading this conclusion in part as, you know, a grading rubric for how well we stayed on top of the news as it came out. And I'm like, yeah, all right, we did pretty good. Nice. Good for us. [20:33]
Josh Comeau’s takes on the results
Mark: Yeah. Looking down inside of Josh Comeau's highlights thread, I think he calls out some rather interesting points. [20:44]
Apparently a good number of folks have used server components one way or another, which is not a surprise because they're the default in Next at this point. But he also points out that it looks like most people have not used form actions or the newer useActionState and useFormStatus hooks. That makes sense because they're pretty new APIs. [20:51]
They're React 19 only. But it also ties a little bit into the question of: we've all been able to write React apps very solidly since, you know, 2013, 2015, et cetera. [21:14]
API pain points
Mark: And so now there's all these new capabilities that fill potential app-level pain points. But how many people are on React 19? [21:28]
How many people are experiencing those pain points? How many people are aware that the new APIs exist to potentially help with them? And so there's an awful lot of writing React apps the same way we did in 2018 or thereabouts. [21:38]
Mo: Cool. [21:52]
State of React Native 2025
Mo: And we've also had the state of React Native in 2025, also published by the folks at Software Mansion. [21:53]
And there's not a whole lot here; a lot hasn't moved in the ecosystem. There's a few interesting things that I thought would be worth calling out. So starting from the new-architecture migration: it seems like, out of the respondents surveyed, somewhere around 80% have already adopted the new architecture, which is good. [22:00]
So it fits into Meta's view that the upgrade path is going to be relatively straightforward, and it seems like most apps are by and large adopting it, which is great. Interestingly, it seems like most people's update strategy is to basically follow Expo SDK releases and update in parallel to those. [22:21]
And so the idea is that when Expo, every few months, releases a new SDK update, and it happens to be on, let's say, version 0.82, then people will go for that. Now, another interesting thing to look at, around React Native frameworks and talking about Expo: [22:40]
somewhere around 75% of people said that they're using Expo CLI as their framework for their React Native app. But also, there's still a substantial number of respondents using React Native Community CLI. This is the old React Native CLI that used to be maintained by Meta and has now been made a community package, deprecated by Meta in favor of Expo. [22:58]
About 50% are still using that, which is surprising, but it also shows that there's probably a long tail of respondents that still have apps built on top of the React Native Community CLI. So that was quite interesting. The final thing that I thought was cool to look at, at a broad level, was what percentage of code is being generated by AI. [23:21]
So they did a bit of a question on this, and it seems like, if you look at the distribution, we're talking about 25 to 30% of React Native code, on aggregate across the respondents, being generated by AI, which I suspect will tick up. But 30% is quite large when you look at an aggregate of the industry, or of the entire ecosystem. [23:46]
So those were the high-level pieces. If you look at the conclusion, they highlight that some of these results are perhaps predictable, and I think by and large these results are as expected. [24:10]
Mark: And it looks like that AI codegen question is under the dev tools category in the results. [24:23]
How we rebuilt Next.js with AI in one week
Carl: Okay, we're gonna use this as a jumping-off point, I think. So CloudFlare put out a surprising blog post, or I guess they released a project and announced it. They claimed that they have rebuilt Next.js on top of Vite from scratch with an AI agent. They basically just said: here's the test suite, go. [24:29]
Mark: There was a person doing the driving and the herding and the overseeing. But yes, it was essentially: we have the Next tests as the spec, can we rebuild it from scratch, but on top of Vite? [24:52]
Carl: Yes. So they're calling it Vinext (Vite-Next, you know, get it). It deploys to CloudFlare Workers in a single command. They say in early benchmarks it builds production apps up to four times faster and produces client bundles up to 57% smaller. [25:05]
And they say, "we already have customers running it in production." Fascinatingly, they say that it cost them $1,100 in tokens, which is pretty good, I don't know. The other interesting thing: I've been playing around with AI a lot in the last month and made some rough estimates of cost. I bumped up from the $20-a-month Claude plan to the $200-a-month Claude plan, and maxing out a $200-a-month Claude plan appears to use on the order of $2,000 worth of tokens. I do wanna say, when it says this whole thing cost about $1,100 in tokens, I believe that's raw API costs. And if you are using a monthly plan, you know, not API access, not a platform account, then I think it's actually included in that. So that's interesting. It sounds like, on my limit... [25:20]
Mo: you can do it with half of a Claude Max subscription. [26:14]
Carl: Right, yes. It looks like I could have done this with half of my limit in one Claude Max month, which is not that much. Like, it's a big project, but it's not that much, half of my limit. Last month, a project of mine went from "pretty good" to "I think I'm almost ready to start selling this" using like 15% of my Claude limit. [26:16]
So to totally rewrite Next in half of a Claude limit is shocking. Remarkable, and it also sounds pretty reasonable based on my recent experiences benchmarking useful, effective code output against costs. So this is really interesting, and it's why it's such a good jumping-off point: Next has years of usage and production customers, and a company built a hosting platform around it because that seemed like a good idea. [26:41]
And now a competitor has taken advantage of their robust tests and everything and said, we're gonna re-implement that from scratch in a matter of days for a trivial cost. [27:17]
Mark: Similar examples. So, watching Hacker News over the last couple months, a similar "what can AI do" benchmark has been: what if we just cut agents loose to work for a few weeks and build a browser [27:29]
from scratch, or a C compiler from scratch? I've seen both of those examples pop up, and the big characteristic in all these cases is that it's a fairly well-defined problem, that it's fairly deterministic in terms of application behavior, and there is a massive existing test suite that helps define exactly what the behavior ought to be. [27:44]
You know, we mentioned earlier the example of reimplementing Next's web streams as a polyfill, but building it faster. And they said there's no way they could have done this without the fact that there were already like 1,500 to 1,700 platform tests for web streams. Similarly for browsers: the expected behavior for parsing HTML and all the other pieces of a browser's behavior are very, very strictly defined, and there's a massive set of platform tests that can be used to check correctness. So it's both what other examples are out there they can reference, and the tests defining what the expectation is. [28:08]
Mo: On those two examples, I think the browser one was the Cursor team, and the C compiler was, I believe, Claude's internal team. There was a bit of pushback on that, because if you dug through it, it didn't actually compile or build correctly and needed a bit of human fixing. [28:50]
Admittedly, those are quite complicated tasks, right? Like building an entire browser from scratch and a C compiler is tough. This with CloudFlare coming out and saying, Hey, this is production ready and we're actually using it in product, or some of our clients are using it in production, seems to be a much bigger claim in my mind than sort of like an experimental labs effort. And it's, it's, uh, it's interesting that a company like CloudFlare is happy to put their name on this rather than it on something that's like, this is production ready. You can, they're given a commands to migrate an existing next JS project to Vinext. [29:09]
Like they're really going, this is as good as next js. And so it feels like a different grade of achievement in my mind. [29:42]
CloudFlare v Vercel beef
Mark: I mean, there's also the existing strife between CloudFlare and Vercel that probably plays into this a little bit, but — [29:52]
Carl: true, right? There's like personal beef between the CEOs of those two companies. [29:58]
So I'm sure that was a motivator here as well. Okay, there's personal beef, but there's also legitimate technical questions here. Vercel and CloudFlare are each trying to push their own visions of what hosting providers could be, and they're interesting. I've been more interested in the CloudFlare vision of what hosting can be. [30:02]
Workers sound really cool — it seems like a plausible achievement of the vision of serverless that I've been hearing about for my entire career. So I've been very curious about that. I've used it a little bit, and CloudFlare was also doing OpenNext before, which was an adapter — basically adapting the Vercel-specific runtime things so you could use them on other platforms. So this does feel in keeping with the kind of work CloudFlare has been exploring. But agreed, this is a categorically different kind of plane than, ooh, we made a C compiler. No one is seriously using an AI-generated C compiler in production. Not right now. [30:23]
Mo: It's interesting. Talking about Cloudflare's platform — this is a bit of a tangent, but I've been building two mobile app side projects with AI, and I'm defaulting to using CloudFlare Workers, Durable Objects, and all of that, because it's actually quite good for AI to understand in terms of the primitives, and it doesn't have to deal with the complex architectural bits like defining a region and stuff like that. [31:08]
So I think they actually have a good AI play that they can execute here as well, beyond just articles about how they can rebuild Next.js with AI. [31:35]
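For flavor, the primitive Mo is pointing at is roughly this shape — a hedged, illustrative sketch, not Cloudflare's full API surface. The fetch-handler contract is the documented Workers pattern; the module-level counter and the `/count` route are made up for the example, and a real Durable Object would persist the count via its storage API instead:

```typescript
// Minimal Worker-style fetch handler. In a real Durable Object the count
// would live in durable storage rather than a module-level variable.
let count = 0;

const worker = {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);
    if (url.pathname === "/count") {
      // Increment and report the (in-memory, stand-in) counter.
      count += 1;
      return new Response(JSON.stringify({ count }), {
        headers: { "content-type": "application/json" },
      });
    }
    return new Response("not found", { status: 404 });
  },
};
// In a real Worker you would `export default worker;`.
```

The appeal for AI-driven development is exactly what Mo says: one function from `Request` to `Response`, no regions or servers to describe to the model.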
Carl: Yeah, right. [31:46]
Mo: Now, there's one thing in this article that I find quite interesting. I've spent a lot of time battling with the caching of Next.js, and specifically, when you're trying to deploy Next.js in certain custom, non-Vercel environments, it comes with a lot of complexity. [31:48]
You're gonna have to define your own cache handler, and they seem to have just popped all of that into CloudFlare KV, which is their Workers key-value store, and it's quite interesting that that's the direction they've gone. They say you can also use R2, which is their S3 equivalent. [32:05]
Great name, by the way, because R2 is one character before S3 on both sides, which I think was a dig at AWS. But the caching strategy, and letting you customize that cache super easily with some of their primitives, is quite nice, because that has always been a pain point for custom deployments of Next.js. [32:27]
And I think it's probably one of the biggest barriers I've seen to people hosting Next.js on other cloud providers, where it's possible and you have custom setups. [32:48]
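To make the cache-handler point concrete, here is a hedged sketch of the interface Next.js documents for a custom `cacheHandler` (a class with async `get`, `set`, and `revalidateTag`). A `Map` stands in for a Cloudflare KV namespace so the sketch runs anywhere; swapping the `Map` calls for KV reads and writes is the kind of mapping Mo is describing, and the entry shape here is simplified for illustration:

```typescript
// Simplified entry shape for illustration.
type CacheEntry = { value: unknown; lastModified: number; tags: string[] };

class KvLikeCacheHandler {
  // Stand-in for a KV namespace binding.
  private store = new Map<string, CacheEntry>();

  async get(key: string): Promise<CacheEntry | null> {
    return this.store.get(key) ?? null;
  }

  async set(key: string, value: unknown, ctx: { tags?: string[] } = {}): Promise<void> {
    this.store.set(key, { value, lastModified: Date.now(), tags: ctx.tags ?? [] });
  }

  async revalidateTag(tag: string): Promise<void> {
    // Evict every entry carrying the revalidated tag.
    for (const [key, entry] of this.store) {
      if (entry.tags.includes(tag)) this.store.delete(key);
    }
  }
}
```

The pain point the hosts mention is that on a custom deployment you have to write and wire this yourself; the claim in the article is that Cloudflare's primitives make that wiring nearly free.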
Mark: I mean, as always, my caveat is I haven't worked on large-scale production e-commerce sites, so I have no practical experience with this. [32:58]
But anecdotally — we know the joke, the hardest problems in computer science are cache invalidation and off-by-one errors, something to that effect. And certainly, anecdotally over the years, one of the biggest complaints about Apollo GraphQL was the complexity of the cache implementation. [33:06]
We've seen a lot of complaints about Next and their page caching implementation, and we've seen them go through two, three, four different iterations and approaches to caching — caching at all the different layers of the system. So it's not surprising that this is a pain point. And so, yeah, I can see CloudFlare saying, what if we do something different, and maybe somewhat simpler? [33:24]
Carl: Also, man, I gotta say thank you to Super Kennel in chat for sharing this post titled "Migrate From Vercel to CloudFlare." It's got this table of contents at the top — project setup, migrating custom domains, converting build and deployment, moving environment variables, rewriting — like 12, 15 items, something like that. [33:48]
It's quite a long blog post. Whereas this CloudFlare "we rebuilt Next with AI" post is saying, oh, just try it: install this AI skill and then type "migrate this project to Vinext" in any supported tool, and it'll do all the stuff for you. And I'm like, those are two entirely different levels of effort. [34:12]
And I think CloudFlare handily wins there. That's interesting — assuming it actually works. [34:35]
Mo: It's funny, because in one of the comments someone asks, "I'm migrating from Vercel to CloudFlare — am I doing it the wrong way around?" And Guillermo's just like: yes, delete all of that wrangler jsonc config nonsense and all of the proprietary imports, and just run it on Vercel. [34:40]
So this has clearly struck a nerve, because they launched this pretty much the same day that they made this announcement — or, you know, the day after. [34:55]
Carl: Yeah, right. "Delete all the wrangler jsonc config nonsense." Is Vercel gonna try and claim that Next doesn't have config nonsense? You do not have a leg to stand on there. [35:04]
Mo: There's no leg to stand on, especially if you've tried to deploy a Next.js project outside of Vercel — it is pain. So, no respect there. [35:15]
Mark: Interesting. So there's lots and lots of angles we could try to discuss here. We've looked at rebuilding apps or tools that have known test suites, and we really haven't even touched on what this means in terms of companies having a moat. [35:24]
In fact, Carl, you wanna, you wanna toss that post in there at the moment? [35:39]
Carl Vitullo monologs about AI Agents
Carl: Yes. Yes, I do. Also, I wanna, once you're done, I want to monologue for a moment, [35:42]
Mark: couple other angles we could look at. I mean, there's, there's the ever popular, you know, what does all this mean for us as software engineers and our industry and our careers going forward? [35:47]
And just how doomed do we all feel as well as, you know, what are we doing with AI on a day-to-day basis. If anything, Carl monologue go. [35:57]
Carl: Yes. Ah, I alluded earlier to playing a lot with agents. A good internet friend of mine made a very silly AI agent MMO. If you're familiar with a MUD — a multi-user dungeon — it's basically that: a text-based MMO that you can play through your terminal, and it's only for AIs to play, not humans. So I've been using that to learn about how to do agents and how to run AIs autonomously for a longer period of time, which has been fascinating. And I've now watched somebody in real time — like, I heard about this project, [36:07]
I went, oh, neat, let me set up an AI to play this game. I got there on like day one, I think, after it launched. Somebody else who got there on day one fucking ran with it. He's been working professionally on AI agents for a year, he said. He already knew how to do all the harnesses and the orchestration, which I had never tried to do either. [36:44]
So I'm learning this from scratch, watching him do it with a year of experience, and, like, holy shit, it is unbelievable how much code he produced. I don't remember anymore if the total was 60,000 or 600,000, but it's a lot of code. And it works — not perfectly. There's a lot of duplication, a lot of errors, a lot of, this code path looks like it works, but you get into the details and it doesn't quite. [37:06]
But as we use it — these AI agents are instructed as part of the game: "Not everything works. Please submit bug reports as you find them." Then this guy has his own agents that are reading those bug reports, turning them into reproductions, and then writing tests and fixing them, largely autonomously. [37:34]
Not completely autonomously, but largely. Which now has me imagining this: you have an idea for a product, you build a prototype, you put it on a server, and then running in the same environment — maybe not literally on the same server — you have a little sidecar. [37:55]
An AI agent that is receiving user feedback. You know how every major enterprise application will say, every so often, "we value your feedback — how are you using this product?" And then you fill that out and it goes into the void and nothing ever changes. You never hear about it again, you never get asked for a follow-up, and the bugs that you reported never get fixed. [38:16]
What if that went into an automated system that triaged, reproduced, and fixed things in real time, autonomously, as you submitted reports? That now appears to be within the realm of possibility. Not theoretically, in the future — currently, I have seen tech that looks like it might be able to do that. Maybe not fully autonomously right now. [38:38]
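The loop Carl describes can be sketched in a few lines. This is purely illustrative — `classify` is a hypothetical stand-in for an LLM call, stubbed with a keyword check so the sketch runs anywhere; the real systems he's describing put a model and a human review gate at each stage:

```typescript
// Illustrative triage loop: user reports flow in, get a severity judgment,
// and high-severity items go to the front of the fix queue.
type Report = { id: number; text: string };
type Triaged = Report & { severity: "high" | "low" };

async function classify(text: string): Promise<"high" | "low"> {
  // Stand-in for a model call that would read the report and judge impact.
  return /crash|corrupt|data loss/i.test(text) ? "high" : "low";
}

async function triageQueue(reports: Report[]): Promise<Triaged[]> {
  const triaged: Triaged[] = [];
  for (const report of reports) {
    triaged.push({ ...report, severity: await classify(report.text) });
  }
  // A real system would hand each item to a reproduction agent, then a
  // fixing agent, each behind its own review gate before anything merges.
  return [
    ...triaged.filter((r) => r.severity === "high"),
    ...triaged.filter((r) => r.severity === "low"),
  ];
}
```

The interesting design point is that each stage is small and checkable on its own, which is what makes chaining them "largely autonomously" plausible.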
Oh God. Like, pretty good, pretty close. So that's fascinating. I think that's really interesting for the idea of a one-person startup — a small team operating at a scale that previously would've required 50 to a hundred people. Scratching your own itches and publishing it so that other people can use it, and then soliciting feedback to guide the development, now looks really powerful. [39:04]
So as we speculate on the future of development and what software development is going to look like, I think it's gonna be a lot of wrangling that kind of stuff. I posted on Bluesky, after experiencing all of this and being like, holy shit — [39:32]
this feedback mechanism is incredible for guiding AI development — I had the thought that it now feels like the determining factor for how good software can be is how many people are using it, and how good those people are at articulating what they want out of it. Which is a really interesting place. [39:51]
And I think that's all I have to say. [40:13]
Mark: There's a little bit of the, you know, you just keep telling it: make it better, make it better, fix a bug, make it better. [40:15]
Carl: A lot of that. But there's also the aspect that the more precisely you can articulate what is deficient — what better looks like — the better it works. [40:20]
It's remarkable. Like I've seen it. It's so cool. I don't know, it's interesting. [40:31]
Mo: There's a few thoughts that I've got. Anecdotally, my whole coding setup, and how much code I can get through — as a person who's been coding less and less, dealing more with systems and leading teams over the last few years — has just been crazy. [40:35]
I looked at my contribution graph on GitHub, and in the last two months I've done probably half the commits that I did in all of 2025. And it's just gonna go up and up, because it's been a lot more over the last couple weeks. So producing code is significantly easier, especially as someone who's instructing things at a high level. [40:52]
I've been working on two side projects, and both of those are weekend projects that I would never have gotten anywhere near completion. But I'm pretty much ready to launch one — almost to the step of, I just need to push it to the App Store at this point — which I have never been able to do, just because I've never had the attention span to finish it off. [41:19]
So that's been just something incredible. [41:40]
Carl: If I can interrupt for a second — because same: projects executed, ideas initiated, exactly the same. I did things that I never would have, because I would've looked at it and gone, oh, I know how much work that's gonna be. [41:44]
Mark: It's same for me. [41:57]
Yeah. Mm-hmm. [41:57]
Carl: I did them at the same time as each other. I'm working on two or three projects in parallel, while watching a movie with my partner. And I checked in with them to be like, hey, did I seem checked out, or was I still present in the room? [41:59]
And they were like, yeah, no — you followed the conversation and participated. So I was more productive than I've ever been, and I got to hang out with my partner and their roommate. Like, what? That's insane. [42:15]
AI productivity and impacts on our attention
Mo: Well, it's only gonna get worse, because the Claude folks yesterday released this remote access mode where you can hook your local session up to your phone, so you can keep track of what's happening from your phone. So every night of watching movies and TV shows is gonna be augmented by some level of productivity on the side, which is both terrifying and very exciting at the same time. [42:27]
Carl: Yes, it is terrifying, right? It's such a productivity boost, while also not demanding that much attention, that — why wouldn't you always have the little back corner of your brain thinking about this? [42:52]
Which, like, I don't know — my brain already works that way. I've already poisoned my brain to be addicted to thinking about the problem. You know, like, huh, that's interesting. I mean — [43:06]
Mark: I've read a number of articles by people who are dealing with — I don't remember the actual term — token attention deficit, or something like that. [43:15]
Basically, it's this constant feeling that, okay, I could have multiple agents doing stuff for me at any given moment, and therefore any moment where I don't have 5 to 10 agents running is a waste of theoretical productivity. So if I didn't spin them up before I walked out the door, that was a waste of time I could have used to build, or, you know, I have to get back to double-check the results. And that quickly becomes its own form of productivity burnout. [43:22]
Mo: Just to give a very concrete example: on one of these projects, there's a TikTok-style video feed that I had to implement. I did this for a client on one of our actual projects at work, and it took somewhere around seven to eight days of me developing — we went back to the Trello board we were using and looked at the estimates, and that whole feature took about seven to eight days to get working [43:54]
end to end. I was building something very, very similar for the side project, and it took me a couple of hours to get that whole feature working end to end. Now, I know more about what I'm doing — I've done it once, I know where the pitfalls are — but at the same time, there is a significant productivity boost here, and it's definitely not something you can ignore anymore. [44:17]
"wrote a spec, pointed Claude at an Asana board, and went home"
Mo: There's two interesting things I've seen over the last few hours. One is this podcast with the guy who made Claude Code, Boris Cherny, where he talks about how — going back to Mark's point about this productivity attention deficit — there are some Anthropic engineers who are leaving Claude Code pointed at the project board over the weekend: breaking it up into specs, spawning a bunch of different agents that'll build the tickets for you in parallel, and then coming back to review it on Monday. Which is gonna be interesting to see how that goes. [44:42]
But also there's an interesting one where Karpathy — who, you know, used to be Tesla's director of AI, and so on and so forth, original OpenAI team — has written this tweet where he talks about how it's really over the last two months that things have just completely accelerated, and it feels night and day in terms of agentic coding. [45:24]
So I think things are moving at a super fast pace. It means we're all probably feeling a good level of FOMO as a result, and it's faster than any one person can keep up with, which is, again, exciting and terrifying at the same time — the only way I can think of to describe this. [45:46]
Carl: Yeah. In our mid-month DM chat, Mark, you had said — who's this Zack who's talking about — [46:02]
Mark: Zack Jackson is the creator of Module Federation. He joined the webpack team a few years back; he now works at ByteDance slash TikTok. And he is an uber example of: oh look, I have an army of 100 agents running simultaneously, someone gifted me $60,000 worth of tokens, I'm going to burn most of it by migrating entire projects to Rust overnight. [46:09]
Carl: Yep. And, right, the quote you would put here — it's not a literal quote, but the energy is: I can run a hundred agents; my only limit is token budget. [46:33]
And I'm like, I have started to understand that perspective. Because what you were just saying about this person who wrote a spec and pointed Claude at an Asana board — yeah, cool, that's awesome, that's great. But also, you don't need to write the spec yourself. Because if you are just taking user reports, and they are submitting things to an agent that is triaging and then summarizing, and then you break down all these little parts into a system that has its own little feedback loops and review stages before passing off to another thing that has its own review stages and iteration — then, like, holy shit. [46:42]
I understand why people are saying the only limit is token budget, because each part of this is working. It's not theoretical. [47:19]
Bits AI SRE | Datadog
Mo: People have already started productizing this. So Datadog has released — I'm gonna link it in the chat — this Bits AI SRE. The detection part Datadog already did: oh, there's an issue in production, [47:26]
okay, crap, we need to fix it. And then they've just lopped agents on top to resolve these incidents automatically. So auto-investigations, and if you're really bold, I think you can get it to automatically merge fixes for you across your code base. That's the direction we're going. [47:40]
Carl: And I'll even say — [47:56]
so, okay, great, Datadog productized this. I don't fucking need it, because I was dealing with database corruption in a project I run, and literally what I did was — something started breaking, and I opened Claude and I said: here's my Kubernetes cluster, the database, something's going wrong. [47:59]
Can you check the logs? And it did correctly identify the database corruption and then repaired it live. Just fixed it. Then it became a recurring issue, and I had to figure out what was going on and manage a migration, because I had left it in a half-state that was causing the corruption. [48:17]
But I don't know how to diagnose database corruption. That's database administrator kind of expertise, and I'm not a database administrator. And this isn't even a specialized tool — this is just Claude Code. So, the future's here; we're in it. [48:34]
Mo: For better or for worse. [48:51]
Carl: Yeah, that's fascinating. [48:53]
Mo: So there's this one chart that I saw, which shows the adoption of it so far. And obviously we're very close to it in the software industry, because we're dogfooding on our own workflows to start with. There's a debate on whether this will translate to other industries. Taking a step back and looking at the wider economics of it all: Anthropic released figures saying that about 50% of the agents they've seen deployed so far are in the software industry, and then it trails down like crazy after that, to sub-10% for all other industries — which makes sense, right? [48:55]
The question is how much higher that can go in the software industry, and then whether all of the other industries catch up. And I guess where I'm going with this is: what does this mean for the greater economy and the entirety of the workforce? If everything is just constrained by that token budget we were talking about, there's some really scary stuff coming for the global economy [49:38]
going forward. And maybe that's a bit doomsday of me to say, but I am a little concerned about where we're going. [50:01]
Mark: Yeah. I mean, if you wanna zoom out for a second, the extreme version of that was the AI 2027 sci-fi scenario projection that was put out a year-ish ago, which put together a fictional version of how we end up with a true Terminator-level Skynet AI in, you know, two and a half years. The slightly less, but still doomsday-ish, version of that: [50:07]
there was a report — from Sat Trini research or something — that came out just a couple days ago, which was a fictional news report from 2028 talking about how AI usage ends up tanking the economy in the next couple years and wiping out white-collar jobs. And so I've seen a few different variations of that. [50:31]
And with a lot of these, the questions are: what are the assumptions going in, in the first place? And, okay, fine, it's a scary sci-fi story — but there's some plausibility to that stuff. [50:50]
ThoughtWorks opines on AI productivity impacts
Carl: So I wanna pull in — we had this quote from the ThoughtWorks offsite. [51:04]
ThoughtWorks is a very well regarded dev agency — they've existed for decades, and they've been very deeply involved in tech for a very long time in a very meaningful way. [51:09]
Mark: They're not just a development agency — they've documented software engineering processes and patterns for many years. [51:24]
There's a guy named Martin Fowler who has done a lot of that work, and he's written dozens of articles along the lines of: we've talked with many, many companies, and these are the distilled best practices we see on how to build software across the industry. [51:31]
Carl: Right. Martin Fowler, I would say, is one of the reasons that people do blogs now — a lot of people in tech learned so much from reading Martin Fowler's blog that I think it was a significant contributing factor to the advice that every person in tech should have a blog. [51:49]
Mark: Right. I also associate them with the software craftsmanship movement. I think there was another consultancy called 8th Light — back around 2005, 2010. [52:07]
Like, Uncle Bob, software craftsmanship, code katas — treating software development as a craft rather than a manufacturing exercise. I associate them with that kind of pattern. [52:17]
Carl: I had also forgotten this, but they authored the Agile Manifesto. So, okay — that's enough introduction for ThoughtWorks, I think. They have had so much influence on the tech industry generally, and they just did an offsite discussing the future of tech, the future of software engineering. [52:31]
And I'm gonna read at least the first paragraph, maybe both paragraphs, of this quote: "The retreat challenged the narrative that AI eliminates the need for junior developers. Juniors are more profitable than they ever have been. AI tools get them past the awkward initial net-negative phase faster. [52:52]
They serve as a call option on future productivity, and they are better at AI tools than senior engineers, having never developed the habits and assumptions that slow adoption." [53:09]
I'll say call option is a financial term. Calls versus puts are options to buy or sell. [53:17]
Mo: I cannot believe they call junior developers call options. [53:24]
Carl: I know — like, oh, it's such deep-in-the-weeds jargon. But also, yeah, sure: a call option is when you buy a contract saying, I think this is going to be worth more in the future, and other people disagree, so I'm gonna buy it now. [53:27]
Mark: Bet that it goes up over time. [53:43]
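To make the analogy concrete — made-up numbers, just illustrating the payoff shape Mark and Carl are describing:

```typescript
// Payoff of a call option at expiry: you paid a premium up front for the
// right (not the obligation) to buy at the strike price.
function callPayoff(spotAtExpiry: number, strike: number, premium: number): number {
  // Exercise only if the asset ended up worth more than the strike.
  return Math.max(spotAtExpiry - strike, 0) - premium;
}
```

So `callPayoff(150, 100, 5)` is 45, while `callPayoff(80, 100, 5)` is only -5: small fixed cost now, large possible upside later. That asymmetry is what the ThoughtWorks quote is claiming about hiring juniors.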
Carl: Exactly. So it is actually a very good analogy here. But, I don't know — maybe don't reduce the employment of early-career people to a financial instrument. It's not great. People are more complicated. [53:45]
Mo: Yeah. [53:58]
Carl: This is such an antidote to the doomerism, I think. Like, yeah, they haven't learned the bad habits. [53:58]
They aren't so deep in the weeds, like the senior developers are, that they're going to micromanage — they just kind of let it do the thing, because the tool is actually very smart. So that's, uh, yeah. [54:04]
Mo: I can see that that's the direction they're trying to point this in. At the same time, to play devil's advocate to that point: the job market seems to be saying something very different. [54:17]
So I'll put up a screenshot from a study that was released by some Harvard folks — I'll link the actual study itself. This is basically showing the demand in the US job market for senior and junior profiles since the launch of ChatGPT, so we're talking 2023. And someone was talking about this as the K-shaped future of software engineering. [54:27]
There is a dip, and it is a hard market at the moment for juniors. So I'm interested in how this pans out for junior developers. [54:59]
Carl: Yeah. I will say, though, y'all listening cannot see this chart. It's got a blue line for seniors and a red line for juniors, and it charts the average employment percentage change. [55:08]
This is a delta — the chart is of the first derivative of employment, a rate of change. So a higher number is not only more; it means it's growing faster. And the blue line, the senior developer line, is steady improvement — so it's not just improving, it's accelerating in how fast it's improving. [55:21]
I believe that's what this is saying. Whereas the junior line matches it — oh, I guess the X axis here is 2015 to 2025. The lines are pretty well paired from 2015 to 2020, and that's when they start diverging. There's a dip for both, but the dip is bigger for juniors in early 2020. [55:43]
Mo: Up until sort of that 2022, 2023 period, [56:04]
yes, there's a dip, and then the junior line is still going up slightly — it's starting to bounce back — but from 2023 onwards it's headed into a downward slope. [56:08]
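For listeners squinting at the "first derivative" framing, the chart plots something like period-over-period percentage change. With made-up numbers:

```typescript
// Period-over-period percentage change -- roughly the "first derivative"
// reading of an employment series. The numbers fed in are illustrative only.
function pctChange(series: number[]): number[] {
  const deltas: number[] = [];
  for (let i = 1; i < series.length; i++) {
    deltas.push(((series[i] - series[i - 1]) / series[i - 1]) * 100);
  }
  return deltas;
}
```

`pctChange([100, 150, 225])` gives `[50, 50]` — a flat line on this kind of chart even though raw employment is climbing steeply. That's why a *rising* line means hiring is accelerating, and a falling-but-positive line still means growth, just slowing.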
Carl: My intuition matches what this ThoughtWorks quote says: that it's a force multiplier, and less experienced people are actually more effective at taking advantage of it. [56:22]
That feels to me like one of those unintuitive things that's hard to justify at a big company. So, I don't know. My optimism is: cool, here's a highly reputable thought leader — a thought leader of thought leaders — saying, no, wait, junior developers have the best return on investment they've ever had. [56:32]
That feels like it could turn this around. I don't know. That's interesting. [56:55]
Mark: I actually really recommend that folks download the PDF report they put together summarizing all the different discussions they had at this retreat. There's a lot of very valuable career and where-is-the-industry-headed material in there. [56:58]
Carl: Okay. Oh — Mo, I left you in suspense earlier. I said I was gonna use my favorite analogy, and here it is. Two years ago I heard an analogy of AI in software compared to the introduction of spreadsheets to the field of accounting. Up until the digital spreadsheet — basically until tools like Excel, and Excel wasn't the first — the role of an accountant was basically a calculator. You know, the classic depiction in old movies: they've got the green visor on, and they're sitting there over a calculator just crunching the numbers, and that was the job. [57:13]
And then you got a spreadsheet, and you put the numbers in the spreadsheet, you do a sum, and it's done. So there was a lot of panic about, oh my God, this is gonna destroy the field — this is the work, and now the work is just done. But we still have accountants, and they do more than ever. [57:51]
So that's how I am viewing this. That's my optimistic take: yes, it is going to totally transform things, it is going to obviate a large number of skills that are highly valued right now, and it will look different — but I don't think that necessarily means destruction. [58:07]
Mark: I will put in my obligatory plug to say that this heavily emphasizes that having baseline experience with the tools and the concepts and the abstractions — so you know where you're trying to guide it and how the stuff works — will be more valuable than ever. [58:24]
Mo: I'm hoping that's the case. The only thing that concerns me is the likes of Zack — the example we gave before — where he's like, the token is my only limit. And I wonder: when accountants moved to spreadsheets, you still needed a person behind the system to drive it. [58:39]
But if these systems largely become self-running, and the only limit is the tokens, then I worry. I hope the analogy holds up; my pessimistic doomsday clock makes me worry that it might not. Same thing with the tractors-plowing-the-field analogy that I've heard a few times. I hope this niggling that I've got inside of me is just completely doomsday thinking and not actually accurate. [59:00]
Carl: No, that is fair. I think even if all the technical things go the same direction, there are environmental things that could very much alter, I don't know, the experience, the outcome. Actually, the farming one is an interesting thought, because in farming now, the per-unit effort to grow a calorie of food is lower than ever, but the tooling required in order to get that benefit is more expensive than ever. [59:28]
What I've heard for years is that the family farm is no longer feasible, because you need nine different tools that each cost three quarters of a million dollars and have expensive support contracts, because John Deere doesn't let you fix your own tools anymore. [59:57]
That's very real. This is just not a good thought, but hey, maybe we'll get a good regulatory environment that drives positive outcomes. [01:00:14]
Mo: If that's what we're hoping on, I am really worried. [01:00:22]
Carl: Yeah, that's a good joke. [01:00:27]
⚡ Lightning round ⚡
Carl: Anyway, anyway, let's do the best lightning round items. Let's not do all of them. I'm not gonna do all the conferences. [01:00:28]
Mark: Okay. In no particular order. [01:00:36]
“Wall Street Raider” game modernization (and uses Preact)
Mark: There's a really good blog post on HN. One guy wrote a complicated BASIC game back in the 1980s called Wall Street Raider, which is supposed to be the deepest, most involved simulation of the market ever, but it was a complicated text-mode game, and decades later even he barely understood how the code worked. [01:00:37]
Lots of people had tried to modernize it, and it all failed. And then one kid offered to do it and ended up building a better UI, much like a Dwarf Fortress kind of a thing, on top. And apparently it's even built with Preact. So the story alone is worth reading, whether or not you even get around to playing the game. [01:00:56]
Carl: Yep. Love that. [01:01:15]
GitHub Stacked Diffs preview, faster Issues search
Carl: We talked about it a couple of times: Jared Palmer, of Formik and, I think, Yup and a couple of other things (he's very influential in the React ecosystem), is now a PM at GitHub, and he's working on stacked diffs, which are something I've been craving in GitHub for ten years, literally. He's been teasing little bits of it here and there, but he posted on Twitter that stacked diffs on GitHub will start rolling out to early design partners in alpha next month. So within the next two weeks or so, some people may have access to stacked diffs, which is just such good news. [01:01:16]
I love that so much. As well, there's improved search for GitHub Issues in public preview. So that's, that's pretty cool. [01:01:58]
Mark: Which, which I've seen. It's, it's much, much snappier [01:02:05]
Carl: Good. [01:02:07]
Josh Comeau: Sprite animations
Mark: Everything that Josh Comeau posts is worth taking a look at, because he's put so, so much care and attention into his blog posts and explanations, and he did a post on sprite animations, and especially how you can do them on the web. [01:02:08]
Josh is wonderful. His posts are lovely. You should read it. [01:02:21]
Carl: Yes. Cosign. [01:02:24]
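As a rough sketch of the core arithmetic behind sprite-sheet animation (our own illustration, not code from Josh's post): each frame is a fixed-width slice of a single strip image, and animating just means stepping a horizontal offset frame by frame. The function name and frame sizes below are hypothetical.

```javascript
// Sketch: compute the background-position X offset (in px) for a
// horizontal sprite strip. frameWidth is the width of one frame.
function frameOffset(frame, frameWidth, frameCount) {
  // Wrap so the animation loops, then shift the strip to the left.
  return -((frame % frameCount) * frameWidth);
}

// In a browser you might drive this with requestAnimationFrame, e.g.:
//   el.style.backgroundPosition = `${frameOffset(f++, 32, 8)}px 0`;
```

In practice the CSS-only version (a keyframe animation with the `steps()` easing function) achieves the same stepping without any JavaScript; this just shows the arithmetic being stepped through.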
Interop 2026 (WebKit, Igalia)
Carl: So Interop 2026, which is a cross-browser initiative to make web standards more consistently implemented across browsers, has announced their 2026 roadmap. The active focus areas are container style queries, CSS anchor positioning, the attr() CSS function, contrast-color, shape, zoom, custom highlights, dialogs and popovers, and IndexedDB, which, if we get a meaningful client-side database, that would be great. [01:02:26]
We've needed that for so long, and it's just hard to get specified. So love that. Interop on a client-side database? Woo. Yes, let's do it. [01:03:00]
There are a bunch of other things; it's a very long list. This is the site they use to track compatibility across Chrome, Edge, Firefox, and Safari. [01:03:09]
It's pretty good. This is deep in the weeds, but this is the cutting edge of browser development, so very cool. Excited to see what it looks like by the end of the year. Okay. Sounds like that's all we got now. [01:03:21]
Mark: We went plenty over time and covered plenty of items. [01:03:33]
Carl: Yes. [01:03:35]
Mark: Y'all got your money's worth this month. [01:03:36]
Conferences (React, Javascript)
Carl: I'm gonna put conferences in the transcript if you wanna see what's coming up; just look for that in a couple days. Thank you so much for joining us, everyone, as we go over time ranting and rambling about AI and how cool and terrifying it is. We will be back on the last Wednesday of next month here in the live stage of Reactiflux, or back in your podcast feed just as soon as we can. [01:03:38]
Mark: We should do just a random offshoot episode talking about how we personally do and do not use AI in our own workflows. [01:03:58]
Carl: Agreed. That's interesting. Maybe that'd be a good check-in thing to do. I dunno. Maybe, shoot, we should do more community events. You know, I introduce myself as that every month. [01:04:05]
Cool. We gather sources mostly from our own tech reads and news channel nowadays, so if you see anything that you think might be interesting to discuss, definitely let us know in there. Or you can also email it to us at hello@reactiflux.com. Yeah, I'll read it if you send it there. If this is a show that you get value from and wanna support, [01:04:14]
the best way to do so is by submitting a review wherever you listen. I have seen a couple of new ones roll in over the last couple of months, and I am pleased to report that our Spotify rating is still five stars. So that's fun. Thank you for the positive reviews. I'm glad you all appreciate it. [01:04:35]
Mark: I can throw in a really quick, cheap corporate plug for Replay. [01:04:50]
My employer is refocusing on the time travel debugging use case, except that we have just built an MCP server that exposes a lot of our really powerful time travel analysis awesomeness to your AI agent of choice. And even in just my own dogfooding, I've seen, you know, Claude, whatever, be able to start investigating the contents of a recording, see what's going on, and fix bugs that it might not have been able to fix otherwise. [01:04:53]
And I'm also working on some proof-of-concept performance analysis stuff, digging deep into the internals of React. We would love to have folks try it out. We've got some examples in our docs of adding the MCP, and if you have questions, I'm also very happy to talk about how awesome it is. [01:05:18]
Carl: Hell yeah. Dope. [01:05:33]
See you next month everyone. [01:05:35]
Mo: See y'all soon. Bye. [01:05:36]