Commit 07e425b

fix: no more aside for me!
1 parent 3b9d375 commit 07e425b

6 files changed

Lines changed: 41 additions & 30 deletions


src/content/posts/llm-1.md

Lines changed: 6 additions & 5 deletions
```diff
@@ -52,18 +52,19 @@ A genuine text I sent to the team is:
 
 > I keep on telling you, every time I try to touch the UI I get burnt so hard I want to quit doing frontend altogether
 
-This is the story of a recent project that I contributed to, [Dotlist](https://github.com/edwrdq/dotlib). Dotlist is still undergoing development and I have faith that we'll figure out how to fix these mistakes. This isn't meant to roast or dunk on our choices -- instead, it's a cautionary tale for how to avoid them in the future.
-
-<aside>I have since made, and now use, <a href="https://dotlist-lite.vercel.app/">Dotlist Lite</a> for managing all of my todos and haven't run into a single issue yet. Features like live sync and the (in my opinion :D) beautiful themes make me truly enjoy the experience, even if it doesn't have some of the sparkly features of the regular Dotlist.</aside>
+This is the story of a recent project that I contributed to, [Dotlist](https://github.com/edwrdq/dotlib)[^loc]. Dotlist is still undergoing development and I have faith that we'll figure out how to fix these mistakes. This isn't meant to roast or dunk on our choices -- instead, it's a cautionary tale for how to avoid them in the future[^dotlite].
 
 Eventually, you have to start fresh with a new codebase. This is what I did with [Dotlist Lite](https://github.com/aadishv/dotlist-lite). It's not meant to compete with Dotlist, and I'll probably only touch it sporadically from now on. I made it because I needed a good todo list app _now_, not when we figured out how to fix the UI of Dotlist. I specifically designed Lite to be much leaner while still supporting all of the important features, with much more polished microinteractions/UX. How I got this to happen while still using Claude for the most part is quite interesting:
 
 - I set up the project myself. I wrote out the Convex Auth boilerplate, set up shadcn themes (using tweakcn to choose), and styled primitive elements myself. The content, database, mutations, and queries, however, were left empty.
 - I made Claude port my old code to this codebase. I had an old version of Dotlist which I published as a tool on my website; it was a tiny React app which saved to localStorage. For the initial prompt, I just had Claude port the entire app (only ~600 LOC of TypeScript) to our new Convex + Vite foundation. The key idea is that I already understood the code, having worked on the old app for a while, so there was very little chance of garbage code showing up in the process. This already had the vast majority of features I needed.
 - I tuned microinteractions myself. If I didn't like a font choice or animation speed, I looked at the Tailwind classes and changed them. This is important for building up knowledge of what goes where in the codebase.
 - When I wanted to update a feature, I always asked Claude to do it. I'd review the code and test it out. If a specific part had a problem, I'd ask Claude to "simplify" and remove that part, then rewrite it myself.
-
-<aside>For reference, at the time of writing, Dotlist had 1,000 lines of (purportedly non-vibe-coded) backend code and 2,500 lines of vibe-coded frontend code.</aside>
+
 In conclusion, vibe coding is good. Until it's not. I still think the biggest advantage of vibe coding is getting an idea out super quickly -- PMs can vibe code a prototype of certain functionality instead of trying to describe it to engineers, etc. The issues start emerging when vibe coding is used to write the majority of a production-grade application.
 
 If you're looking for a simple todo app, check out Dotlist! It is still a great choice even if it's vibe coded, and the team is working hard to fix our issues with the AI-generated base.
+
+[^dotlite]: I have since made, and now use, <a href="https://dotlist-lite.vercel.app/">Dotlist Lite</a> for managing all of my todos and haven't run into a single issue yet. Features like live sync and the (in my opinion :D) beautiful themes make me truly enjoy the experience, even if it doesn't have some of the sparkly features of the regular Dotlist.
+
+[^loc]: For reference, at the time of writing, Dotlist had 1,000 lines of (purportedly non-vibe-coded) backend code and 2,500 lines of vibe-coded frontend code.
```

src/content/posts/music.md

Lines changed: 8 additions & 13 deletions
```diff
@@ -13,15 +13,8 @@ Apple Music, in the fullscreen view for playing music, shows an animated, flowin
 
 ![](assets/music.md/1.png)
 
-I spent a lot of time trying to figure out how this could work before giving in and actually doing a bit of research into it. I ended up going down a multilayer rabbit hole (as most of my projects do) to reproduce it myself.
+I spent a lot of time trying to figure out how this could work before giving in and actually doing a bit of research into it. I ended up going down a multilayer rabbit hole (as most of my projects do) to reproduce it myself[^ai].
 
-<aside>
-<b>Use of AI:</b> no AI-generated code ended up in the final version of this project (although tab complete through Zed's Edit Predictions was used). This task is actually surprisingly difficult for LLMs -- I tried to get OpenCode with gpt-5-mini and grok-code-fast-1 to do similar work to what I describe in the rest of this post, and they both failed. I think a big reason for this is context: the files I'm discussing are in the 12k LOC range, and even tricks like subagents don't fully resolve the problem.
-
-However, I *did* use AI to help me with my research. All queries were conducted in the Google Gemini app with Gemini 2.5 Pro. I chose this route because the actual work with the code is very appealing to me, but doing all of the auxiliary research, not as much. Queries ranged from simple ("how does X API work?") to much more complex ("explain how X effect is implemented in Y shader language, taking into account Z's implementation"). I occasionally asked Gemini to generate example code but never just pasted it in. Share links to all of my Gemini conversations will be available at relevant parts of the article.
-
-This is how I'm doing much of my coding nowadays, so I hopefully won't have to post another update like this for a while.
-</aside>
 
 This was a relatively short project: I initially started looking into it in the afternoon of October 20...
```

```diff
@@ -112,11 +105,7 @@ Gemini threads:
 
 At this point, I concluded that the fastest route to answering my questions would be to write the effect myself and then tune it until it matched the Apple Music one.
 
-I initially decided to try using WGPU and WGSL since my graphics-nerd friends recommended it (and also because Rust = ⚡blazing fast⚡), but quickly burned out after it took about 200 lines of code to draw a triangle, and 350 for a simple image. The concepts of textures/fragment shaders/vertex shaders weren't super clear to me at the time, either, which probably added to the confusion.
-
-<aside>
-For those who are wondering, vertex shaders are called multiple times to return points which form lines/points/triangles/etc. For each shape, the fragment shader is then called at each pixel (in the canvas coordinate space) to choose a color. Textures are used to store images and intermediary frames, and are often paired with framebuffers.
-</aside>
+
+I initially decided to try using WGPU and WGSL since my graphics-nerd friends recommended it (and also because Rust = ⚡blazing fast⚡), but quickly burned out after it took about 200 lines of code to draw a triangle, and 350 for a simple image. The concepts of textures/fragment shaders/vertex shaders weren't super clear to me at the time, either, which probably added to the confusion[^shaders].
 
 I decided to try using WebGL instead on a whim, which led to the fumble of the century:
```
```diff
@@ -263,3 +252,9 @@ A [chat with Gemini](https://gemini.google.com/share/3eb691bbdf01) seems to sugg
 A few future directions I could take this are actually learning WGPU (despite its verbosity...) and porting the visualization to run natively, perhaps converting the shaders via wgpu's `naga` utility, or porting it to pure TypeScript through the insanely cool [TypeGPU](https://docs.swmansion.com/TypeGPU/). GPU programming is definitely quite interesting and I think I'll explore it more in the future, perhaps implementing more complex programs such as ray tracing.
 
 As always, thanks for reading!
+
+[^ai]: **Use of AI:** no AI-generated code ended up in the final version of this project (although tab complete through Zed's Edit Predictions was used). This task is actually surprisingly difficult for LLMs -- I tried to get OpenCode with gpt-5-mini and grok-code-fast-1 to do similar work to what I describe in the rest of this post, and they both failed. I think a big reason for this is context: the files I'm discussing are in the 12k LOC range, and even tricks like subagents don't fully resolve the problem.
+
+    However, I *did* use AI to help me with my research. All queries were conducted in the Google Gemini app with Gemini 2.5 Pro. I chose this route because the actual work with the code is very appealing to me, but doing all of the auxiliary research, not as much. Queries ranged from simple ("how does X API work?") to much more complex ("explain how X effect is implemented in Y shader language, taking into account Z's implementation"). I occasionally asked Gemini to generate example code but never just pasted it in. Share links to all of my Gemini conversations will be available at relevant parts of the article.
+
+    This is how I'm doing much of my coding nowadays, so I hopefully won't have to post another update like this for a while.
+
+[^shaders]: For those who are wondering, vertex shaders are called multiple times to return points which form lines/points/triangles/etc. For each shape, the fragment shader is then called at each pixel (in the canvas coordinate space) to choose a color. Textures are used to store images and intermediary frames, and are often paired with framebuffers.
```
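
The vertex/fragment split described in the footnote above can be sketched as a toy CPU-side model. This is illustrative only -- it is not the project's WebGL code, and the canvas size and gradient are made up; the point is just that the per-pixel function runs independently at every pixel and writes into a stored buffer:

```python
# Toy CPU model of the vertex/fragment split (illustrative only).
# A real GPU rasterizes shapes from vertex-shader output; here we simply
# cover the whole "canvas" and run a fragment function at every pixel.
WIDTH, HEIGHT = 4, 2

def fragment(x: int, y: int) -> tuple[int, int, int]:
    # Per-pixel color choice: red ramps up with x, green with y.
    r = int(255 * x / (WIDTH - 1))
    g = int(255 * y / (HEIGHT - 1))
    return (r, g, 0)

# A "texture" is just stored pixels; a framebuffer holds an intermediate frame.
framebuffer = [[fragment(x, y) for x in range(WIDTH)] for y in range(HEIGHT)]
print(framebuffer[0][0], framebuffer[1][3])  # (0, 0, 0) (255, 255, 0)
```

On a GPU the same `fragment` logic would run in parallel for every pixel, which is why the shader has no loop of its own.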

src/content/posts/robotics-2.md

Lines changed: 3 additions & 6 deletions
```diff
@@ -57,12 +57,7 @@ Typically, these binaries are built and uploaded through VEX’s proprietary IDE
 
 - Documentation. PROS actually has well-documented APIs compared to the mess of VEX APIs. For example, VEX only somewhat [documented their APIs a few months ago](https://api.vex.com/) — even though they’ve existed for years!
 - External libraries. VEXcode’s poor integration with other tools makes it hard to have a proper package management system. In contrast, PROS has a robust library ecosystem with hundreds if not more packages ready to install via its CLI (another thing that VEXcode doesn’t have).
 - IDE integration. While PROS has a recommended VSCode plugin, its extensible CLI means you can code in it from anywhere (including [Zed](https://zed.dev), my favorite code editor). VEXcode can only be used from their proprietary app or VSCode extension. Also, VEXcode has _very_ weird code structure, while PROS’ is just regular C++ with cpp and header files.
-
-<aside>
-Update 10/7/25: this is a bit inaccurate. C++ does support async, but PROS doesn't use it; all of its operations are synchronous, and its threads are preemptive. `vexide` uses Rust's cooperative scheduling to provide first-class async operations.
-</aside>
-
-- PROS is open-source! All of VEXcode’s APIs and protocols are closed-source (although the SIGbots team got access to them under an NDA to develop PROS) while every single bit of PROS is open-source and on [GitHub](https://github.com/purduesigbots/pros). This has enabled the community to do a bunch of cool things. The coolest of these, in my opinion, is [vexide](https://vexide.dev/), which is a runtime like PROS for the V5, with two major differences. 1) It supports async. But wait, C++ doesn’t have async. And then we have 2) _It’s written in Rust!_
+- PROS is open-source! All of VEXcode’s APIs and protocols are closed-source (although the SIGbots team got access to them under an NDA to develop PROS) while every single bit of PROS is open-source and on [GitHub](https://github.com/purduesigbots/pros). This has enabled the community to do a bunch of cool things. The coolest of these, in my opinion, is [vexide](https://vexide.dev/), which is a runtime like PROS for the V5, with two major differences. 1) It supports async. But wait, C++ doesn’t have async[^misrep]. And then we have 2) _It’s written in Rust!_
```
```diff
@@ -239,3 +234,5 @@ That’s basically all I have to say about the robot’s codebase. Hope some of
 ## Epilogue<!-- {"fold":true} -->
 
 Due to internal frustrations with how the team was being managed and the lack of focus on coding, I left the team in October 2024. As for the future of the team, they recently qualified for the World Championship by getting a design award at states. Unfortunately, their coding is in limbo at the moment, as multiple other coders have left the team or are busy with other extracurriculars. As noted above, to preserve my original code (entirely written by me, with no external authors), I have cloned the repository as of my leaving the team in this MIT-licensed [repo](https://github.com/aadishv/HighStakes). Hope this post was helpful/inspiring/something!
+
+[^misrep]: Update 10/7/25: this is a bit inaccurate. C++ does support async, but PROS doesn't use it; all of its operations are synchronous, and its threads are preemptive. `vexide` uses Rust's cooperative scheduling to provide first-class async operations.
```

src/content/posts/robotics-4.md

Lines changed: 3 additions & 3 deletions
```diff
@@ -46,10 +46,8 @@ When the team just got formed, I had no idea how we would play the game. I thus
 ![image](assets/robotics4/image.png)
 Note the many fancy terms here. The TL;DR is that I wanted to run as much as possible on a Jetson Orin Nano (not the Jetson Nano that comes with the VEX AI Platform) so we could have powerful hardware (aka a GPU) to do cool stuff. What I eventually realized, however, is that getting this to work would require tens of thousands of lines of code. Given that this was 2 weeks before the competition, we decided to scrap this plan.
 
-<aside>VEX teams are required to maintain an <em>engineering notebook</em> to log their development journey, which also helps when competing for awards.</aside>
-
 So what did we do instead?
-Here’s an excerpt from our engineering notebook about how we handled this.
+Here’s an excerpt from our engineering notebook[^nb] about how we handled this.
 
 ## The _actual_ plan
```

```diff
@@ -138,3 +136,5 @@ We ended up ranked second and made it to the finals, where we (inevitably) lost
 Overall, it was fun! We also got to ~~steal some tech~~ get inspired by some of the teams that had tried using AI. Big thanks to Chroma for being the reason we won any matches at all.
 
 Just one more blog post left for VAIRC. That one’s gonna be _very_ technical and _very_ long…
+
+[^nb]: VEX teams are required to maintain an <em>engineering notebook</em> to log their development journey, which also helps when competing for awards.
```

src/notes/agentic-envs.md

Lines changed: 21 additions & 0 deletions
This file is new in this commit; its full content follows.

I've recently been slowly ramping up my use of agents. The last time I tried agentic coding, it [spun into a mess of vibe coding](/llm-1), so I tried my best to avoid that this time. I'd previously been using the "fast iteration" models, notably Grok Code Fast 1 and sometimes OpenCode's Big Pickle (which is GLM-4.6), for smaller tasks like:

* Refactor this function to use this helper.
* Write another function in the style of this one, with the following changes.
* Move this logic out of the function into a separate module.

As I tried to do larger refactors or add functionality, though, this approach quickly reached its limit. That was partially due to model choice; I switched to using Claude Opus 4.5 for big tasks. However, an equally big issue was the agentic environment in which the model ran.

Agentic models, like basically all coding-oriented LLMs today, rely on some kind of feedback loop to generate correct code; it's practically the definition of "agents." In both OpenCode and Zed, LSP support is built in, so, for example, ESLint automatically checks changed files and reports its errors back to Claude. Sometimes, though, this doesn't work perfectly.

In a recent large-scale refactor, the model changed the schema and indexes for a commonly used table in the database, one used by basically all of the backend. ESLint wasn't running on those files, though, so the built-in LSP didn't return any errors. I thus instructed the model to run `bun typecheck` (which force-runs eslint on all files, not a subset) to find everywhere its changes broke things. The typecheck command itself took around 20 seconds, but eventually, after many iterations, the model did manage to get the full refactor done. Notably, even though it took several minutes, it required no input from me -- I didn't have to test the backend myself, or even rerun typecheck, because...

The typecheck was all the model needed. In all of my projects, typesafety is the #1 goal[^typesafety], not only because it makes it much easier to test code, but also because it makes agentic work significantly simpler. The agent didn't have to spin up 3 MCPs, run the Next dev server in the background, and then manually check all flows. The typesafety provided by the tools and services we used meant everything fell into place on its own.

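
The check-and-feed-back loop above can be sketched in a few lines. Everything here is hypothetical -- the `check` helper is not my harness, and Python's `py_compile` stands in for a real checker like `tsc` or eslint -- but the shape of the loop is the same: run the checker, capture its errors, and hand them back to the model until the run comes back clean:

```python
# Hypothetical sketch of an agent harness's "typecheck" step.
# py_compile is a stand-in for a real typechecker (tsc, eslint, mypy, ...).
import pathlib
import subprocess
import sys
import tempfile

def check(source: str) -> str:
    """Run the checker on a snippet and return its error output ('' if clean)."""
    path = pathlib.Path(tempfile.mkdtemp()) / "snippet.py"
    path.write_text(source)
    result = subprocess.run(
        [sys.executable, "-m", "py_compile", str(path)],
        capture_output=True, text=True,
    )
    # On failure, this stderr text is what gets fed back into the model's context.
    return result.stderr

assert check("x: int = 1") == ""          # clean: the loop terminates
assert "SyntaxError" in check("def f(:")  # broken: errors go back to the model
```

The key property is that the checker's output is cheap, deterministic, and machine-readable, so the model can iterate without a human in the loop.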
Having some kind of "end-to-end" testing system is great in a lot of cases, since it enables the model to practice test-driven development, or at least use tests directly to check its code, providing an extra layer of safety over typechecking. Here's an example of a recent project where I provided exact examples of input and output and let the model figure out the rest: [OpenCode transcript](https://opencode.ai/s/UkNhmA7i). In this case, I didn't even review the model's output, because I could verify that it passed the tests -- such clarity is helpful even when vibe coding.
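
As a toy illustration of "exact input/output examples as the spec" (the `slugify` function and its examples are hypothetical, not taken from the linked transcript):

```python
# The spec handed to the agent is just exact input/output pairs;
# the implementation below is one answer that satisfies them.
def slugify(title: str) -> str:
    """Convert a post title into a URL-friendly slug."""
    cleaned = "".join(c.lower() if c.isalnum() else " " for c in title)
    return "-".join(cleaned.split())

# The "tests as spec" the agent has to satisfy:
assert slugify("Hello, World!") == "hello-world"
assert slugify("  LLMs & Agents  ") == "llms-agents"
assert slugify("a--b") == "a-b"
```

Because the assertions *are* the requirements, passing them is a verifiable definition of done, with no manual review needed.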

However, there are systems where tests aren't easy, trivial, or fast to add, like in my initial example of a Next.js app with dozens of possible user flows. In those cases, typesafety is the best bet, and one you should make sure pervades everywhere.

TL;DR: Add end-to-end testing and encourage TDD where you can; otherwise, type safety and typechecking are musts.

[^typesafety]: In a recent small playground I've been working on, I'm basically just writing a Python wrapper around string manipulation: particularly, I'm using Python typechecking to create an extremely safe LaTeX creation system. The use of strict types means I can easily avoid footguns: you can't add a polygon to an expression in Desmos, and neither can you in my Python LaTeX wrapper; you can't return a boolean from a function in Desmos, and neither can you in the wrapper; etc. **Typesafety is good for humans *and* agents.**
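
A minimal sketch of the idea (all names are hypothetical, not the real wrapper): give each Desmos concept its own type, so the typechecker rejects nonsense like `Expression + Polygon` before any LaTeX is ever emitted.

```python
# Hypothetical sketch: distinct types per Desmos concept, so mypy (and the
# runtime) reject operations Desmos itself would reject.
from dataclasses import dataclass

@dataclass(frozen=True)
class Expression:
    latex: str

    def __add__(self, other: "Expression") -> "Expression":
        # Only Expression + Expression is allowed; anything else fails.
        if not isinstance(other, Expression):
            return NotImplemented
        return Expression(f"{self.latex}+{other.latex}")

@dataclass(frozen=True)
class Polygon:
    vertices: tuple[tuple[float, float], ...]

    @property
    def latex(self) -> str:
        pts = ",".join(f"({x},{y})" for x, y in self.vertices)
        return rf"\operatorname{{polygon}}\left({pts}\right)"

x = Expression("x")
one = Expression("1")
print((x + one).latex)  # x+1
# (x + Polygon(...)) fails mypy and raises TypeError at runtime.
```

The static layer (mypy/pyright) catches the mistake before the code runs; the `NotImplemented` fallback makes the same rule hold at runtime too.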

src/styles/globals.css

Lines changed: 0 additions & 3 deletions
```diff
@@ -175,9 +175,6 @@
 blockquote {
   @apply border-l-4 pl-4 italic border-gray-400;
 }
-aside {
-  @apply border p-3 text-muted-foreground my-3 rounded-lg;
-}
 p > code {
   @apply text-aadish;
 }
```
