How I’m using LLMs in 2026
I have to admit I caught the LLM wave a bit late. This is the story of my journey and what I’ve learned.
AI as Science Fiction
As a kid, “Artificial Intelligence” for me was science fiction. It was Skynet[1] and Replicants[2] and positronic brains[3]. Eventually, when I started taking computer science classes, I learned that computers could display aspects of intelligence with things like primitive neural networks and decision trees. After graduating I took a “Machine Learning” class and learned how large datasets could produce more meaningful results with techniques like Random Forests and k-means clustering. All the while, computer vision, OCR, and speech-to-text all kind of worked.
AI makes a great story
Eventually “Deep Dream”[4] came out, and now computers could produce stuff that was novel and unknown. The output was pretty trippy. Next, computers were trying hard to determine if something was a chihuahua or a blueberry muffin[5].
Next, the initial LLMs came out, which could do better auto-completion. They were actually useful, but still didn’t feel magical. Image generators started making weird modern-art pictures and were useful for decorating blogs with abstract stuff. I remember people using ChatGPT[6] to generate emails and wordsmith things to sound more formal. I started to suspect that more of what I read was not coming from humans anymore.
By mid-2024, Google search[7] changed for me (and even DuckDuckGo[8]). Queries were now fed into an LLM first, and a summary came out. Sometimes it was useful, sometimes it was not. I tried interacting with Gemini[9] directly, and it was kinda helpful for very high-level discussions. It still didn’t feel like magic.
AI is pretty cool
Using NotebookLM[18] to generate custom podcasts was impressive. I started to make podcasts about tricky computer programming topics and listen to them while driving. I was absorbing information, but I was also impressed with how the voices really sounded like two people talking to each other. My mind would conjure an imaginary duo talking in a studio. Then they would say the word “chuckle”, laugh afterwards, and the illusion popped.
In the physical world, the promise of robots cleaning your house was in the air, until you peeked behind the curtain and saw it was actually a human looking through the robot’s beady little camera eyes. Self-driving cars could be seen all about town, only occasionally blowing past a school bus’s stop sign[21].
At the end of 2025, I started to use Windsurf[10] more heavily. First it was mostly autocompletion. Then it was writing specific functions with tests. Then I started making my requests a bit broader. Over Christmas break I played with one-shotting web apps for family. They were like, “It would be cool if this existed.” Then we made it exist within 15 minutes, copying code from Gemini, slapping it into a file, and pushing it to GitHub[11]. For my family this was magic. For me it was starting to be impressive, and my skepticism was starting to break.
January 2026 is when I started to see the magic. I got access to opencode[12] at work and plugged in Claude Sonnet[13]. I started small, but then I plugged in MCPs[14] for our internal wiki. Now I could summarize design docs and guidance and really cook. Tedious document reading and YAML-file editing were offloaded to a machine that did not complain and did a great job. This was great.
AI is magic
In February 2026, I started to drive opencode harder. I created Skills and Agents that stored their hard-earned learnings in text files. I also started giving the agents harder tasks: edit a Grafana[15] dashboard; help me debug a crash; edit multiple repositories to solve a general problem; create JIRA[16] tickets for a specification; research and write a specification given my input. I used different models and watched the dollars spin. Claude Opus[17] was the magnum opus. This thing felt WAY smarter. It could tell me all about something with succinct clarity and links to primary-source documents. I could plug in a backtrace and source code, and it would point at the bug, write a reproducer, and show me it was right (even after I was skeptical). I was using it to generate research and to build small bespoke tools for problems I had that day. There were still mistakes and logical errors I had to spot and guide past; regardless, this finally felt like magic.
I now have a magic wand.
What does it mean?
Technological progress is accelerating. We’ve hit a point where these tools help improve themselves, compounding what LLMs can do even faster.
This story skipped the parts where I saw the Internet come into existence: going from acoustic-coupler modems[19], to hearing the dial-up tone and getting a dopamine rush, to always-online gigabit fiber. We’ve gone from TVs with tuning knobs for UHF and VHF[20] to having 10 streaming services you forget you have, and needing an AI to figure out which service some random movie you want to watch is actually on. I am seeing technology move faster and faster.
I’ve also read a few great essays, such as Something Big Is Happening[23] and The Adolescence of Technology[22]. If you continue to extrapolate, it seems like super-intelligent AI will happen soon. How AI will affect jobs is a huge question, as is how it will affect people’s lives for better or worse. My hope is that this technology can be an amplifier for those who have ideas and want to build things. I also think there is a real concern around AI safety. If a model is more intelligent than any human, why couldn’t it break any security we put in front of our systems? If a model is not “aligned”, could it use its intelligence to break out of any box constructed by mere humans? Humans are already bad at telling when they are being lied to; what happens when a superintelligent entity without a face starts doing it?
Every day seems like a mixture of “Wow! This stuff is magic” and “I got a bad feeling about this”. My approach is to lean in and learn, while trying to see the big picture. Exciting times.
References
1. Skynet - Terminator franchise
2. Replicants - Blade Runner
3. Positronic brains - Isaac Asimov’s Robot series
4. Deep Dream - Google’s neural network visualization
5. Chihuahua or Muffin - viral machine learning meme
6. ChatGPT - OpenAI
7. Google Search
8. DuckDuckGo
9. Gemini - Google AI
10. Windsurf - Codeium IDE
11. GitHub
12. opencode
13. Claude Sonnet - Anthropic
14. Model Context Protocol (MCP)
15. Grafana
16. JIRA - Atlassian
17. Claude Opus - Anthropic
18. NotebookLM - Google’s AI notebook
19. Acoustic coupler - early modem technology
20. UHF and VHF - TV broadcast frequencies
21. Waymo passing Austin school buses
22. The Adolescence of Technology - Dario Amodei
23. Something Big Is Happening - Matt Shumer
24. AI 2027