My blog started, indirectly, because I was 15 minutes late to class. To be precise, the idea of a personal blog was born in the university library—but without that encounter on the first day of uni, it might never have happened.
It was the first lecture of Introduction to Programming at university. Arriving about 15 minutes late (I had just moved from the countryside to the city, and my body clock hadn’t adjusted yet), I rushed in to find most seats taken—except one, next to a guy who looked focused. I slid into it. After a few exchanges, I found out he had a background much like mine: both of us had enrolled in this university with awards from national olympiad competitions—mine in physics, his in math. By some turn of events, we ended up whispering not about code or why we chose this university after high school, but about quantum physics and relativity. Despite coming from mathematics, especially combinatorics and number theory, he knew quite a bit about physics and its foundations, invoking famous names like Einstein, Heisenberg, and Schrödinger.
He was brilliant—a graduate of a top high school in the city, with a mind for math that both impressed and intimidated me. Yet we found common ground in nerdy, big ideas. Our interests were two sides of the same coin: he was drawn deeply into the human world—religion, society, history—devouring books like Sapiens and other major works about humanity, belief systems, and imaginary orders. I was pulled toward the abstract and objective side of nature: consciousness, spacetime, and the metaphysics of everything beyond us. Metaphysics is one of the great pillars of philosophy; it looks at the world from a higher vantage point, using abstract reasoning to draw out the most universal principles of the physical world rather than dissecting each phenomenon individually—hence the name meta-physics.
An unexpected friendship bloomed from that awkward first meeting. It felt a bit like a duality: Naruto and Sasuke, perhaps, but with fewer fights and more library chats.
During those chats, he’d share his wild, often startling theories on society and ethics. Some were so out-of-this-world that I’d joke he needed a warning label, or society might confront him on some beautiful evening. One afternoon in the library, as we chatted about everything to unwind after our last exams of the second semester, I told him, half-amused and half-serious, “Why don’t you start a blog? These ideas aren’t the kind of thing people hear in daily life. Perhaps you’ll find someone with the same mindset.”
I didn’t think much of it, but that sentence seemed to light a spark in him. He was distracted for days and began pouring his thoughts into Google Docs. When I asked why he didn’t just build a website already, he said something that stuck with me: if he focused on building the site first, he’d lose the motivation to write. The ideas had to come first. Honestly, I suspect the real reason was that he had never built anything web-related, and putting a site together from scratch with no experience would have been daunting—so he sidestepped it and prioritized the ideas before they evaporated.
Months passed, and the tables turned. He kept nudging me: “When are you starting a blog?” I always refused. It sounded tedious, and I didn’t feel I had anything pressing to say. Why would anyone care? I also had no particular passion to build a blog around. When you blog about something, you at least need something in mind, right? And as a physics enthusiast still trying to fit into the discrete mindset of basic computer science, I felt I had nothing worth sharing at the time.
But something shifted recently. Maybe it was the quiet realization that writing helps me think—and somehow makes me feel at ease. Maybe I finally felt that little fire to share, not for an audience, but for the simple act of putting ideas into words. Maybe writing a blog about the experiences and hardships I’ve encountered would be a lifeboat and a perfect guide for me in the future when, again, I feel lost in my mind. So here I am, using a simple template I found on GitHub (I’m not hardcore enough to code a site from scratch), writing these first lines.
I’m not sure who will read this, and that’s okay. This space is for me—to explore ideas that catch my eye, from math and physics to philosophy and what to eat for dinner. To document a learning journey. To welcome the random and the yet-unknown.
A blog, it turns out, can begin with being late, a conversation, and a friend who wouldn’t let a good idea stay quiet.
…Two years later…
A lot has changed since that conversation. I’m no longer a freshman—I’m a third-year undergraduate now, with a clearer mind and a stronger mental model, and, most importantly, I’ve figured out what I want to do and research.
Surviving dozens of projects across many subjects from my first year through my second year, I realized that my formal coursework hasn’t led me to the technical frontiers I find most compelling. While my school’s curriculum hasn’t yet dived deep into the AI and deep learning topics that spark my curiosity, I haven’t waited. For over a year now, starting sometime in the second semester of my second year at university, I’ve been on a self-directed journey down the rabbit holes of reinforcement learning and deep learning foundations. There are a few reasons why deep learning caught my attention at this time:
- First, it leans heavily on derivatives, integrals, and everything else born from calculus and mathematical analysis—the tools I handled a thousand times while preparing for the National Physics Olympiad. I slept with them every night and woke up with them every morning.
- Second, it draws inspiration from nature: deep learning borrows the neural architecture inspired by the biological structure of the human brain, while reinforcement learning embraces the idea of trial and error in the learning process of nearly every form of life on Earth, including us humans.
- And finally, I just wanted to escape my dull daily routine at the time—to break away from the normal school workload and find something out there I felt passionate about, something so absorbing that boredom never stood a chance.
After a long stretch of teaching myself the foundations of deep learning and reinforcement learning, I found something that, once again, captivated my mind with its beauty and mystery: the generative power of diffusion models. My background is in physics, so whenever deep learning borrows an idea from physics, it instantly has my attention. I still remember studying the dynamics of particles in physical systems: heat conduction between particles in the same material (say, a long tube with one cold end and one hot end, where heat gradually flows from hot to cold); viscosity, the internal friction between layers of a flowing fluid, where layers slide against each other, slowing some and speeding up others; and the last one, which you might have guessed by now, diffusion. In that process, particles scatter throughout a container or environment, increasing the system’s entropy until the particle distribution is nearly uniform everywhere. Somehow, computer scientists have successfully carried the idea of diffusion into generative modeling, creating state-of-the-art image generators—and the approach is now spreading to other generation tasks, from text in natural language processing to trajectories and rewards in robot learning.
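The physical picture above carries over almost verbatim to the “forward process” of denoising diffusion models: data is gradually noised until it is indistinguishable from a Gaussian, and a network is trained to reverse that walk. As a rough sketch of just the forward half (not from this post; the linear noise schedule and the closed-form sample of q(x_t | x_0) follow the standard DDPM formulation, with constants chosen purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear beta (noise) schedule over T steps; these constants are the common
# DDPM defaults, used here only as an illustrative assumption.
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)  # cumulative signal-retention factor

def q_sample(x0, t):
    """Sample x_t ~ q(x_t | x_0): the clean signal x0 after t noising steps."""
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * noise

# A toy 1-D "image": a sharp spike of signal.
x0 = np.zeros(64)
x0[30:34] = 1.0

early = q_sample(x0, 10)    # mostly signal: sqrt(alpha_bar_10) is close to 1
late = q_sample(x0, T - 1)  # mostly noise: sqrt(alpha_bar_999) is close to 0

# As t grows, the signal coefficient shrinks toward zero, so x_t approaches an
# isotropic Gaussian -- the analogue of particles spreading until their
# distribution is (statistically) the same everywhere.
```

The entropy-increasing spread of particles corresponds to the clean data slowly dissolving into noise; a generative diffusion model earns its keep by learning to run this process backwards.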
This process of teaching myself complex topics has taught me something crucial: the best way to solidify your understanding is to articulate it clearly for someone else. This self-teaching idea is easy to find online, and a couple of names come up whenever it’s discussed:
- Rubber duck debugging: Most of us undergraduates in computer science or information technology (or whatever we call ourselves) have heard of this one. It says the best way to debug a piece of code is to grab a rubber duck—innocent and adorable—and teach it how the code works. Every detail and component must be articulated clearly enough for a cute thing like a rubber duck to understand. More often than not, within a minute or two you figure out where the bug is and eliminate it.
- The Feynman Technique: Named after Richard Feynman, one of the most celebrated physicists in history for his contributions to quantum electrodynamics (QED). He is often called the greatest teacher, explainer, and science communicator of his time, thanks to his curiosity, sense of humor, unparalleled understanding of the physical world, and pedagogical talent. The technique is the same in spirit as rubber duck debugging: take an idea you’ve just learned and articulate it in its simplest form, so that even a five-year-old could understand it. Given how notoriously difficult QED is—hard enough that most of us will never fully grasp it—I strongly believe this technique was a cornerstone of his own self-teaching.
Ultimately, you don’t truly know a concept until you can explain it. That’s the first reason I’m here now.
This blog, then, is my new tool for thinking. It’s a place where I plan to break down the ideas I’m wrestling with—from the multimodal action spaces of diffusion policies to the exploration-exploitation trade-off in deep reinforcement learning. By writing, I aim to crystallize my own knowledge. And by sharing, I hope to connect with others walking a similar, self-guided path. Speaking of learning, I hold a somewhat unusual opinion: only the knowledge we strive for drives us to the top and stays with us for the rest of our days; anything bestowed upon us that we take for granted will never be a true companion, and will soon be replaced or abandoned. If my notes and explanations can help even one fellow learner, that would be a wonderful bonus.
But my interests aren’t confined to neural networks and loss functions. I also want to explore a different kind of model here: models of thought. I have a long-standing interest in philosophy, particularly epistemology (how we know what we know—the philosophy of knowledge, mind, wisdom, and language) and metaphysics (the nature of reality, existence, objective and subjective matter, etc.). Sometimes, the questions in AI and the questions in philosophy feel surprisingly adjacent. What does it mean for a machine to “learn”? What is the nature of the “world model” an agent builds? What actually is consciousness, a concept we talk about as if we knew it, but which is ready to be replaced or redefined at any time? Writing will allow me to explore these parallels.
So, consider this blog a fusion of my two intellectual tracks. One is technical, focused on understanding and explaining the mechanisms of modern AI. The other is philosophical, focused on questioning the foundations of knowledge and existence that these technologies touch upon.
In the end, his long campaign to persuade me to write a blog was more successful than he expected—his advice simply arrived before I was ready to hear it. This space is my commitment to the process of learning, thinking, and sharing. I’m not sure exactly where it will lead, but I’m excited to find out.
Welcome.