Blog Posts
I've always wanted to write a blog but never convinced myself I was capable of it. These days, though, I honestly use LLMs to quickly turn my thoughts into words - at least something gets written! So non-technical posts are co-authored by some model :)
Filter by topics:
We Won START Hack 2026 with AuditChain: Fuzzy Logic, Metaflow, and an Audit-Ready Procurement Agent
How our team won START Hack 2026 by building AuditChain, an autonomous procurement decision engine that keeps the LLM on parsing and explanation duty while the actual decision path stays deterministic, auditable, and boring in exactly the right places.
The post covers the real architecture from the hackathon solution: FastAPI, React, Metaflow observability, Azure-facing deployment proxies, fuzzy handling of borderline procurement cases, and why this problem is much more about decision discipline than flashy AI demos.
Also included: the prize-stage photos, the officially nonchalant photo, and a defense of fuzzy logic that survived both a hackathon jury and my own skepticism.
I Built a Free AI Newsletter That Learns What I Like
The best things I've ever learned came from friends doing stuff and telling me about it. So I built a "friend" that reads arXiv every night and explains the interesting stuff like an excited colleague.
Broletter is a fully automated, personalized daily science newsletter. It runs on my laptop at 11 PM, costs $0/day, and delivers ~7 minutes of curated science to Telegram each morning — with a gentle feedback loop that adapts to my interests without becoming an echo chamber.
Built with Gemini 2.5 Flash, arXiv API, and Telegram. Fully open source — fork it and make your own.
Optiver Career Kickstarter — Technical Interview Prep Guide
Structured prep guide for Optiver's 75-min technical interview covering code review, deployment planning, CS fundamentals (memory, concurrency, architecture), behavioral STAR stories, and smart questions to ask.
Covers everything from cache locality and lock-free data structures to order book implementation and branch prediction — tailored for low-latency trading systems engineering.
1-Day Networking Interview Prep - Interactive Learning Platform
🚀 Need to prep for a networking interview fast? I built an interactive learning platform covering TCP, BGP, routing protocols, ARP, DNS, DHCP, NAT, IPv4/IPv6, and Linux troubleshooting.
Features include:
- 6 comprehensive modules with hands-on practice
- 30+ W3Schools-style interactive exercises with real-time validation
- Inline exercises throughout - test your understanding as you learn
- Linux command toolkit organized by troubleshooting questions
- Mock interview scenarios and rapid-fire Q&A
Perfect for anyone preparing for networking roles at tech companies. Learn by doing - zero time wasted on theory you won't use!
Should Machines Learn Only from Failures? Exploring the Pomodoro Technique
📝 A summary of a pseudo-realistic scientific paper exploring an interesting concept: what if machines could learn from their successes, not just their failures?
In supervised learning, we typically focus on minimizing errors. But what happens when our model predicts something correctly? Standard learning doesn't update anything. This paper explores Pomodoro Learning, a two-phase training cycle inspired by alternating work and reflection, using Bayesian uncertainty estimation to learn from confident predictions without overfitting...
Papers: A Platform for Real Research Sharing
I'll begin by acknowledging that I could be wrong, and perhaps even highly likely to be, but this is simply how I perceive things.
Research, at its core, represents one of the greatest forms of human expression and collaboration. I respect it deeply, but to be honest, it no longer feels as exhilarating as it once did.
What I truly want is to collaborate, plain and simple. I see the potential for a platform dedicated to genuine research collaboration — where people can comment on papers, highlight problems, run experiments, and improve work together...
Fundamentals of Inference: Mathematical Foundations
🔢 Diving deep into the mathematical foundations that make probabilistic inference possible. This post explores why high-dimensional spaces are so challenging and how Gaussian distributions provide elegant solutions.
From the curse of dimensionality to the beauty of conjugate priors - understanding the theoretical underpinnings of modern AI methods requires grappling with some fundamental mathematical concepts.
Exploring concepts like concentration of measure, the advantages of Gaussian assumptions, and why certain mathematical structures make inference tractable in high-dimensional spaces...
PAI Course Notes: Probabilistic AI and Uncertainty
🧠 Welcome to my deep dive into Probabilistic Artificial Intelligence! This course is fundamentally changing how I think about machine learning and AI systems.
It's not just about making machines smart, but about making them humble: systems that know what they don't know, and act cautiously when uncertainty is high.
From Bayesian linear regression to Gaussian processes, from active learning to reinforcement learning - exploring how intelligent agents can reason about uncertainty and make better decisions when the stakes are high...
Big Data Course Notes: Rebuilding the Tech Stack for Scale
💽 Welcome to my journey through ETH's Big Data course! This is where theory meets reality, and where I'm learning that handling petabytes of data requires fundamentally rethinking everything.
The core challenge? "We will have to rebuild the entire technology stack, bottom to top, with those same concepts from the past decades, but on a cluster of machines rather than on a single machine."
From the three Vs (Volume, Variety, Velocity) to HDFS and MapReduce, this post covers the foundations of distributed data systems and why we need to completely rethink traditional databases...
Competitive Programming Notes: From Self-Doubt to Understanding
"Competitive Programming is a Bad Word" - that's how I started these notes. My self-esteem had decreased by 69% after the first AlgoLab lesson at ETH 😅
This post is my raw, honest journey through the prerequisites for AlgoLab: graphs, algorithms, and data structures. It's filled with code examples, performance tips learned the hard way, and the kind of notes I wish I had when starting out.
From DFS and BFS to Dijkstra's algorithm and the pain of getting TLE errors - it's all here, explained in a way that doesn't make you feel stupid.
How I Survived University: Study Tips & Life Lessons
Hey everyone! 👋 So you want to know how I made it through my Computer Engineering degree? Well, grab a coffee and let me share some real talk about university life...
The biggest game changer? Finding the right study partner. I was incredibly lucky to find an amazing friend who became my study buddy (and eventually my girlfriend! 😉). We were perfectly complementary - where I had intuition, she had perseverance; where she was a perfectionist, I brought different perspectives.
But there's so much more to share about study techniques, the magic of vacation planning during exam season, and those study materials I created that helped tons of students...