Created: 2026-02-09 Updated: 2026-05-03

This is my 40-day devlog:

// just map the error into your own type with a closure
blah_blah.map_err(|e| MyError::from(e))?

for handling an option:

// providing a fallback value
value.unwrap_or("default")
// (closure computes only if None)
value.unwrap_or_else(|| compute_default())
// convert option -> result so you can use ? 
// and propagate the error upwards
value.ok_or_else(|| MyError::NotFound)?

days like this I miss golang a lot. I had no problem with err != nil, but one annoying part of go is that when you encounter an error you still have to return the corresponding empty value along with it, and I hated that. in rust I feel returns are handled better. eg:

func a() (s string, err error) {
    result, err := somethingThatCanFail()
    if err != nil {
        // You're forced to return a zero value alongside the error.
        // For a string it's "", for a struct it's annoying to find the
        // definition and construct an empty one.
        // You could return nil for pointer types instead, but then
        // every caller has to handle the nil or risk a panic.
        return "", err
    }
    return result, nil
}

but in rust:

fn a() -> Result<String, SomeError> {
    // ? propagates the error and returns early on Err, so no
    // empty value is needed.
    // ? is just shorthand for a match: on Ok it unwraps the
    // value, on Err it returns early, propagating the error
    // upwards (converting it with From along the way)
    let result = something_that_can_fail()?;
    Ok(result)
}
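
to make that concrete, here is roughly what ? desugars to (a sketch; the real expansion also runs the error through From::from to convert the error type):

fn a_desugared() -> Result<String, SomeError> {
    // roughly what `something_that_can_fail()?` expands to
    let result = match something_that_can_fail() {
        Ok(v) => v,
        // `?` would also convert the error via From::from here
        Err(e) => return Err(e),
    };
    Ok(result)
}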

day 11

continued my epub internals exploration. my software is slowly shaping up. One goal I had in mind was to embrace the language’s functional patterns. I usually avoid functional patterns and do the grunt work instead, mainly because I was too lazy to learn and adapt to them. so today, this was the scenario I had to code:

let mut r = Vec::new(); // initialize an empty container

for item in items {
    // do a bunch of transformations
    let transformed = transform(item);
    // append the end result of the transformation into r
    r.push(transformed);
}
return r;

I knew there existed some functional pattern I could leverage. did some digging and found this:

let r = iterator.map(|item| {
    // perform whatever
}).collect::<SomeType>();

for example:

items.iter().map(|item| -> Result<SomeStruct, Error> {
    Ok(SomeStruct {})
}).collect::<Result<Vec<_>, _>>()

this is a super cool pattern I learned today. collect saves a lot of manual typing and makes the code look cleaner, but I still feel it adds some cognitive strain compared to straightforward looping and collecting, despite the extra lines. this is just my personal bias: for me the number one metric of code quality is the cognitive load on whoever reads the code. just optimizing for cognitive load solves all the other code quality problems for me, like naming, clean code… whatever! I also learnt what _ does: it’s a placeholder for a concrete type that rust can already infer from context at compile time.

I had a few rust CRUDs written, so I quickly vibecoded a very basic frontend and asked claude to connect it to my handwritten backend. I want to learn typescript as well, but for now I am focusing purely on improving my rust skills, so all my rust code is handwritten and the frontend stays as basic vibecoded typescript. mindboggling how claude is able to one-shot a beautiful frontend which I could never build without an LLM’s help. one problem I have with vibecoding typescript is that I am not literate enough in typescript to judge the code it generates, so eventually I’m thinking of quickly going through the typescript docs (I already know a little react) to understand the generated code better, because vibecoding blind feels very uneasy to me.

day 12

with the gained knowledge of epub internals I finalised that I need to extract the spine out of the epub to get the chapters, and wrote the spine extraction using the epub crate, which fortunately works out of the box. today I learnt the difference between iter() and into_iter(). assume you are returning a struct after a conversion:

// (wrong) iter() only borrows, so `item` is a &T here
Ok(some_container.iter().map(|item| SomeStruct {
    // trying to move an owned field out of a shared reference fails
    name: item.name,
}).collect::<Vec<_>>())

this is actually wrong because iter() borrows instead of moving, so inside the closure we only have a reference, and we can’t move owned data out of it to return. in cases like these we can change ownership using into_iter(), which consumes the container and hands us each element by value, so we can move its fields into the new struct and return it.
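
a minimal sketch of the corrected version (SomeStruct and the field name are placeholders):

// into_iter() consumes the container, so `item` is owned here
Ok(some_container.into_iter().map(|item| SomeStruct {
    // moving a field out of an owned value is fine
    name: item.name,
}).collect::<Vec<_>>())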

day 13

Since I am vibecoding the TypeScript frontend, I can’t trust the code the AI writes. I need to validate and sanitize every single thing that reaches the backend, right at the backend entrypoint (the handler). There are many ways to skin this cat, but I felt we could do type-driven validation and sanitization by using constructors and Rust’s visibility rules to enforce validation, defining custom types (tuple structs or whatever). I started to use this pattern extensively in model.rs:

pub struct StructA(usize);

impl StructA {
    pub fn parse(v: usize) -> Result<Self, ApplicationError> {
        if v > 10_000 {
            // reject out-of-range input with a validation error
            return Err(ApplicationError { /* ... */ });
        }
        Ok(Self(v))
    }
    pub fn get(&self) -> usize {
        self.0
    }
}

pub struct StructBRequest {
    file_id: uuid::Uuid,
    spine_idx: StructA,
}

impl StructBRequest {
    pub fn validate(raw: StructBRaw) -> Result<Self, ApplicationError> {
        Ok(Self {
            file_id: raw.file_id,
            spine_idx: StructA::parse(raw.spine_idx)?,
        })
    }
    pub fn file_id(&self) -> uuid::Uuid {
        self.file_id
    }
    pub fn spine_idx(&self) -> usize {
        self.spine_idx.get() 
    }
}
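
(StructBRaw itself isn’t shown above; a minimal sketch of what it might look like, with the field types assumed from the validated struct and the serde derive assumed since it arrives over the wire:)

// raw, unvalidated input exactly as it arrives from the UI
#[derive(serde::Deserialize)]
pub struct StructBRaw {
    pub file_id: uuid::Uuid,
    pub spine_idx: usize,
}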

take a look at this code: we have 3 structs here. StructA is a tuple struct; StructBRequest stores the validated and sanitized input; StructBRaw is the raw input from the UI before validation. StructA is public, but the usize inside it is not, so other modules can hold a StructA object without being able to touch the value inside it. similarly, even though StructBRequest as a whole is public, thanks to Rust’s visibility rules file_id and spine_idx are private: other modules cannot access these values directly and have no choice but to call file_id() and spine_idx(), which only exist on a value produced by validate(). let’s name the module which calls this struct call.rs:

// import the structures from model.rs
// assume this handler is called by the ui
fn handler(raw: StructBRaw) -> Result<(), ApplicationError> {
    let req = StructBRequest::validate(raw)?;
    // req.file_id would not compile because the field is private,
    // so the getters are the only way to read the values
    let file_id = req.file_id();
    // note: req.file_id() is just method-call syntax for
    // StructBRequest::file_id(&req)
    let spine_idx = req.spine_idx();
    // ... use file_id and spine_idx ...
    Ok(())
}

by doing this you have no choice but to validate. forcing validation is the key here: we are not giving an option to validate or skip; the design leaves no way to proceed other than validating the incoming struct.

day 14

today I sat and did a bunch of TypeScript vibecoding. I was told frontend is a solved problem for AI, but it looks like it’s 95% solved and I have to drive the last 5%, else the models drink a crazy amount of tokens. nevertheless I am mind-blown by the kind of screens these models can generate. without LLMs it would have taken me at least a couple of weeks to build from scratch. I am still worried about the amount of TypeScript code it generated; I am very sure that if I wrote it myself it would be a lot less. but the functionality works, and I guard, validate and sanitize everything at the backend entry, so I am fine with it for now because the frontend is not my priority. I went through a couple of good quality OSS TypeScript projects, learnt how their files are structured and quickly restructured my code. we are 90% of the way to finishing the application.

day 15

I am done with v0 of the RSVP ebook screen reader, so today I spent my time figuring out how to cross-compile and distribute the Tauri application. one disadvantage of Tauri compared to Electron is that you can’t cross-compile for a different OS from one machine. this is a direct consequence of Tauri’s biggest advantage: unlike Electron, Tauri uses the OS’s native WebView (WKWebView on macOS, WebView2 on Windows, and WebKitGTK on Linux), so no browser engine is bundled into the binary. Electron, on the other hand, ships Chromium and the Node.js runtime with every app, making Electron apps significantly heavier. since the WebView is OS-native, I have no choice but to use separate runners (Linux, Mac, and Windows) to compile the application natively on each OS and release those builds. thanks to free GitHub runners for public repos, I wrote a simple GitHub Actions workflow that, on pushing a new tag, automatically compiles everything and creates a draft release. I also wrote a simple Makefile to trigger all of this, because I am not sure when I will get back to this project; having a Makefile with all the commands tied to it makes it much easier to pick things up and run deployments after months away. final binary size is 4ish MB. successfully released v0. here is a demo video

day 16

I use a Debian stable machine and everything works fine for me, but I want to test on other devices, so I asked a friend who uses a Mac to test the beta version. he is my go-to person every time I build something; since he is semi-technical, he uses whatever I build purely from an end-user perspective. he gave me a bunch of UX feedback, asked for a couple of usability features, and found a couple of bugs. I was very surprised these frontend-specific bugs existed because I had tested those scenarios while building the features; the LLM must have broken them while making later UI changes. this was a revelation for me: from now on, if I ever vibecode a frontend, I will definitely set up UI testing with something like playwright. for now I fixed all the bugs, added the UX features he asked for and redeployed. I built this for myself, so I will be using it every day to read ebooks, and whenever I need something I’ll quickly add it in. I’ve been asked to build something in golang for a while now, so onto the next one :)

day 17

Started to learn how to build a workflow orchestration state machine. I sat down, worked out a design that scales to the kind of load I have to handle, and started building it. The idea is an internal platform any team can use to configure a custom workflow through a UI, wiring up arbitrary tasks. In this kind of architecture tasks can take hours to finish, so it has to be asynchronous as well as event-driven. I was able to complete the design today.

day 18

Built a small POC of yesterday’s design to see if I missed something. I built a simple UI where you configure workflows and connect them together, and out comes a JSON describing all the transitions, which endpoint to trigger in each state, state conditionals, the start state, the end state, etc. I then converted the JSON into a directed acyclic graph and validated the graph (see the sketch below). There are two queues: an activity queue and a workflow queue. The moment a state is triggered we put it on the activity queue and go do something else or sleep. A pool of workers listens to the queue, and we can auto-scale the workers by watching the queue depth and adding workers past a certain threshold. A worker’s job is to pick up an activity and, in our case, fire an HTTP request to a particular URL with retries. Once a task is done we put a message back on the workflow queue, which acts like a webhook to wake the workflow up again, saying the task completed or failed, and we show that in the UI. This is what I implemented today.
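
as a sketch of that validation step, this is roughly how you can check that the parsed transitions actually form a DAG using Kahn’s algorithm (the state/transition shapes here are hypothetical, not my actual JSON schema):

use std::collections::HashMap;

// states and (from, to) transitions as parsed from the workflow JSON
fn validate_dag(states: &[String], transitions: &[(String, String)]) -> Result<(), String> {
    // count incoming edges per state
    let mut indegree: HashMap<&str, usize> =
        states.iter().map(|s| (s.as_str(), 0)).collect();
    for (_, to) in transitions {
        *indegree.get_mut(to.as_str()).ok_or("transition to unknown state")? += 1;
    }
    // start from states with no incoming edges
    let mut ready: Vec<&str> = indegree
        .iter()
        .filter(|(_, deg)| **deg == 0)
        .map(|(s, _)| *s)
        .collect();
    let mut visited = 0;
    // repeatedly remove ready states; states on a cycle never become ready
    while let Some(state) = ready.pop() {
        visited += 1;
        for (from, to) in transitions {
            if from.as_str() == state {
                let deg = indegree.get_mut(to.as_str()).unwrap();
                *deg -= 1;
                if *deg == 0 {
                    ready.push(to.as_str());
                }
            }
        }
    }
    if visited == states.len() {
        Ok(())
    } else {
        Err("workflow graph has a cycle".to_string())
    }
}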

day 19

After a bit more research I got to know today that there is a workflow orchestration primitive called Temporal, which has almost the same architecture I designed and built yesterday. So I went through the docs and recreated the exact same design with Temporal primitives: Temporal ships the orchestrator, the queues and the worker implementation, and it has a concept called an activity, which is the task we want to perform. I combined all those pieces and hooked the workflow UI I built yesterday up to my Temporal implementation.

day 20

Implementing this state machine made me more interested in event-driven architecture. I know a little Kafka, but I want to know the internals and how Kafka is built and designed. So I started reading the book Kafka: The Definitive Guide, beginning with understanding what a log is from the blog post “The Log”.

day 21

Started with understanding Kafka terminology: topics, partitions, consumer groups, brokers. Learned how data is written to the various partitions (by hashing the message key; sketch below). If the message queue is the main source of truth then replication is important, so I went ahead and learned what a replication factor is, how to set one, and how to configure producers, consumers and consumer groups. After learning all of this, I quickly built a version in Go and played around with different configurations.
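
the partitioning idea in a minimal sketch (kafka’s default partitioner actually uses murmur2 on the key bytes; std’s DefaultHasher is just a stand-in here):

use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// pick a partition for a keyed message: hash(key) % partition_count
fn partition_for(key: &str, partition_count: u64) -> u64 {
    let mut h = DefaultHasher::new();
    key.hash(&mut h);
    h.finish() % partition_count
}

this is also why all messages with the same key land in the same partition, which is what gives you per-key ordering.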

day 22

Batch commits and offset guarantees are very critical in a message queue. Kafka offers three delivery semantics: at-least-once, at-most-once and exactly-once. Each mode commits offsets at a different point: at-least-once commits after processing, at-most-once commits before processing, and so on. Each has its own drawback, and if we know how commits are configured we can work around them while consuming. I then implemented a version of commits and offset guarantees to see how the data changes when the offset changes and Kafka restarts.
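
here is the at-least-once shape in a sketch using the rust rdkafka crate (my experiments were in Go; the broker address, topic and handler here are placeholders):

use rdkafka::config::ClientConfig;
use rdkafka::consumer::{CommitMode, Consumer, StreamConsumer};
use rdkafka::Message;

fn process(_payload: Option<&[u8]>) { /* hypothetical handler */ }

// at-least-once: disable auto-commit, process first, commit after.
// a crash between process() and the commit re-delivers the message,
// so processing must be idempotent
async fn consume_at_least_once() -> Result<(), Box<dyn std::error::Error>> {
    let consumer: StreamConsumer = ClientConfig::new()
        .set("bootstrap.servers", "localhost:9092")
        .set("group.id", "demo-group")
        .set("enable.auto.commit", "false")
        .create()?;
    consumer.subscribe(&["demo-topic"])?;
    loop {
        let msg = consumer.recv().await?;
        process(msg.payload());
        // committing *after* processing makes this at-least-once;
        // committing *before* processing would make it at-most-once
        consumer.commit_message(&msg, CommitMode::Async)?;
    }
}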

day 23

Reliable data delivery is key because in most designs the events written into Kafka drive the whole architecture. Kafka provides different acknowledgment levels: acks=0, acks=1 and acks=all, each with its own reliability/latency tradeoff, and understanding them is crucial for picking what suits the use case. In a typical use case acks=all works well: the leader waits until all in-sync replicas have the message before sending back an acknowledgment or an error, which behaves like strong consistency.
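
a sketch of what that looks like on the producer side, again with the rust rdkafka crate (broker, topic, key and payload are placeholders):

use std::time::Duration;
use rdkafka::config::ClientConfig;
use rdkafka::producer::{FutureProducer, FutureRecord};

// acks=all: the leader replies only after all in-sync replicas have
// the message, trading latency for durability
async fn produce_durably() -> Result<(), Box<dyn std::error::Error>> {
    let producer: FutureProducer = ClientConfig::new()
        .set("bootstrap.servers", "localhost:9092")
        .set("acks", "all")
        .create()?;
    producer
        .send(
            FutureRecord::to("demo-topic").key("user-42").payload("hello"),
            Duration::from_secs(5),
        )
        .await
        .map_err(|(err, _msg)| err)?;
    Ok(())
}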

day 24

There are a lot of design patterns that are very common in event-driven architectures, and I started exploring their use cases and tradeoffs. There is a concept in distributed systems called two-phase commit, and it took me some time to understand why 2PC is a bad idea for coupling a DB and a message queue: 2PC prioritizes consistency over availability. During the prepare phase all participants hold locks, and if the coordinator crashes, all participants are blocked indefinitely, which makes the whole system tightly coupled and goes against the idea of microservices. I then intuitively arrived at a solution and found out it’s called the transactional outbox. After implementing it from scratch I understood this pattern has a few drawbacks too, like the same message being sent again after a crash, or a developer forgetting to write to the outbox table alongside the other tables, etc.
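
a minimal sketch of the outbox write path, assuming Postgres via the sqlx crate and hypothetical orders/outbox tables; the point is that both inserts share one local transaction, and a separate relay process later publishes unsent outbox rows to Kafka:

use sqlx::PgPool;

// the business row and the outbox row commit atomically, no 2PC needed;
// a relay polls the outbox table and publishes rows to kafka afterwards
async fn create_order(pool: &PgPool, order_id: i64, event: &str) -> Result<(), sqlx::Error> {
    let mut tx = pool.begin().await?;
    sqlx::query("INSERT INTO orders (id, payload) VALUES ($1, $2)")
        .bind(order_id)
        .bind(event)
        .execute(&mut *tx)
        .await?;
    sqlx::query("INSERT INTO outbox (aggregate_id, event) VALUES ($1, $2)")
        .bind(order_id)
        .bind(event)
        .execute(&mut *tx)
        .await?;
    tx.commit().await?;
    Ok(())
}

the relay can still crash after publishing but before marking a row as sent, which is exactly the duplicate-delivery drawback above, so consumers need to be idempotent.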

day 25

Learned about event sourcing today. Event sourcing is very closely related to the state machine we were implementing: if we store every single event in Kafka in chronological order, we get an event log that is a comprehensive audit of exactly what the system did. With it we can rebuild the system’s state from scratch, or revert to an old state by rewinding the log and replaying the events in order. Doing this means Kafka effectively becomes the source of truth over the DB: Kafka has the real record of what happened and in what order, while the DB only has the end result, with no way to see what happened along the way in order to find and fix errors or revert. It also means we have to worry about replicating the Kafka cluster, because it is now the source of truth.
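
the core idea in a tiny sketch (the account/event types are made up): current state is just a left-fold over the event log, and replaying a prefix of the log rewinds you to any past state:

#[derive(Default, Debug)]
struct Account {
    balance: i64,
}

// events are facts that already happened, stored in order
enum Event {
    Deposited(i64),
    Withdrew(i64),
}

// rebuild state by replaying events from the start of the log
fn replay(events: &[Event]) -> Account {
    events.iter().fold(Account::default(), |mut acc, e| {
        match e {
            Event::Deposited(n) => acc.balance += n,
            Event::Withdrew(n) => acc.balance -= n,
        }
        acc
    })
}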

day 26

Command Query Responsibility Segregation (CQRS) is a very powerful pattern when reads are much heavier than writes, or vice versa. Understood the difference between events, queries and commands, and implemented a simple version of it.
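
the vocabulary as types, in a tiny hypothetical sketch: commands ask to change state, queries only read (ideally from a separate read-optimized model), and events record what already happened:

// write side: a request to change state; it can be rejected
enum Command {
    Deposit { account: u64, amount: i64 },
}

// read side: never mutates anything, served from a read model
enum Query {
    Balance { account: u64 },
}

// the fact produced when a command succeeds; read models subscribe to these
enum Event {
    Deposited { account: u64, amount: i64 },
}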

day 27

Too much noise on the applied AI side, so today, since it was a leisure day for me, I deep-dived into agents, MCP, skills and tool calls. Looks like in the end all of it is a bunch of text prompts that go to the LLM xd. Since I worked it out from first principles, I thought it would help others, so I recorded and uploaded a 50-min YouTube video explaining the same.

day 28

Today I got an opportunity to learn and implement a solution with DSPy. I used DSPy for programmatic prompting along with Google’s Agent Development Kit for the agent loop and other agent support. I also used GEPA to try to optimize the naive prompts I fed in via DSPy.

day 29

Had a search problem I needed to solve at work. I have exposure to phonetic search algorithms like Metaphone and Double Metaphone. I explored keyword search algorithms like TF-IDF, understood the math behind them and their drawbacks, and then learned about the BM25 algorithm that powers Lucene. Then I went on to dense embedding approaches and how to fetch data with cosine similarity. Finally I read an OpenSearch blog on the reciprocal rank fusion algorithm for reranking and combining hybrid retrieval, which I can use to combine BM25 keyword search with semantic search.
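
RRF itself is tiny: each document scores sum(1 / (k + rank)) across the ranked lists it appears in, with k = 60 as the usual constant. a sketch (doc ids are placeholders):

use std::collections::HashMap;

// fuse several ranked lists of doc ids into one reranked list
fn rrf(rankings: &[Vec<&str>], k: f64) -> Vec<(String, f64)> {
    let mut scores: HashMap<&str, f64> = HashMap::new();
    for ranking in rankings {
        for (i, doc) in ranking.iter().enumerate() {
            // ranks are 1-based in the formula
            *scores.entry(*doc).or_insert(0.0) += 1.0 / (k + (i + 1) as f64);
        }
    }
    let mut fused: Vec<(String, f64)> = scores
        .into_iter()
        .map(|(doc, score)| (doc.to_string(), score))
        .collect();
    fused.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    fused
}

// usage: rrf(&[bm25_top_ids, semantic_top_ids], 60.0)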

day 30

I was implementing dense embeddings from what I learned yesterday, but they were still not good enough at capturing the semantic context around a sentence; within a sentence dense embeddings work well, but I want to capture the semantic similarity between the question and the available sources. So I learned about cross-encoder ranking. Assume we have 1 question and 10 sentences: we feed each (question, sentence) pair through a cross encoder, which gives us a relevance score to rank by. The difference between a dense embedding (bi-encoder) and a cross encoder is that the bi-encoder encodes the question and each sentence independently and only compares the vectors afterwards, while the cross encoder reads the question and the candidate sentence together, so the attention mechanism can attend across both and make a better judgment of relevance. Went through how it is implemented (full cross attention, attention mapping, the CLS token, the SEP token, etc.)