<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/">
    <channel>
        <title>Ravi Atluri</title>
        <link>https://raviatluri.in</link>
        <description>Product Engineer at GoFood in Gojek. Working on scalable and reliable systems &amp; abstractions for product engineering teams.</description>
        <lastBuildDate>Sun, 15 Mar 2026 07:46:41 GMT</lastBuildDate>
        <docs>https://validator.w3.org/feed/docs/rss2.html</docs>
        <generator>https://github.com/jpmonette/feed</generator>
        <language>en</language>
        <image>
            <title>Ravi Atluri</title>
            <url>https://raviatluri.in/og.png</url>
            <link>https://raviatluri.in</link>
        </image>
        <copyright>All rights reserved 2026, Ravi Atluri</copyright>
        <item>
            <title><![CDATA[Redesigning the macOS On-Screen Keyboard]]></title>
            <link>https://raviatluri.in/articles/redesigning-macos-on-screen-keyboard</link>
            <guid isPermaLink="false">https://raviatluri.in/articles/redesigning-macos-on-screen-keyboard</guid>
            <pubDate>Sat, 28 Feb 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[How I redesigned the macOS on-screen keyboard with Claude.]]></description>
            <content:encoded><![CDATA[<p>I got bored of the macOS on-screen keyboard that I had been using for years. I had a brainwave and decided to redesign it. I knew nothing about SwiftUI or Swift, so I obviously got my partner-in-getting-shit-done to build it for me.</p><p>I have been using the <a href="/articles/my-mac-accessibility-setup">Mac's built-in Accessibility Keyboard</a> since 2022.</p><p><img src="https://raviatluri.in/images/my-mac-accessibility-setup/macos-accessibility-keyboard.png" alt="macOS Accessibility Keyboard" /></p><p>Last year, I built <a href="/articles/building-september">September</a>, a communication assistant for myself to talk to my son and wife. But by the time I switched to a browser, opened the app, and typed out the message, my 8-year-old son's attention would have left the room. Also, my son started taking advantage of the long pauses to auto-approve permissions for things I might need him to do. He would walk into my room, ask me something, and immediately loudly declare, "Mom, dad said okay! I can have 6 chocolates now."</p><p>You see where this is going.</p><p>So my big idea was to build everything - typing, my voice, notes, stories, etc. - all into one on-screen keyboard.</p><p>I used <a href="https://www.pencil.dev">Pencil</a>, a Claude-powered design tool, to try to give some shape to my ideas. After some back-and-forth, Claude came up with these designs.</p><p><img src="https://raviatluri.in/images/redesigning-macos-on-screen-keyboard/light-theme.png" alt="Light Theme" /></p><p><img src="https://raviatluri.in/images/redesigning-macos-on-screen-keyboard/dark-theme.png" alt="Dark Theme" /></p><p>You can read the full conversation <a href="/transcripts/redesigning-macos-on-screen-keyboard.html">here</a>. 
The transcript is generated by <a href="https://github.com/sonnes/chitragupt">Chitragupt</a>, yet another brainwave recently built by my ghost-coder.</p><p>We are living in incredible times where there are absolutely no barriers to building anything. If we can imagine it, AI can build it.</p><p>I can't type with my hands, I use a head-mounted mouse and every click takes 0.8 seconds. Yet, Claude allows me to build at incredible speed.</p><p>What's your excuse?
</p>]]></content:encoded>
            <category>accessibility</category>
            <enclosure url="https://raviatluri.in/images/redesigning-macos-on-screen-keyboard/light-theme.png" length="0" type="image/png"/>
        </item>
        <item>
            <title><![CDATA[Introducing xapi - Type-Safe HTTP APIs in Go]]></title>
            <link>https://raviatluri.in/articles/introducing-xapi</link>
            <guid isPermaLink="false">https://raviatluri.in/articles/introducing-xapi</guid>
            <pubDate>Wed, 15 Oct 2025 00:00:00 GMT</pubDate>
            <description><![CDATA[xapi is a lightweight Go library that brings type safety and simplicity to building HTTP APIs. It reduces boilerplate with generics, middleware, and optional capabilities.]]></description>
            <content:encoded><![CDATA[<p>Building HTTP APIs with Go's standard library means writing the same pattern repeatedly:</p><p><pre><code>func CreateUserHandler(w http.ResponseWriter, r *http.Request) {
    var req CreateUserRequest
    if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
        http.Error(w, "Invalid request", http.StatusBadRequest)
        return
    }
    defer r.Body.Close()

    // Extract additional data from request
    req.Language = r.Header.Get("Language")

    // Validate the request
    err := req.Validate()
    if err != nil {
        http.Error(w, err.Error(), http.StatusBadRequest)
        return
    }

    // Call the business logic
    user, err := createUser(r.Context(), &req)
    if err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }

    // Encode and write response
    w.Header().Set("Content-Type", "application/json")
    w.WriteHeader(http.StatusCreated)
    json.NewEncoder(w).Encode(user)
}
</code></pre></p><p>Your typical handler ends up being 30+ lines where only 3 lines are actual business logic.</p><p>Here's the thing: most of this repetition can be abstracted away.</p><p><h2>xapi</h2></p><p><a href="https://pkg.go.dev/github.com/gojekfarm/xtools/xapi"><strong>xapi</strong></a> (<a href="https://github.com/gojekfarm/xtools/tree/main/xapi">GitHub</a>) uses Go generics to turn HTTP handlers into typed functions. Your endpoint becomes: request type goes in, response type comes out. The repetition is taken care of, while still giving you the flexibility to customize the behavior.</p><p>This lightweight framework centers around a few ideas:</p><p><ul><li><strong>Typed endpoints</strong> that handle JSON decoding, validation, and encoding</li>
<li><strong>Optional interfaces</strong> for extraction, validation, status codes, and custom responses</li>
<li><strong>Standard middleware</strong> support without any special wrappers</li></ul></p><p><h2>What It Looks Like</h2></p><p>Here's the same user creation endpoint, but with xapi:</p><p><pre><code>type CreateUserRequest struct {
    Name  string `json:"name"`
    Email string `json:"email"`
}

func (req *CreateUserRequest) Validate() error {
    if req.Name == "" || req.Email == "" {
        return fmt.Errorf("name and email required")
    }
    return nil
}

type CreateUserResponse struct {
    ID    int    `json:"id"`
    Name  string `json:"name"`
    Email string `json:"email"`
}

func (res *CreateUserResponse) StatusCode() int {
    return http.StatusCreated
}

handler := xapi.EndpointFunc[CreateUserRequest, CreateUserResponse](
    func(ctx context.Context, req *CreateUserRequest) (*CreateUserResponse, error) {
        return &CreateUserResponse{
            ID:    1,
            Name:  req.Name,
            Email: req.Email,
        }, nil
    },
)

http.Handle("/users", xapi.NewEndpoint(handler).Handler())
</code></pre></p><p>Your handler is a function from request to response, just like a typical service or controller layer. <strong>xapi</strong> eliminates the HTTP handler layer.</p><p><h2>The Optional Interfaces</h2></p><p><strong>xapi</strong> defines four optional interfaces. Implement them on request and response types only when needed.</p><p><strong>Validator</strong> runs after JSON decoding. You can use any validation library here:</p><p><pre><code>func (req *CreateUserRequest) Validate() error {
    if req.Name == "" {
        return fmt.Errorf("name required")
    }
    return nil
}
</code></pre></p><p><strong>Extracter</strong> pulls data from the HTTP request that isn't in the JSON body, like HTTP headers, route path params, query strings:</p><p><pre><code>func (req *GetArticleRequest) Extract(r *http.Request) error {
    req.ID = r.PathValue("id")
    return nil
}
</code></pre></p><p><strong>StatusSetter</strong> controls the HTTP status code. Default is 200, but you can override it:</p><p><pre><code>func (res *CreateUserResponse) StatusCode() int {
    return http.StatusCreated
}
</code></pre></p><p><strong>RawWriter</strong> lets you bypass JSON encoding entirely. Use it for HTML or binary responses:</p><p><pre><code>func (res *ArticleResponse) Write(w http.ResponseWriter) error {
    w.Header().Set("Content-Type", "text/html")
    fmt.Fprintf(w, "<h1>%s</h1>", res.Title)
    return nil
}
</code></pre></p><p><h2>Middleware</h2></p><p>Middleware works exactly like standard <code>http.Handler</code> middleware. Any middleware you're already using will work:</p><p><pre><code>endpoint := xapi.NewEndpoint(
    handler,
    xapi.WithMiddleware(
        xapi.MiddlewareFunc(rateLimitMiddleware),
        xapi.MiddlewareFunc(authMiddleware),
    ),
)
</code></pre></p><p>Stack them in the order you need. They wrap the endpoint cleanly, keeping auth, logging, and metrics separate from your business logic.</p><p><h2>Error Handling</h2></p><p>Default behavior is a 500 with the error text. You can customize this:</p><p><pre><code>errorHandler := xapi.ErrorFunc(func(w http.ResponseWriter, err error) {
    w.Header().Set("Content-Type", "application/json")
    w.WriteHeader(http.StatusInternalServerError)
    json.NewEncoder(w).Encode(map[string]string{"error": err.Error()})
})

endpoint := xapi.NewEndpoint(handler, xapi.WithErrorHandler(errorHandler))
</code></pre></p><p>This allows proper error handling, letting you customize the error response, distinguish validation errors from auth failures, map them to appropriate status codes, and format them consistently.</p><p><h2>Why This Works</h2></p><p>Most HTTP handlers follow the same pattern. <strong>xapi</strong> codifies that pattern using generics, so you write less but get more type safety. Your request and response types define the API contract. The optional interfaces give you escape hatches when you need them.</p><p>The result: handlers that are mostly business logic, with HTTP operations abstracted away into a lightweight framework. You can use it with your existing HTTP router and server, keeping all existing middlewares and error handling.</p><p>If you're tired of writing the same HTTP plumbing in every endpoint, <a href="https://github.com/gojekfarm/xtools/tree/main/xapi"><strong>xapi</strong></a> might help.
</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Building September - A Communication Assistant]]></title>
            <link>https://raviatluri.in/articles/building-september</link>
            <guid isPermaLink="false">https://raviatluri.in/articles/building-september</guid>
            <pubDate>Tue, 09 Sep 2025 00:00:00 GMT</pubDate>
            <description><![CDATA[A communication assistant for people living with neurodegenerative conditions like ALS, MND, or other speech & motor difficulties.]]></description>
            <content:encoded><![CDATA[<p><a href="https://september.to" class="my-6 flex items-center justify-center gap-4">
  <img src="https://raviatluri.in/images/building-september/logo.png" width="64" height="64" alt="September logo" />
  <div class="bg-gradient-to-r from-amber-500 to-amber-600 bg-clip-text text-5xl font-bold text-transparent">
    september
  </div>
</a></p><p><a href="https://september.to">September</a> is a communication assistant for people living with neurodegenerative conditions like ALS
(Amyotrophic Lateral Sclerosis, also known as Lou Gehrig's disease), MND (Motor Neuron Disease), or
other speech & motor difficulties.</p><p><h2>Background</h2></p><p>I was diagnosed with ALS in 2019 and have gradually lost the ability to speak and type. As my speech became more slurred and my arms weakened, I continued working, coding, and using my computer by adapting to various combinations of dictation, one-handed typing, and an on-screen keyboard. I currently use an on-screen keyboard and a head-mounted mouse (a device that tracks head movement to control a computer cursor).</p><p><h3>Tools I've Used</h3></p><p><strong>GitHub Copilot</strong>: With the launch of GitHub Copilot in 2021 (an AI assistant that helps write code and text), I was able to continue coding and writing everything in VSCode (a popular text editor used by programmers). The smart text suggestions that learn from your writing were a game-changer. I was able to write e-mails, blog posts, long messages, work documents, and more, all within VSCode. I would copy-paste text to and from different platforms like Slack, Google Docs, Gmail, etc. While cumbersome, it was an effective way to type faster. I hoped that the Apple Mac's on-screen keyboard would somehow provide a similar experience.</p><p><strong>Voice Banking Challenges</strong>: As my speech slurred, I tried several voice banking apps. All of them required long recordings in quiet environments. It was hard to get a good recording with a 3-year-old in the house during lockdown.</p><p><strong>ElevenLabs Voice Cloning</strong>: In 2023, ElevenLabs launched their voice cloning technology. This was the first easy way to clone my voice without jumping through hoops. But by then, my speech was already slurred. So I dug through my old hard drives and videos to find old recordings of me speaking. I used them to assemble a recording long enough to clone my voice. 
It wasn't perfect, but it was better than everything else.</p><p>With the rapid evolution of AI technologies in coding, writing, and speech, I set out to build an application that combines all these different advancements to make communication easier.</p><p><h2>Daily Communication Needs</h2></p><p>All these smart AI tools changed the way I wrote code and worked remotely, but daily communication remained challenging.</p><p>The most common thing I do is talk to people around me. It could be for simple things like asking for water, or saying what I'd like to eat for lunch. Sometimes, it could be video calls with my family and friends. Or just telling my wife about how my day went.</p><p>Most often, I would compose messages in notes or WhatsApp and have people read them while talking to me. Even with voice cloning services, I still need to type out full sentences every time. These tools don't offer auto-complete or suggestions based on previous conversations. Nor could I use my cloned voice with FaceTime, Zoom, or other video calling apps.</p><p>With a head-mouse or eye-tracking device, every click is precious. When the vocabulary and sentences I use every day are mostly the same, I should not have to type out full phrases every time.</p><p>Beyond daily conversations, I also needed to share longer thoughts and stories. Whether explaining technical concepts to my team, recounting childhood memories to my son, or documenting ideas that might help others facing similar challenges, I found myself writing longer pieces of text. 
These weren't just messages—they were narratives that I wanted to share with my voice, not just text on a screen.</p><p><h2>The Product: September</h2></p><p>These experiences inspired me to build September—a communication tool that combines the AI technologies I've been using to make daily communication easier and more natural.</p><p><img src="https://raviatluri.in/images/building-september/talk-demo.png" alt="September Screenshot" /></p><p><h3>Smart Text Editor</h3></p><p>I wanted September to learn from how I actually talk, not just provide generic suggestions. The smart text editor uses my message history to provide instant auto-complete suggestions, just like GitHub Copilot did for my coding. It also uses AI (Gemini, Claude, etc.) to provide contextually relevant sentences, phrases, and words. The more I use it, the better it gets at understanding my writing style and providing more accurate suggestions.</p><p>I also wanted to be able to share my thoughts and stories with proper context. In every conversation, I can provide additional context in the form of notes, documents, images, videos, or links. September indexes all this information to help me "speak my mind" in conversations, whether I'm explaining technical concepts to my team or sharing childhood memories with my son.</p><p><h3>Text-to-Speech</h3></p><p>September integrates with multiple text-to-speech providers, giving me a choice of voices that suit my language, style, and personality.</p><p><h3>Voice Cloning</h3></p><p>The ElevenLabs voice cloning technology was a breakthrough for me. September makes it easy to record my own voice or use samples from existing audio or video files. I can also search and choose from a wide range of community and professional voices. 
This gives me the flexibility to use my own voice when I have good recordings, or find alternatives that work well for me.</p><p><h3>Transcription</h3></p><p>When others are talking to me, September needs to understand what they're saying quickly. September uses AI to transcribe audio in real-time and provides contextually relevant suggestions based on the conversation. This helps me respond more naturally and keep up with the flow of conversation.</p><p><h3>Multiple Keyboard Layouts</h3></p><p>Since I use a head-mounted mouse and need to minimize clicks, September provides multiple on-screen keyboard layouts—QWERTY, Circular, Emojis, and more. I can choose the layout that works best for my current input method, whether that's a mouse or an eye-tracking device.</p><p><h3>Accessible Design</h3></p><p>September is designed to work for people like me who have speech and motor difficulties. It works on any modern web browser across desktop and mobile devices, and it's compatible with various input methods. No complex setup or downloads are required—I can just open it in my browser and start communicating.]]></content:encoded>
            <category>september</category>
            <category>als</category>
        </item>
        <item>
            <title><![CDATA[Handling Errors in Go - Beyond the Basics]]></title>
            <link>https://raviatluri.in/articles/handling-go-errors</link>
            <guid isPermaLink="false">https://raviatluri.in/articles/handling-go-errors</guid>
            <pubDate>Tue, 27 May 2025 00:00:00 GMT</pubDate>
            <description><![CDATA[Go's error handling philosophy is simple, but some applications need more than just error strings. By combining error wrapping, custom error types, and structured error metadata, you can build maintainable, and observable Go applications.]]></description>
            <content:encoded><![CDATA[<p>If you've written Go, you know that error handling is a core part of the language. Go's approach is pretty simple: errors are just values, and you're expected to check and handle them explicitly.</p><p>The simplest way to create and use errors in Go is with <code>errors.New</code>:</p><p><pre><code>var ErrUserNotFound = errors.New("user not found")

func GetUser(id int) (User, error) {
    if id == 0 {
        return User{}, ErrUserNotFound
    }
    // ...
}
</code></pre></p><p>Checking for specific errors is straightforward with <code>errors.Is</code>:</p><p><pre><code>if errors.Is(err, ErrUserNotFound) {
    // handle user not found
}
</code></pre></p><p>This works well for most applications. But sometimes you'll want to add & propagate more context with errors, while still preserving the original error value.</p><p>Go 1.13 introduced error wrapping with <code>%w</code> in <code>fmt.Errorf</code>. This lets you add context while preserving the original error:</p><p><pre><code>func GetUser(id int) (User, error) {
    if id == 0 {
        return User{}, fmt.Errorf("user with id %d not found: %w", id, ErrUserNotFound)
    }
    // ...
}
</code></pre></p><p>You can still use <code>errors.Is</code> to check for the original error, but you lose access to the extra metadata (like the user ID) as the error propagates.</p><p>To carry metadata with your errors, you can define a custom error type:</p><p><pre><code>type UserNotFoundError struct {
    ID int
}

func (e *UserNotFoundError) Error() string {
    return fmt.Sprintf("user with id %d not found", e.ID)
}

func GetUser(id int) (User, error) {
    if id == 0 {
        return User{}, &UserNotFoundError{ID: id}
    }
    // ...
}
</code></pre></p><p>This lets you attach structured data to your errors, but it comes with a cost: you'll end up creating lots of custom error types, which can get verbose and hard to maintain—especially if you want to add structured logging, metrics, or handle errors in middleware and HTTP handlers.</p><p>What if you could attach arbitrary key-value pairs to errors, and access them anywhere the error is handled?</p><p><a href="https://github.com/gojekfarm/xtools"><code>xtools/errors</code></a> (<a href="https://pkg.go.dev/github.com/gojekfarm/xtools/errors">docs</a>) makes this possible by allowing you to attach arbitrary key-value pairs to errors:</p><p><pre><code>package errors_test

import (
	"fmt"

	"github.com/gojekfarm/xtools/errors"
)

func ExampleWrap() {
	// Create a generic error
	err := errors.New("record not found")

	// Wrap the error with key-value pairs
	wrapped := errors.Wrap(
		err,
		"table", "users",
		"id", "123",
	)

	// Add more tags as the error propagates
	wrapped = errors.Wrap(
		wrapped,
		"experiment_id", "456",
	)

	// errors.Is will check for not found error
	fmt.Println(errors.Is(wrapped, err))

	// Use errors.As to read attached tags.
	var errTags *errors.ErrorTags

	errors.As(wrapped, &errTags)

	// Use the tags to construct detailed error messages,
	// log additional context, or return structured errors.
	fmt.Println(errTags.All())
}
</code></pre></p><p>With this approach, you can:</p><p><ul><li>Attach context as the error propagates (e.g., table, user ID, experiment ID)</li>
<li>Check for specific error types with <code>errors.Is</code></li>
<li>Extract all attached metadata for logging, metrics, or constructing API responses</li>
</ul></p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Introducing xkafka - Kafka, but Simpler (for Go)]]></title>
            <link>https://raviatluri.in/articles/introducing-xkafka</link>
            <guid isPermaLink="false">https://raviatluri.in/articles/introducing-xkafka</guid>
            <pubDate>Wed, 07 May 2025 00:00:00 GMT</pubDate>
            <description><![CDATA[xkafka is a Go library that brings HTTP-like abstractions to Apache Kafka. It simplifies producing and consuming messages by using familiar concepts like handlers and middleware, reducing boilerplate and letting you focus on application logic.]]></description>
            <content:encoded><![CDATA[<p>I've spent a fair bit of time writing Kafka consumers and producers in Go. If you've used <a href="https://github.com/confluentinc/confluent-kafka-go">confluent-kafka-go</a>, you know the drill.</p><p>Your consumer probably looks something like this:</p><p><pre><code>consumer, err := kafka.NewConsumer(&kafka.ConfigMap{/*...*/})

err = consumer.SubscribeTopics([]string{/*...*/}, nil)

// some way to cancel and stop the consumer
run := true
for run {
    msg, err := consumer.ReadMessage(time.Second)
    if err != nil && !err.(kafka.Error).IsTimeout() {
        // handle error from consumer/broker
    }
    // process message
    // manually commit the offset, if needed
}
consumer.Close()
</code></pre></p><p>There's a lot that goes into the processing loop:</p><p><ul><li>read messages</li>
<li>handle Kafka and application errors</li>
<li>retry transient errors</li>
<li>metrics, logging, tracing, etc.</li>
<li>secondary dead letter queues</li>
<li>and, of course, wiring all this together</li></ul></p><p>A surprising amount of code isn't really about your application logic. If you're building something that consumes more than one kind of message, this quickly gets verbose. Most of the code is just scaffolding.</p><p>What if we could make using Kafka, in Go, feel more like writing a simple HTTP service?</p><p><h2>HTTP-like Kafka</h2></p><p><a href="https://pkg.go.dev/github.com/gojekfarm/xtools/xkafka"><strong>xkafka</strong></a> (<a href="https://github.com/gojekfarm/xtools/tree/main/xkafka">GitHub</a>) is a Go library that provides HTTP-like abstractions for Kafka. It tries to make working with Kafka feel a bit more like writing a simple HTTP service, with a lot less boilerplate and plumbing.</p><p>Here are the core abstractions:</p><p><ul><li><strong>Message</strong>: Like an HTTP request. It has the topic, partition, offset, key, value, headers, and so on. It also allows callbacks to track message processing.</li>
<li><strong>Handler</strong>: Like an HTTP handler. It's where your business logic lives.</li>
<li><strong>Middleware</strong>: Just like HTTP middleware, but for Kafka. You can add logging, metrics, retries, etc., without cluttering your core logic.</li></ul></p><p><h2>Publishing Messages</h2></p><p>First, let's get simple things out of the way. Here's what publishing a message looks like with xkafka:</p><p><pre><code>producer, err := xkafka.NewProducer(
    "producer-id",
    xkafka.Brokers{"localhost:9092"},
    xkafka.ConfigMap{
        "socket.keepalive.enable": true,
    },
)

producer.Use(/* add middlewares */)

msg := &xkafka.Message{
    Topic: "test",
    Key:   []byte("key"),
    Value: []byte("value"),
}
err = producer.Publish(ctx, msg)
</code></pre></p><p>That's it. You can also publish asynchronously if you want higher throughput or want to handle delivery events asynchronously:</p><p><pre><code>producer, err := xkafka.NewProducer(
    // ...
    // configure a callback to handle delivery events
    xkafka.DeliveryCallback(func(msg *xkafka.Message) {
        // ...
    }),
)

// ...create a message
// or, configure a callback on the message itself
msg.AddCallback(func(msg *xkafka.Message) {
    // ...
})

// start the producer. this will start a background goroutine
// that will handle message delivery events.
go producer.Run(ctx)

// publish a message. this will return immediately.
err = producer.AsyncPublish(ctx, msg)
</code></pre></p><p><h2>Consuming Messages</h2></p><p>Now let's talk about the other side of Kafka: consuming messages. In my experience, this is where most of the complexity (and headaches) with Kafka shows up. There are so many ways to configure and process messages in a consumer. The tradeoffs between throughput, durability, and delivery guarantees can get confusing and complicated.</p><p><strong>xkafka</strong> distills the most common patterns into a few simple abstractions and sensible defaults, while still giving you the flexibility to tune things for your needs.</p><p><pre><code>handler := xkafka.HandlerFunc(func(ctx context.Context, msg *xkafka.Message) error {
    // ...
    return nil
})

consumer, err := xkafka.NewConsumer(
    "consumer-id", // consumer group id
    handler,
    xkafka.Brokers{"localhost:9092"},
    xkafka.Topics{"test"},
    xkafka.ConfigMap{/*...*/},
)

consumer.Use(/* add middlewares */)

err = consumer.Run(ctx)
</code></pre></p><p><h3>Streaming vs. Batch</h3></p><p>There are two main ways to consume messages:</p><p><ul><li><strong>Streaming</strong> (with <code>xkafka.Consumer</code>): You process messages one at a time, as soon as they arrive. This is great for low-throughput systems, or when you want to keep memory usage low and have strong processing guarantees.</li>
<li><strong>Batch</strong> (with <code>xkafka.BatchConsumer</code>): You process messages in batches, either by size or by time window. This is useful for high-throughput systems, or when you want to buffer spikes and avoid hammering downstream systems or databases with every single message.</li></ul></p><p>Both approaches keep messages in order. With batches, you can control the size or frequency of those batches.</p><p><pre><code>consumer, err := xkafka.NewBatchConsumer(
    // ...
    xkafka.BatchSize(100), // batch size
    xkafka.BatchTimeout(15*time.Second), // time window
)
</code></pre></p><p><h3>Sequential or Async</h3></p><p>After reading a message or batch, <code>xkafka.Concurrency(N)</code> determines how messages or batches are processed:</p><p><ul><li><strong>Sequential</strong>: Default. One message or batch at a time. The next one isn't read until you're done with the current one.</li>
<li><strong>Asynchronous</strong>: N > 1. Multiple messages or batches are processed in parallel.</li></ul></p><p><h3>Offsets</h3></p><p>One thing that always tripped me up with Kafka consumers is <a href="https://github.com/confluentinc/librdkafka/wiki/Consumer-offset-management">offset management</a>. By default, Kafka moves the offset forward as soon as it delivers a message, not when you finish processing it. That means if your downstream is temporarily down, or your app crashes mid-processing, you might lose messages.</p><p>To solve this, I have seen developers add a separate database or queue to guarantee message processing. This adds another system to maintain and an additional point of failure. This is unnecessary.</p><p><strong>xkafka</strong> simply sets <code>enable.auto.offset.store=false</code> and only stores the offset after the handler finishes processing the message or batch. So if something goes wrong, you'll just re-process the last message, not lose it. For batches, it tracks the highest offset, per topic and partition, in the batch.</p><p>This means you don't need a separate database or queue just to keep track of what you've processed. Kafka handles it for you.</p><p><strong>Note:</strong> If you are tracking Kafka lag, remember that increasing lag is not a bad thing. Instead of optimizing for zero lag by offloading messages to another queue, you should focus on improving the throughput of your downstream systems.</p><p><h4>At-Most-Once Guarantee</h4></p><p>By default, <strong>xkafka</strong> relies on Kafka's <code>enable.auto.commit=true</code> and <code>auto.commit.interval.ms</code> to commit offsets periodically in the background.</p><p>By enabling <code>xkafka.ManualCommit(true)</code> in sequential mode, you can achieve at-most-once processing guarantees for each message or batch. 
<strong>xkafka</strong> ensures that the offset is committed before reading the next message.</p><p><h4>At-Least-Once Guarantee</h4></p><p>If you combine <code>xkafka.ManualCommit(true)</code> with <code>xkafka.Concurrency(N > 1)</code>, you can process messages or batches in parallel, while <strong>xkafka</strong> ensures offsets are committed synchronously in order. This way, you get at-least-once processing guarantees.</p><p><h2>Error Handling</h2></p><p>One of the tricky parts of Kafka is handling broker errors, application errors, transient errors, and retries. <strong>xkafka</strong> allows you to handle errors in a layered way:</p><p><h3>Handler Level</h3></p><p>The simplest way is to handle application errors in your handler implementation itself.</p><p><pre><code>handler := func(ctx context.Context, msg *xkafka.Message) error {
    err := processMessage(ctx, msg)
    if err != nil {
        // log and/or trigger alert
        // optionally, move message to a dead letter topic or queue
        msg.AckSkip()
        return nil
    }

    msg.AckSuccess()
    return nil
}
</code></pre></p><p><h3>Middleware Level</h3></p><p>Middleware is a great way to reuse application-specific error handling logic across handlers and consumers.</p><p><pre><code>handler := xkafka.HandlerFunc(func(ctx context.Context, msg *xkafka.Message) error {
    // ...
    if err != nil {
        // propagate error to middlewares
        msg.AckFail(err)
        return err
    }

    // ack the message
    msg.AckSuccess()
    return nil
})
</code></pre></p><p>You can use a combination of retry and custom error handling middlewares to implement different retry strategies.</p><p><pre><code>consumer.Use(
    RetryMiddleware(/* ... */),
    xkafka.MiddlewareFunc(func(next xkafka.Handler) xkafka.Handler {
        return xkafka.HandlerFunc(func(ctx context.Context, m *xkafka.Message) error {
            err := next.Handle(ctx, m)
            if errors.Is(err, app.SomeError) {
                // handle application error
            }

            // differentiate between transient, retryable errors
            // and permanent failures

            return err
        })
    }),
)
</code></pre></p><p><h3>Global Level</h3></p><p><code>xkafka.ErrorHandler</code> is a mandatory option when creating a producer or consumer. Kafka broker and library errors are only visible to the <code>xkafka.ErrorHandler</code>.</p><p><pre><code>consumer, err := xkafka.NewConsumer(
    // ...
    xkafka.ErrorHandler(func(err error) error {
        // returning a non-nil error will stop the consumer
        return err
    }),
)
</code></pre></p><p>This layered approach forces you to think about error boundaries and how you want to handle errors in your application.
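</p><p>As an illustration of the middleware layer, here is a self-contained sketch of a retry middleware. The <code>Handler</code> and <code>Middleware</code> shapes below are simplified stand-ins - xkafka's real handler takes a <code>context.Context</code> and a <code>*xkafka.Message</code>:</p><p><pre><code>package main

import (
	"errors"
	"fmt"
)

// Minimal stand-ins for the handler and middleware shapes
// (illustrative only, not the library's real types).
type Message struct{ Value string }

type Handler func(msg *Message) error

type Middleware func(next Handler) Handler

// Retry re-invokes the wrapped handler up to maxAttempts times
// before giving up and propagating the last error.
func Retry(maxAttempts int) Middleware {
	return func(next Handler) Handler {
		return func(msg *Message) error {
			var err error
			for i := 0; i < maxAttempts; i++ {
				if err = next(msg); err == nil {
					return nil
				}
			}
			return fmt.Errorf("giving up after %d attempts: %w", maxAttempts, err)
		}
	}
}

func main() {
	attempts := 0
	handler := func(msg *Message) error {
		attempts++
		if attempts < 3 {
			return errors.New("transient failure")
		}
		return nil
	}
	wrapped := Retry(5)(handler)
	fmt.Println(wrapped(&Message{Value: "hello"}) == nil, attempts) // true 3
}
</code></pre></p><p>A real middleware would also distinguish transient from permanent errors before retrying, as sketched in the snippets above.</p><p>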
</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Career Growth - Beyond IC vs Manager]]></title>
            <link>https://raviatluri.in/articles/career-growth</link>
            <guid isPermaLink="false">https://raviatluri.in/articles/career-growth</guid>
            <pubDate>Wed, 30 Apr 2025 00:00:00 GMT</pubDate>
            <description><![CDATA[Career growth in software engineering is often framed as a choice between the Individual Contributor (IC) and management tracks. But is it really that simple?]]></description>
<content:encoded><![CDATA[<p><a href="https://xkcd.com/2756/" target="_blank">
  <img src="https://imgs.xkcd.com/comics/qualifications.png" alt="Qualifications" />
</a></p><p>A colleague once asked me, <em>"What were some of the good mentoring guidelines shared with you in the past that helped you become a Sr PE?"</em></p><p>The traditional view presents career progression as a binary choice: you either stay on the Individual Contributor (IC) track or switch to the management track. But in my experience, this is an oversimplified model that doesn't help you learn and grow.</p><p>Let me illustrate this with an example:</p><p>Imagine you're a CXO launching a critical, game-changing initiative. You have three candidates to choose from:</p><p><ul><li>A software architect skilled at building systems and applications</li>
<li>A manager with a track record of delivering results and managing effective teams</li>
<li>Someone who regularly contributes to code and architecture while successfully managing complex projects</li></ul></p><p>Who would you pick to lead?</p><p><h2>Rethinking Career Growth</h2></p><p>While most companies define career ladders as mutually exclusive paths - either management or IC - I've developed a different mental model over my years of working across various teams and roles. This model views career growth as a combination of three dimensions:</p><p><ul><li>Spectrum of Skills</li>
<li>Problems to be Solved</li>
<li>Sphere of Influence</li></ul></p><p>True growth means expanding in all three dimensions, though not necessarily linearly or at the same time.</p><p><h2>Spectrum of Skills</h2></p><p>Your career typically begins as a software engineer, focusing on mastering programming languages, frameworks, and tools. You start by picking up tasks, writing code, and fixing bugs.</p><p>As you progress, you begin tackling architecture problems. While you're still coding, you're also writing architecture documents, designing systems, and evaluating trade-offs. This is when you start engaging in discussions, debates, and consensus-building. You begin collaborating with product teams, designers, and business stakeholders while building your reputation.</p><p>But here's the key: while technical skills dominate early career stages, you're simultaneously developing people skills and process understanding. The real growth comes from realizing that skills are skills - they're not confined to categorical boxes. Reaching senior levels isn't about becoming either a better coder or a better manager - it's about learning to learn whatever skills you need to get things done effectively.</p><p><h2>Problems to be Solved</h2></p><p>Early in your career, you work on assigned tasks - coding, reviewing, deploying, and marking them complete. As you advance, you handle more complex tasks and eventually lead initiatives and projects. However, the real growth begins when you start identifying problems yourself.</p><p>Problems exist everywhere - in codebases, architecture, product experience, and processes. The key is developing the ability to:</p><p><ol><li>Identify issues and patterns that others might miss</li>
<li>Build compelling narratives around why these problems matter</li>
<li>Research and develop effective solutions</li>
<li>Execute and iterate on those solutions</li></ol></p><p>This cycle of identifying problems and solving them becomes your engine for learning and career growth. The problems will change and evolve, but this fundamental approach remains constant.</p><p><h2>Sphere of Influence</h2></p><p>Drawing from Stephen Covey's concept of "circle of influence," think of your influence as a sphere that expands in multiple directions. This influence isn't tied to titles or positions - it's about creating visibility for your work and ideas.</p><p>Your narratives should always connect your work to its importance for team effectiveness, system architecture, product, user experience, and business outcomes. To be truly effective, your influence needs to extend in multiple directions - across technology, product, processes, and people. You should be able to communicate and collaborate effectively with peers, managers, and cross-functional teams.</p><p>A key part of career growth is learning how to scale your impact through others. There is only so much you can do on your own. As you progress to roles like Tech Lead, Principal Engineer, or Engineering Manager, your ability to mentor, unblock, and elevate others becomes essential. Helping your teammates succeed and grow helps you grow your sphere of influence.</p><p><h2>Easier Said Than Done</h2></p><p>On paper, this model might seem straightforward. But in practice, your time & energy are finite. You'll constantly juggle multiple projects, initiatives, and responsibilities. You can't simultaneously focus on learning new skills, solving complex problems, and building influence.</p><p>Think of this model as a compass rather than a turn-by-turn navigation guide. It's perfectly fine - and often necessary - to focus on one dimension at a time. 
The key is maintaining awareness of all three dimensions while making conscious choices about where to direct your energy.</p><p><h2>Roles & Titles</h2></p><p>Who doesn't like important-sounding titles?</p><p>While titles and roles are important markers of career progression, they shouldn't be your primary focus. They define accountability and scope of work, but they don't limit what you can learn or how you can grow.</p><p>For example, a tech lead is accountable for service uptime, but that doesn't prevent them from learning new technologies or architectural patterns. You can follow the traditional title-based career progression, or you can chart your own path by continuously learning, solving meaningful problems, and building your reputation.</p><p>Remember: Focus on expanding your skills, tackling increasingly complex problems, and growing your sphere of influence. The roles, titles, and compensation will naturally follow.]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Introducing XDB]]></title>
            <link>https://raviatluri.in/articles/introducing-xdb</link>
            <guid isPermaLink="false">https://raviatluri.in/articles/introducing-xdb</guid>
            <pubDate>Thu, 03 Apr 2025 00:00:00 GMT</pubDate>
            <description><![CDATA[XDB is a new kind of database library based on tuples. Rather than writing database specific schemas, queries, and migrations, XDB allows developers to model their domain once and use it with one or more databases.]]></description>
            <content:encoded><![CDATA[<p>XDB is a new kind of database library based on tuples.</p><p>Rather than writing database-specific schemas, queries, and migrations, XDB allows developers to model their domain once and use it with one or more databases.</p><p>XDB separates the application domain model from the underlying database(s) by using a simple yet powerful data model based on tuples. This lets developers focus on modeling, ingesting, and querying data—without worrying about the underlying database infrastructure or operations.</p><p><h2>Why?</h2></p><p>Not all databases are created equal. Most applications at scale use multiple types of databases:</p><p><ul><li>PostgreSQL/MySQL as the main database</li>
<li>Redis for caching</li>
<li>Elasticsearch for search</li>
<li>ClickHouse for analytics</li>
<li>Bigtable for versioning</li></ul></p><p>Each database solves a specific problem and comes with its own tradeoffs.</p><p>An application's domain model is often a combination of data that resides in different databases. Typically, each database has its own abstraction layer for migrations and queries. Sometimes, new microservices are spun up to manage specific use cases like search, analytics, or caching.</p><p>At the end of the day, developers must stitch together domain data from multiple databases or microservices to serve user-facing APIs. Looking at an end-to-end flow, there are several layers of data fetching, mutation, and transformation. Every time a feature adds new fields or relationships, the entire stack goes through churn.</p><p>XDB aims to separate the application domain model from database implementation and operations. What if, instead of maintaining multiple database-specific implementations, developers could model their domain once and seamlessly work with multiple databases?</p><p><h2>Inspiration</h2></p><p>XDB draws inspiration from two key concepts: <strong>Data Services</strong> and <strong>N-Quads</strong>.</p><p>Data services are intermediary services that sit between APIs and databases. They provide simple APIs for your domain data, while automating and abstracting away the underlying database management.</p><p><img src="https://raviatluri.in/images/introducing-xdb/data-services.png" alt="Data Services" /></p><p><a href="https://www.w3.org/TR/n-quads/">N-Quads</a> is a well-known format used to represent attributes and relationships in graphs.</p><p>Here's an example of the N-Quads format:</p><p><pre><code>&lt;Post:9bsv0s5ocl6002kdg0fg&gt; &lt;title&gt; "Hello World" .
&lt;Post:9bsv0s5ocl6002kdg0fg&gt; &lt;description&gt; "..." .
&lt;Post:9bsv0s5ocl6002kdg0fg&gt; &lt;author&gt; &lt;1&gt; .
&lt;Post:9bsv0s5ocl6002kdg0fg&gt; &lt;created_at&gt; "2025-04-01T00:00:00Z" .
&lt;Post:9bsv0s5ocl6002kdg0fg&gt; &lt;tags&gt; &lt;golang&gt; .
&lt;Post:9bsv0s5ocl6002kdg0fg&gt; &lt;tags&gt; &lt;xdb&gt; .
&lt;User:1&gt; &lt;follows&gt; &lt;User:2&gt; .
&lt;User:2&gt; &lt;likes&gt; &lt;Post:9bsv0s5ocl6002kdg0fg&gt; .
</code></pre></p><p>XDB was inspired by Dgraph's <a href="https://github.com/hypermodeinc/dgo/blob/8fd6df819e01c401e89f57601fba40e5631a27de/protos/api.proto#L68">Mutation API</a>, which uses N-Quads to insert or update data. What if this idea could be extended to build an abstraction usable with any database?</p><p><h2>Data Model</h2></p><p>XDB is built around a fundamental building block - the <strong>Tuple</strong>.</p><p><h3>Tuple</h3></p><p>A <strong>Tuple</strong> combines an id, attribute, value, and optional metadata.</p><p><img src="https://raviatluri.in/images/introducing-xdb/tuple.png" alt="Tuple" /></p><p>This simple yet powerful structure can represent any domain model and is easily mappable to various database formats.</p><p>Here's how to create a tuple:</p><p><pre><code>tuple := xdb.NewTuple("9bsv0s5ocl6002kdg0fg", "title", "Hello World")
</code></pre></p><p><h3>Edge</h3></p><p>An <strong>Edge</strong> is a special kind of tuple whose value is a reference. Edges are unidirectional and represent relationships between entities.</p><p><h3>Record</h3></p><p>A <strong>Record</strong> is a collection of tuples that share the same id. Records are similar to objects, structs, or rows in a database. Records represent entities in the domain model.</p><p>Here's how to create a record with tuples:</p><p><pre><code>record := xdb.NewRecord("Post", "9bsv0s5ocl6002kdg0fg").
	Set("title", "Hello World").
	Set("description", "...").
	Set("created_at", time.Now()).
	Set("author_id", "1").
	Set("tags", []string{"golang", "xdb"})
</code></pre></p><p><h2>Using XDB As A Library</h2></p><p>XDB can be used as a library, replacing the traditional repository/database layer in Go services.</p><p>Let's first define a simple domain model using standard Go structs:</p><p><pre><code>type Post struct {
	ID          string    `xdb:"id,primary_key"`
	Title       string    `xdb:"title"`
	Description string    `xdb:"description"`
	CreatedAt   time.Time `xdb:"created_at"`
	AuthorID    string    `xdb:"author_id"`
	Tags        []string  `xdb:"tags"`
}
</code></pre></p><p>Now, let's walk through creating, storing, and retrieving a post:</p><p><pre><code>// Create a new post
post := &Post{
	ID:          "9bsv0s5ocl6002kdg0fg",
	Title:       "Hello World",
	Description: "A sample post about XDB",
	CreatedAt:   time.Now(),
	AuthorID:    "1",
	Tags:        []string{"golang", "xdb"},
}

// Convert the struct to a record
record, err := xdbstruct.ToRecord(post)
if err != nil {
	log.Fatal(err)
}

// Create a new store using any of the driver implementations
store := xdbmemory.New()

// Store the record in the database
err = store.PutRecord(ctx, record)
if err != nil {
	log.Fatal(err)
}

// Retrieve the record from the database
record, err = store.GetRecord(ctx, record.Key())
if err != nil {
	log.Fatal(err)
}

// Convert the record back to a struct
var fetchedPost Post
err = xdbstruct.FromRecord(record, &fetchedPost)
if err != nil {
	log.Fatal(err)
}
</code></pre></p><p><h2>Routing Data</h2></p><p>The real power of XDB lies in its ability to "route" the same domain model to different databases. Let's explore how to create a "routing" layer that routes tuples, edges, and records between different databases:</p><p><pre><code>type RecordRouter struct {
	Primary   xdb.RecordWriter 	// e.g. PostgreSQL
	Cache     xdb.RecordWriter 	// e.g. Redis
	Indexer   xdb.RecordIndexer // e.g. Elasticsearch
}

func (r *RecordRouter) PutRecord(ctx context.Context, record *xdb.Record) error {
	// Save complete record to primary database as source of truth.
	if err := r.Primary.PutRecord(ctx, record); err != nil {
		return err
	}

	// Then update the cache.
	if err := r.Cache.PutRecord(ctx, record); err != nil {
		return err
	}

	// For search, only index relevant fields.
	indexRecord := record.Keep("title", "description", "author", "tags")

	return r.Indexer.IndexRecord(ctx, indexRecord)
}

func (r *RecordRouter) GetRecord(ctx context.Context, key *xdb.Key) (*xdb.Record, error) {
	// Get the record from cache.
	record, err := r.Cache.GetRecord(ctx, key)
	if err != nil {
		return nil, err
	}

	// If not found, get from primary database.
	if record == nil {
		record, err = r.Primary.GetRecord(ctx, key)
		if err != nil {
			return nil, err
		}

		// Update the cache.
		r.Cache.PutRecord(ctx, record)
	}

	return record, nil
}
</code></pre></p><p>This pattern allows you to distribute specific attributes of your domain model to the most appropriate databases. It also centralizes code & logic for retries, error handling, monitoring, etc.</p><p><h2>Building Blocks</h2></p><p>XDB APIs are designed to be simple, composable, and easy to use. Let's explore the key building blocks that make up the XDB ecosystem.</p><p><h3>Core Types</h3></p><p>The core types used to create tuples, edges, and records form the foundation of XDB's data model.</p><p><pre><code>userID := xdb.NewID("User", "123")
tuples := []*types.Tuple{
	types.NewTuple(userID, "name", "John Doe"),
	types.NewTuple(userID, "age", 25),
	types.NewTuple(userID, "email", "john.doe@example.com"),
}

record := types.NewRecord("Post", "123").
	Set("title", "Hello, World!").
	Set("content", "This is my first post").
	Set("created_at", time.Now()).
	Set("author_id", userID).
	Set("tags", []string{"xdb", "golang"})
</code></pre></p><p><h3>Encoding</h3></p><p>The encoding APIs provide consistent methods for converting between XDB's data types and various formats. Here's how to use different encoding options:</p><p><pre><code>import (
	"github.com/xdb-dev/xdb/encoding/xdbjson"
	"github.com/xdb-dev/xdb/encoding/xdbproto"
	"github.com/xdb-dev/xdb/encoding/xdbstruct"
)

var record *xdb.Record
var post Post
var pb proto.Message

// Convert struct to record
record, err = xdbstruct.ToRecord(post)
// Convert record to struct
err = xdbstruct.FromRecord(record, &post)

// Convert protobuf message to record
record, err = xdbproto.ToRecord(pb)
// Convert record to protobuf message
err = xdbproto.FromRecord(record, &pb)

var jsonBytes []byte

// Convert record to JSON
jsonBytes, err = xdbjson.FromRecord(record)
// Convert JSON to record
err = xdbjson.ToRecord(jsonBytes, &record)
</code></pre></p><p><h3>Drivers</h3></p><p>Drivers serve as the bridge between XDB's tuple-based model and specific database implementations. All drivers implement the basic <strong>Reader</strong> and <strong>Writer</strong> capabilities:</p><p><pre><code>type RecordReader interface {
	GetRecords(ctx context.Context, keys []*xdb.Key) ([]*xdb.Record, []*xdb.Key, error)
}

type RecordWriter interface {
	PutRecords(ctx context.Context, records []*xdb.Record) error
	DeleteRecords(ctx context.Context, keys []*xdb.Key) error
}
</code></pre></p><p>Advanced capabilities, like full-text search, aggregation, iteration, etc., are implemented by specific drivers based on their database features:</p><p><pre><code>type RecordIndexer interface {
	IndexRecords(ctx context.Context, records []*xdb.Record) error
}

type RecordSearcher interface {
	SearchRecords(ctx context.Context, query *xdb.Query) ([]*xdb.Record, error)
}

type TupleIterator interface {
	IterateTuples(ctx context.Context, fn func(tuple *xdb.Tuple) error, opts ...xdb.IteratorOption) error
}

type EdgeIterator interface {
	IterateEdges(ctx context.Context, fn func(edge *xdb.Edge) error, opts ...xdb.IteratorOption) error
}
</code></pre></p><p><h3>Stores</h3></p><p>Stores provide higher-level APIs that combine multiple drivers to support common use-cases. Here's an example of a cached store implementation:</p><p><pre><code>type RecordStore interface {
	xdb.RecordReader
	xdb.RecordWriter
}

type CachedRecordStore struct {
	Primary   RecordStore
	Cache     RecordStore
}

func (s *CachedRecordStore) GetRecords(ctx context.Context, keys []*xdb.Key) ([]*xdb.Record, []*xdb.Key, error) {
	// ...
}

func (s *CachedRecordStore) PutRecords(ctx context.Context, records []*xdb.Record) error {
	// ...
}

func (s *CachedRecordStore) DeleteRecords(ctx context.Context, keys []*xdb.Key) error {
	// ...
}
</code></pre></p><p>Store implementations also satisfy the capability interfaces they implement. This allows you to use a store as a driver or to layer & compose stores & drivers for more complex use-cases.
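</p><p>As a sketch of this kind of layering, here is a self-contained read-through cache composed from two in-memory stores. The types below are simplified stand-ins for illustration - they are not XDB's actual interfaces:</p><p><pre><code>package main

import "fmt"

// Simplified stand-ins for XDB's key/record types and the
// store capability (illustrative, not the library's API).
type Key string
type Record map[string]any

type RecordStore interface {
	GetRecord(key Key) (Record, bool)
	PutRecord(key Key, rec Record)
}

// MemStore is a trivial in-memory store.
type MemStore map[Key]Record

func (s MemStore) GetRecord(key Key) (Record, bool) { r, ok := s[key]; return r, ok }
func (s MemStore) PutRecord(key Key, rec Record)    { s[key] = rec }

// CachedStore layers a cache over a primary store, mirroring the
// read-through pattern described above.
type CachedStore struct {
	Primary, Cache RecordStore
}

func (c CachedStore) GetRecord(key Key) (Record, bool) {
	if rec, ok := c.Cache.GetRecord(key); ok {
		return rec, true
	}
	rec, ok := c.Primary.GetRecord(key)
	if ok {
		c.Cache.PutRecord(key, rec) // warm the cache for next time
	}
	return rec, ok
}

func (c CachedStore) PutRecord(key Key, rec Record) {
	c.Primary.PutRecord(key, rec)
	c.Cache.PutRecord(key, rec)
}

func main() {
	store := CachedStore{Primary: MemStore{}, Cache: MemStore{}}
	store.Primary.PutRecord("Post:1", Record{"title": "Hello World"})
	rec, _ := store.GetRecord("Post:1") // cache miss, falls back to primary
	fmt.Println(rec["title"])           // Hello World
}
</code></pre></p><p>Because <code>CachedStore</code> itself satisfies the store interface, it can be layered under yet another store, which is the composability the library is aiming for.</p><p>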
</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Frontmatter in Obsidian]]></title>
            <link>https://raviatluri.in/articles/frontmatter-obsidian</link>
            <guid isPermaLink="false">https://raviatluri.in/articles/frontmatter-obsidian</guid>
            <pubDate>Mon, 31 Mar 2025 00:00:00 GMT</pubDate>
            <description><![CDATA[A guide to using YAML frontmatter in Obsidian]]></description>
            <content:encoded><![CDATA[<p>After using Obsidian for a few months, I accidentally discovered that it supports frontmatter.</p><p>Typing <code>---</code> in the first line of a file creates a "Properties" block. The full documentation is at <a href="https://help.obsidian.md/properties">https://help.obsidian.md/properties</a>.</p>]]></content:encoded>
            <category>obsidian</category>
            <category>markdown</category>
        </item>
        <item>
            <title><![CDATA[Using mockery with go generate]]></title>
            <link>https://raviatluri.in/articles/using-mockery-go-generate</link>
            <guid isPermaLink="false">https://raviatluri.in/articles/using-mockery-go-generate</guid>
            <pubDate>Wed, 09 Aug 2023 00:00:00 GMT</pubDate>
            <description><![CDATA[Example of using mockery with go generate]]></description>
            <content:encoded><![CDATA[<p>This is a simple example of how to use mockery with go generate. It is my preferred way of generating mocks, because the mock configuration is co-located with the interface definition.</p><p>1. Install mockery</p><p><pre><code>go install github.com/vektra/mockery/v2@v2.23.4 # use latest version
</code></pre></p><p>2. Add a go:generate comment to your interface</p><p><pre><code>//go:generate mockery --name=MyInterface --output=mocks --outpkg=mocks --case=underscore
type MyInterface interface {
    DoSomething()
}
</code></pre></p><p>3. Run go generate</p><p><pre><code>go generate ./...
</code></pre></p><p>Mockery - <a href="https://vektra.github.io/mockery/latest/">https://vektra.github.io/mockery/latest/</a>
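</p><p>Conceptually, what the generated mock gives you is an implementation of the interface that records interactions. Here is a hand-written sketch of that idea - the real generated code is based on testify/mock and far more capable:</p><p><pre><code>package main

import "fmt"

type MyInterface interface {
	DoSomething()
}

// MockMyInterface is a hand-written stand-in for a generated mock:
// it satisfies the interface and records how it was called.
type MockMyInterface struct {
	Calls int
}

func (m *MockMyInterface) DoSomething() { m.Calls++ }

// useIt is a stand-in for code under test that depends on MyInterface.
func useIt(dep MyInterface) { dep.DoSomething() }

func main() {
	mock := &MockMyInterface{}
	useIt(mock)
	fmt.Println(mock.Calls) // 1
}
</code></pre></p><p>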
</p>]]></content:encoded>
            <category>golang</category>
        </item>
        <item>
            <title><![CDATA[Using xrun]]></title>
            <link>https://raviatluri.in/articles/using-xrun</link>
            <guid isPermaLink="false">https://raviatluri.in/articles/using-xrun</guid>
            <pubDate>Mon, 03 Jul 2023 00:00:00 GMT</pubDate>
            <description><![CDATA[Example of using xrun to manage multiple components in a Go service]]></description>
            <content:encoded><![CDATA[<p>Example of using <a href="https://github.com/gojekfarm/xrun">xrun</a> to manage multiple components in a Go service.</p><p>Components can implement the <code>xrun.Component</code> interface or can be wrapped with <code>xrun.ComponentFunc</code> to be used with <code>xrun</code>.</p><p><pre><code>// kafka consumer
consumer := newKafkaConsumer()

// gRPC server
server := newGRPCServer()

// metrics server
metrics := newMetricsServer()

err := xrun.All(
    xrun.NoTimeout,
    consumer,
    server,
    metrics,
)
</code></pre></p><p>Blog: <a href="https://ajatprabha.in/2023/05/24/intro-xrun-package-managing-component-lifecycle-go">https://ajatprabha.in/2023/05/24/intro-xrun-package-managing-component-lifecycle-go</a>
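</p><p>The component pattern itself is tiny. Here is a self-contained sketch of the idea - <code>Component</code>, <code>ComponentFunc</code>, and <code>runAll</code> below are simplified stand-ins, not xrun's real API:</p><p><pre><code>package main

import (
	"context"
	"fmt"
	"sync"
)

// Component mirrors the core idea: anything that can run until its
// context is cancelled and then shut down cleanly.
type Component interface {
	Run(ctx context.Context) error
}

// ComponentFunc adapts a plain function into a Component.
type ComponentFunc func(ctx context.Context) error

func (f ComponentFunc) Run(ctx context.Context) error { return f(ctx) }

// runAll starts every component concurrently and waits for all of
// them to exit, returning the first error encountered.
func runAll(ctx context.Context, components ...Component) error {
	var wg sync.WaitGroup
	errs := make(chan error, len(components))
	for _, c := range components {
		wg.Add(1)
		go func(c Component) {
			defer wg.Done()
			errs <- c.Run(ctx)
		}(c)
	}
	wg.Wait()
	close(errs)
	for err := range errs {
		if err != nil {
			return err
		}
	}
	return nil
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	cancel() // cancel immediately so the example terminates

	server := ComponentFunc(func(ctx context.Context) error {
		<-ctx.Done() // block until shutdown is requested
		return nil
	})

	fmt.Println(runAll(ctx, server) == nil) // true
}
</code></pre></p><p>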
</p>]]></content:encoded>
            <category>golang</category>
        </item>
        <item>
            <title><![CDATA[My macOS Accessibility Setup]]></title>
            <link>https://raviatluri.in/articles/my-mac-accessibility-setup</link>
            <guid isPermaLink="false">https://raviatluri.in/articles/my-mac-accessibility-setup</guid>
            <pubDate>Fri, 02 Jun 2023 00:00:00 GMT</pubDate>
            <description><![CDATA[Accessibility setup and tools for everyday use on macOS/OSX]]></description>
            <content:encoded><![CDATA[<p>I was <a href="/articles/the-als-story">diagnosed with ALS in 2019</a>. Over time, I have gradually lost the ability to type. Initially, I relied on a combination of dictation, one-handed typing, and a mouse. However, as my speech became more slurred and my left hand weakened, I transitioned to using an on-screen keyboard, mouse, and one-handed typing.</p><p>Nowadays, I solely rely on an on-screen keyboard and mouse, and I have been using this setup for over a year. Surprisingly, I have been able to continue coding, sending emails, and writing articles with this setup, enabling me to maintain most of my previous activities despite my condition.</p><p>This is my work coding calendar in September 2022:</p><p><img src="https://raviatluri.in/images/my-mac-accessibility-setup/contrib-calendar.png" alt="Contribution Calendar, September 2022" /></p><p>And the same calendar for May 2023:</p><p><img src="https://raviatluri.in/images/my-mac-accessibility-setup/contrib-calendar-may-2023.png" alt="Contribution Calendar, May 2023" /></p><p>I am sharing my accessibility setup in the hope that it can assist others in similar situations.</p><p><h2>Keyboard</h2></p><p><img src="https://raviatluri.in/images/my-mac-accessibility-setup/macos-accessibility-keyboard.png" alt="macOS Accessibility Keyboard" /></p><p>The on-screen keyboard I currently use is a custom layout created using the macOS Accessibility Keyboard. 
I began with the default ANSI keyboard and gradually added shortcuts, passwords, emails, phone numbers, and other frequently used items.</p><p>The on-screen keyboard is available in Settings > Accessibility > Keyboard > Accessibility Keyboard.</p><p><img src="https://raviatluri.in/images/my-mac-accessibility-setup/osx-settings.png" alt="macOS Accessibility Keyboard Settings" /></p><p>The keyboard can be customized using the Panel Editor.</p><p><img src="https://raviatluri.in/images/my-mac-accessibility-setup/panel-editor.png" alt="macOS Accessibility Keyboard Panel Editor" /></p><p>You can find more information about the <a href="https://support.apple.com/en-in/guide/mac-help/mchlc74c1c9f/mac">macOS Accessibility Keyboard</a> on Apple's website.</p><p><h2>Typing</h2></p><p>For typing, I rely on the native typing suggestions provided by macOS. Although the typing suggestions are mostly limited to a few sentences in most apps, they offer greater functionality in the Notes app.</p><p>When using Chrome, I heavily rely on <a href="https://compose.ai/">Compose AI</a> for composing emails and using Slack.</p><p>In VSCode, I utilize <a href="https://copilot.github.com/">GitHub Copilot</a> for coding and writing articles.</p><p>More recently, I have started using ChatGPT for writing. I begin by outlining the article's key points and main highlights, and then I use ChatGPT to generate the content. Finally, I review and edit the generated content to refine the tone or add additional details.</p><p><h2>Next Steps</h2></p><p>I plan to try <a href="https://glassouse.com/">Glassouse</a>, a head mouse, as my ability to use a traditional mouse will eventually be affected.</p><p>I recently discovered <a href="https://karabiner-elements.pqrs.org/">Karabiner-Elements</a>, a powerful keyboard customizer. I found the following articles particularly useful:</p><p><ul><li><a href="https://medium.com/@nikitavoloboev/karabiner-god-mode-7407a5ddc8f6">Karabiner God Mode</a></li>
<li><a href="https://wiki.nikiv.dev/macOS/apps/karabiner/">Karabiner Wiki</a></li></ul></p><p><h2>Similar Resources</h2></p><p>Josh Comeau's article on <a href="https://www.joshwcomeau.com/blog/hands-free-coding/">Hands-free Coding</a> introduced me to the idea that I can continue coding, writing, and creating even with my limitations.</p><p>For comprehensive information on accessibility features across all Apple products, I highly recommend exploring <a href="https://www.apple.com/accessibility/">Apple Accessibility</a>.
</p>]]></content:encoded>
            <category>accessibility</category>
            <enclosure url="https://raviatluri.in/images/my-mac-accessibility-setup/macos-accessibility-keyboard.png" length="0" type="image/png"/>
        </item>
        <item>
            <title><![CDATA[The ALS Story]]></title>
            <link>https://raviatluri.in/articles/the-als-story</link>
            <guid isPermaLink="false">https://raviatluri.in/articles/the-als-story</guid>
            <pubDate>Thu, 23 Dec 2021 00:00:00 GMT</pubDate>
            <description><![CDATA[Story of my ALS diagnosis]]></description>
            <content:encoded><![CDATA[<p><em>This is a ChatGPT-generated summary of my Twitter thread. The original thread can be found here: <a href="https://twitter.com/sonnes/status/1474042833535262725">https://twitter.com/sonnes/status/1474042833535262725</a></em></p><p>Two years ago, my life took a dramatic turn. After seven successful years at PaGaLGuY, I made a bold decision to join GojekTech, drawn by the promise of Southeast Asia's hypergrowth phase. With my wife and one-year-old son, we packed our bags and moved to Jakarta, ready to embrace the new opportunities that awaited us.</p><p>Settling into a new city wasn't without its challenges. My wife's first mission was to find familiar grocery items, and our trips to Singapore became a ritual of stocking up on essentials from Mustafa. Amidst the settling-in process, our son began swimming lessons, my wife took up tennis, and I even achieved my PADI dive certification. However, my attempts at learning tennis were far from successful, and I blamed my clumsiness on overdoing deadlifts and snatches.</p><p>Little did I know that the following months would bring unexpected health concerns. It started with some discomfort, leading me to consult doctors, orthopedics, and neurologists. I underwent a series of tests, from ENMG to cervical MRI scans, in search of a diagnosis. The initial suspicions pointed toward spondylosis or even a motor neuron disease, most likely ALS. The spectrum of possibilities was overwhelming, and I couldn't help but hope it was just a pinched nerve.</p><p>The uncertainty lingered as we waited for conclusive results. An MRI scan left the radiologist puzzled, unable to provide a definitive diagnosis. We found ourselves back at square one, contemplating our next move. 
We decided to fly back to Hyderabad, seeking further medical evaluations and putting an end to the relentless search for answers.</p><p>November 2019 was marked by a week filled with more tests, including MRIs, PET-CT scans, and countless blood vials. Each moment was filled with apprehension and a sense of urgency. Yet, despite the numerous evaluations, there was still an air of uncertainty surrounding the diagnosis. The neurologist cautiously suggested that ALS was the most likely culprit. The state-of-the-art diagnostic test? Waiting. Waiting to see how the condition progresses. The doctor prepared us for the worst, explaining that I may have only a few years left, at best, and perhaps a little more if luck was on my side.</p><p>With these grim prospects, questions began to flood my mind. How much time do I have left? Are there any treatments available? Thoughts ranged from contemplating my legacy to planning the optimal investment strategy to put my young son through college. Amidst this turmoil, I found it difficult to shed tears, but everyone around me seemed to do so.</p><p>At some point, I reached a breaking point and said, "FUCK IT!" I made a conscious decision to move forward and live one month at a time, cherishing every precious moment that life had to offer.</p><p>The year 2020 brought its own set of challenges. In addition to the ALS scare, I received a diagnosis of Hirayama, which brought its own worries. The world was hit by the COVID-19 pandemic, imposing lockdowns and restrictions. Amidst the chaos, I underwent cervical fusion surgery, hoping to alleviate some of the discomfort. Fortunately, the diagnosis of Hirayama provided a glimmer of hope, as it was not a fatal condition.</p><p>But as 2021 rolled in, my speech deteriorated further. Simple tasks became increasingly difficult to perform. Halfway through the year, the neurologist confirmed what I had feared all along: ALS. 
The ground crumbled beneath me once again, but this time, I refused to let it consume me.</p><p>In 2019, I couldn't fathom making it to 35. And yet, here I am, still alive, still mobile, but facing the same diagnosis once more. Despite the challenges, I continue to live one month at a time, cherishing every precious moment.
</p>]]></content:encoded>
            <category>als</category>
        </item>
    </channel>
</rss>