<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[The Bug Shots]]></title><description><![CDATA[TheBugShots is home to posts on interesting programming topics &amp; software development techniques. If you're looking to level up your coding, be sure to check out the latest articles on TheBugShots :)]]></description><link>https://thebugshots.dev</link><image><url>https://cdn.hashnode.com/res/hashnode/image/upload/v1692995152303/uaJX5FRfx.png</url><title>The Bug Shots</title><link>https://thebugshots.dev</link></image><generator>RSS for Node</generator><lastBuildDate>Wed, 15 Apr 2026 04:41:44 GMT</lastBuildDate><atom:link href="https://thebugshots.dev/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Falling in Love with RSS Feeds Again!]]></title><description><![CDATA[Have you ever been so engrossed in your phone that when you finally look up, you realize an hour has vanished? It's like waking from a digital trance, and for me, it was a regular occurrence. This sudden awareness prompted a digital revolution in my ...]]></description><link>https://thebugshots.dev/falling-in-love-with-rss-feeds-again</link><guid isPermaLink="true">https://thebugshots.dev/falling-in-love-with-rss-feeds-again</guid><category><![CDATA[rssfeed]]></category><category><![CDATA[rss]]></category><category><![CDATA[reeder]]></category><dc:creator><![CDATA[Sidharthan Chandrasekaran Kamaraj]]></dc:creator><pubDate>Wed, 14 May 2025 13:43:04 GMT</pubDate><content:encoded><![CDATA[<p>Have you ever been so engrossed in your phone that when you finally look up, you realize an hour has vanished? It's like waking from a digital trance, and for me, it was a regular occurrence. 
This sudden awareness prompted a digital revolution in my life to reclaim those lost hours, and I'm excited to share this journey with you today.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1747230122069/d2aef699-70c7-4d02-bb52-22a7e563f3b4.jpeg" alt class="image--center mx-auto" /></p>
<h2 id="heading-the-social-media-treadmill"><strong>The Social Media Treadmill</strong></h2>
<p>Let's rewind about a decade. Back then, I was an avid user of RSS feeds, happily curating my own content from my choice of sources. It was a serene, distraction-free experience. But as social media platforms like Facebook and Instagram emerged, they gradually eclipsed my RSS habits. The instant validation from updates, likes, and shares was irresistible, and before long, I was deeply entrenched in the social media ecosystem.</p>
<p>Fast forward a few years, and I began to feel the weight of digital fatigue. I slowly started peeling away these layers by deactivating Facebook and Instagram, eventually paring down to only Reddit and YouTube.</p>
<p>I thought this minimalist approach was balanced. But, over the past month, it became clear these platforms still consumed more time than I'd like. Endless comment sections and algorithm-driven rabbit holes had me clicking on content I didn't actively choose.</p>
<p>This was the wake-up call I needed: I wasn't as in control of my content consumption as I had thought.</p>
<h2 id="heading-seeking-digital-autonomy"><strong>Seeking Digital Autonomy</strong></h2>
<p>I longed for a solution that would deliver precisely what I wanted from the sources I consciously chose—and nothing more. Suddenly, the light bulb went on: RSS feeds! It was like rediscovering a long-lost friend. I hadn't used them in years, preferring the convenience of social media, only to realize how much I missed the self-curated experience.</p>
<p>Could this tried-and-true technology be the answer to my modern woes?</p>
<h2 id="heading-rediscovering-rss"><strong>Rediscovering RSS</strong></h2>
<p>The RSS landscape has matured since my last venture, with dedicated platforms catering to a variety of preferences. I found myself navigating through options such as NetNewsWire, Inoreader, Reeder Classic, and the new Reeder. I know there are many more options, but I’m a sucker for good UI/UX. With a focus on simplicity and elegance, and given that my devices all reside in one happy ecosystem, my choice was clear.</p>
<h2 id="heading-finding-my-digital-home"><strong>Finding My Digital Home</strong></h2>
<p>After sampling the options, the new <a target="_blank" href="https://reederapp.com/">Reeder</a> caught my heart. It has become a vibrant digital oasis where I gather:</p>
<ul>
<li><p>My favorite Reddit communities (displaying only the posts without pulling me into distracting comments)</p>
</li>
<li><p>YouTube channels I want to subscribe to</p>
</li>
<li><p>Blogs I'm interested in</p>
</li>
<li><p>Podcasts that accompany me through commutes</p>
</li>
</ul>
<p>The downside is that I won’t get suggestions for content similar to my tastes, but that isn’t enough to tempt me back into the social media apps.</p>
<p>The magic lies in seeing all my chosen content in one place, curated by the only algorithm I trust—myself. It's incredibly satisfying to open an app where everything feels purposeful.</p>
<p>What sealed the deal was noticing how Reeder buffers me from the constant deluge of notifications. Gone are the dopamine triggers of likes and alerts, leaving room for focused peace of mind. The sleek, minimalistic design of Reeder contrasts starkly with the visual chaos found on social media.</p>
<p>With Reeder, it's like stepping into a serene library from a bustling, dimly lit party—calm and rewarding.</p>
<h2 id="heading-beyond-the-technology"><strong>Beyond the Technology</strong></h2>
<p>This journey isn't merely an app switch—it's a transformation in how I engage with online content. I've shifted from passive consumption to proactive curation. From being the product to seizing control of my online narrative.</p>
<p>Gone are the days of hollow browsing. Now, I savor content intentionally, my digital consumption evolving from incessant scrolling to meaningful interaction.</p>
<p>When friends ask if I feel I'm missing out by stepping back from mainstream social media, my answer is simple: I've gained a sense of what I was missing—space to think, reflect, and absorb content on my terms.</p>
<p>If digital overwhelm resonates with you or you crave control over your online experience, consider revisiting RSS feeds. This "vintage" tool could be the modern solution you've been searching for.</p>
]]></content:encoded></item><item><title><![CDATA[My Journey into embracing Neovim as my PDE]]></title><description><![CDATA[The First Encounter
It all started in 2016 in my first company. I wanted to edit a shell script in a Virtual Machine where my application was deployed. The manual instructed me to log into the terminal via PuTTY, run vi <script_name>.sh, and then add...]]></description><link>https://thebugshots.dev/my-journey-into-embracing-neovim-as-my-pde</link><guid isPermaLink="true">https://thebugshots.dev/my-journey-into-embracing-neovim-as-my-pde</guid><category><![CDATA[neovim]]></category><category><![CDATA[vim]]></category><category><![CDATA[Linux]]></category><category><![CDATA[vim linux]]></category><category><![CDATA[Programming Blogs]]></category><dc:creator><![CDATA[Sidharthan Chandrasekaran Kamaraj]]></dc:creator><pubDate>Fri, 10 May 2024 21:25:59 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1715376094175/0a9991fa-9068-44ce-918d-efeaad1bcb2b.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-the-first-encounter">The First Encounter</h2>
<p>It all started in 2016 in my first company. I wanted to edit a shell script in a Virtual Machine where my application was deployed. The manual instructed me to log into the terminal via PuTTY, run <code>vi &lt;script_name&gt;.sh</code>, and then add a line to it. I followed the same, only to discover that I couldn't do <code>Ctrl+C</code>, <code>Ctrl+V</code>, or <code>Ctrl+S</code>, nor was I able to quit. Frustrated, I quit PuTTY and logged in again. I was like, "Yuck, what the hell is that editor?" and used Nano to edit the file. I thought, "Why don't they have Nano as the default editor in Linux?"</p>
<p><img src="https://i.redd.it/7wb1y164aj711.jpg" alt="When you try to exit VIM... : r/ProgrammerHumor" /></p>
<h2 id="heading-the-intriguing-encounter">The Intriguing Encounter</h2>
<p>In 2020, when I started working on NGINX, I was paired with a Senior Engineer. During my onboarding (via Zoom call - during COVID times), he was explaining the codebase to me. To my astonishment, instead of opening an IDE like a JetBrains product or VSCode, he opened his iTerm2, navigated to the project folder, and opened Vim. I was expecting a movement in the mouse pointer, but man, he was flying through the codebase like I've never seen before. At that moment, I thought, "I want that."</p>
<p><img src="https://miro.medium.com/v2/resize:fit:640/1*qM-Y5x3pcnzdl0R4uDPKNA.jpeg" alt="Why every software engineer should use vim | Level Up Coding" class="image--center mx-auto" /></p>
<h2 id="heading-the-initial-struggles">The Initial Struggles</h2>
<p>So, I started looking at tutorial videos and documentation, but it was really hard for me. I tried to hang on, but over the week, I lost interest (it was hard to begin as a total noob). My workflow continued to be GoLand or VSCode over the year. I gave it another try in 2021 (went down the YouTube rabbit hole one night and ended up in a Vim tutorial). I configured the editor as per the tutorial (using vim-plug) and found it comfortable for editing files. I spent the next couple of weeks forcing myself to use it as my primary editor. At the end of the sprint, I saw that I hadn't finished my tasks. It dawned on me that maybe I'm not that kind of guy, and that it took a total nerd to use Vim as fluently as the IDEs I was used to. So, I stopped using it.</p>
<h2 id="heading-the-renewed-attempts">The Renewed Attempts</h2>
<p>In 2022, I moved to a different company, where another Senior Engineer on my team was using Neovim, and I thought I would try it one more time. Let's see; maybe I'm up for it now. The same pattern emerged: I was dead slow. I thought that was it; I wasn't going to try again, and I stopped using it once and for all (except when doing interactive rebases in Git).</p>
<p><img src="https://i.redd.it/6bmxucg6c8181.jpg" alt="Vscode &amp; Vim : r/ProgrammerHumor" /></p>
<h2 id="heading-the-pivotal-moment">The Pivotal Moment</h2>
<p>Everything was going well until I joined a new company again (yes, hectic times; I moved to 3 different companies in 3 years - moved countries and other things out of my hands). But anyway, back to the subject. I wanted to give it one more try because of a guy called <a target="_blank" href="https://www.youtube.com/c/theprimeagen">Primeagen</a>. YouTube suggested his video for some reason, the video was cool, and there it was waiting for me. He was using Neovim, and man, I got to see the editor again - the one I thought I wouldn't touch or go near. He was basically flying at lightning speed, and it was fun watching him navigate through the codebase. This time, I wanted to use it badly (I wanted to have fun while coding). I was too impatient to configure my own settings, so I installed <a target="_blank" href="https://nvchad.com/">NvChad</a>. It looked so pretty, and I was learning the cheatsheet and was like, "Wow, this is interesting." I was learning all the keycombos and Vim motions; it was fun, and I was happy. But still, I was very slow and used GoLand and VSCode for like 80% of my dev work. I was getting used to the hjkl navigation and other keystrokes, but then I slowly moved back to using IDEs and forgot about it.</p>
<h2 id="heading-the-perseverance">The Perseverance</h2>
<p>I continuously watched <a target="_blank" href="https://www.youtube.com/c/theprimeagen">ThePrimeagen</a> and <a target="_blank" href="https://www.youtube.com/@teej_dv">TJDeVries</a> and loved all their uploads. I was using Neovim here and there, but not fully as my primary editor. I tried <a target="_blank" href="https://www.lunarvim.org/">LunarVim</a>, <a target="_blank" href="https://www.lazyvim.org/">LazyVim</a>, and <a target="_blank" href="https://astronvim.com/">AstroNvim</a>; all had different keycombos and project structures. I was pretty confused and thought there was no way I would be able to create my own config, and even if I did, it would be shitty compared to what others were using and to the other Nvim distros. I tried <a target="_blank" href="https://github.com/nvim-lua/kickstart.nvim">kickstart.nvim</a>, which TJ DeVries suggested, but for some reason, it didn't stick with me.</p>
<h2 id="heading-the-eureka-moment">The Eureka Moment</h2>
<p>Finally, I stumbled upon a video from <a target="_blank" href="https://www.youtube.com/@joseanmartinez">Josean Martinez</a>. I watched the video entirely, and man, at the end of the video, I got the hang of it. It was my eureka moment where I was like, "Shit, I've been dumb all along." I mean, other videos I watched over the year were good too, but this one hit my brain like nothing else. I was like, "Yeah, let's roll, baby." And there started my Neovim journey.</p>
<p>It's been two weeks since I watched that video, and I joined the crowd, I think, tweaking my Nvim config, adding plugins, removing them, and doing this never-ending tinkering process. Programming is fun again, and I started using Neovim as my primary editor. As <a target="_blank" href="https://www.youtube.com/@teej_dv">TJDeVries</a> says, I finally found my PDE (Personal Development Environment). I now get the hang of it and can't think of not using Vim motions in any editor (PS: I installed the <a target="_blank" href="https://chromewebstore.google.com/detail/vimium/dbepggeogbaibhgnhhndojpepiihcmeb?hl=en">Vimium</a> extension in my browser and love it too).</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://youtu.be/6pAG3BHurdM?si=rjDlKCdt9bpLdkH_">https://youtu.be/6pAG3BHurdM?si=rjDlKCdt9bpLdkH_</a></div>
<p> </p>
<h2 id="heading-the-takeaway">The Takeaway</h2>
<p>Anyway, what I want to say to all fellow people trying to use Neovim, or trying to make programming fun again, is: just hang in there; there is light at the end of the tunnel. I know I've barely scratched the surface and am still a beginner in Neovim, but let's see what the future holds.</p>
<p>Some YouTube channels that I follow for Neovim content:</p>
<ul>
<li><p><a target="_blank" href="https://www.youtube.com/c/theprimeagen">ThePrimeagen</a></p>
</li>
<li><p><a target="_blank" href="https://www.youtube.com/@teej_dv">TJDeVries</a></p>
</li>
<li><p><a target="_blank" href="https://www.youtube.com/@joseanmartinez">Josean Martinez</a></p>
</li>
<li><p><a target="_blank" href="https://www.youtube.com/@typecraft_dev">typecraft</a></p>
</li>
<li><p><a target="_blank" href="https://www.youtube.com/@dreamsofcode">Dreams of Code</a></p>
</li>
</ul>
<p>And finally, thanks to my Senior Engineers, <a target="_blank" href="https://www.youtube.com/c/theprimeagen">ThePrimeagen</a>, <a target="_blank" href="https://www.youtube.com/@teej_dv">TJDeVries</a>, and the whole Neovim and Vim community for everything. I guess it's time for me to contribute to the ecosystem. I'll share my Neovim configuration and the plugins I use in future blogs. 😁</p>
<p>My Nvim Config - <a target="_blank" href="https://github.com/cksidharthan/nvim">https://github.com/cksidharthan/nvim</a></p>
<p><img src="https://programmerhumor.io/wp-content/uploads/2023/08/programmerhumor-io-linux-memes-programming-memes-3566d5d70c7972d.jpg" alt="What is this Gooey you speak of? | vim-memes | ProgrammerHumor.io" class="image--center mx-auto" /></p>
]]></content:encoded></item><item><title><![CDATA[Visualizing Your Git Repository History with Gource]]></title><description><![CDATA[Git is a powerful version control system that allows you to track changes to your code over time. While git log gives you a textual history of your commits, it can be hard to get a big-picture view of what's happening in your repository. That's where...]]></description><link>https://thebugshots.dev/visualizing-your-git-repository-history-with-gource</link><guid isPermaLink="true">https://thebugshots.dev/visualizing-your-git-repository-history-with-gource</guid><category><![CDATA[Developer Tools]]></category><category><![CDATA[Git]]></category><category><![CDATA[visualization]]></category><category><![CDATA[presentations]]></category><category><![CDATA[gource]]></category><dc:creator><![CDATA[Sidharthan Chandrasekaran Kamaraj]]></dc:creator><pubDate>Thu, 28 Sep 2023 16:31:22 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1695916537066/56c9cb49-de3f-4d2c-9cea-ddbd1ed84f9d.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Git is a powerful version control system that allows you to track changes to your code over time. While <code>git log</code> gives you a textual history of your commits, it can be hard to get a big-picture view of what's happening in your repository. That's where Gource comes in.</p>
<p>Gource is an open-source command-line tool that visualizes activity in your Git repository over time through a mesmerizing animated graphical representation. It's like a real-time interactive movie that reveals the evolution of your project.</p>
<p>In this post, we'll cover what Gource is, why it's useful for visualizing your Git history, and how to use it with your repositories. It's especially great as a cool party trick when showing off your project's development history in presentations!</p>
<h2 id="heading-what-is-gource"><strong>What is Gource?</strong></h2>
<p><a target="_blank" href="https://github.com/acaudwell/Gource">Gource</a> is a captivating and interactive visualization tool designed to breathe life into your Git repositories. Created by <a target="_blank" href="https://github.com/acaudwell">Andrew Caudwell</a>, this open-source software offers a unique perspective on your project's history by translating code commits into a mesmerizing animated tree. It provides an intuitive visualization of your repo's development lifecycle over time. It's available for Windows, macOS, and Linux, making it accessible to a wide range of developers and teams.</p>
<h2 id="heading-why-use-gource"><strong>Why Use Gource?</strong></h2>
<p>Here are some of the key benefits of using Gource:</p>
<ul>
<li><p>See an overview of commit frequency and patterns over time</p>
</li>
<li><p>Identify major events like new branches, merges, deletions, etc.</p>
</li>
<li><p>Get a sense of the pace of development and who is contributing</p>
</li>
<li><p>Easily spot organizational changes like renamed files or directories</p>
</li>
<li><p>Enjoy a cool visualization of your project's history that works great in presentations!</p>
</li>
</ul>
<p>Gource visualizations are often eye-catching and mesmerizing, revealing insights through dynamic graphical representations. It's a fantastic party trick to quickly convey your project's growth and evolution when presenting to audiences.</p>
<h2 id="heading-using-gource"><strong>Using Gource</strong></h2>
<p>Gource is available for Linux, macOS, and Windows. To start using it, you simply need to install it and point it to a Git repository directory.</p>
<h3 id="heading-installation"><strong>Installation</strong></h3>
<p>On Linux/macOS, Gource is usually available through your package manager, e.g.:</p>
<pre><code class="lang-bash">sudo apt install gource <span class="hljs-comment"># on Ubuntu/Debian </span>
brew install gource <span class="hljs-comment"># on Mac with Homebrew</span>
</code></pre>
<p>For Windows, you can download and run a <a target="_blank" href="https://github.com/acaudwell/Gource/releases"><strong>pre-compiled executable</strong></a>.</p>
<h3 id="heading-basic-usage"><strong>Basic Usage</strong></h3>
<p>Navigate to your Git repository directory on the command line. Then run:</p>
<pre><code class="lang-bash">gource --auto-skip-seconds 0.1 -s 0.1
</code></pre>
<p>This will generate a real-time animation visualizing the entire commit history of the repository from inception to now.</p>
<p><strong>To create a video file of the Gource visualization, use the command below (NOTE: this requires the <code>ffmpeg</code> tool to be installed on your machine):</strong></p>
<pre><code class="lang-bash">gource --auto-skip-seconds 0.1 -s 0.1 --output-ppm-stream - | ffmpeg -y -r 30 -f image2pipe -vcodec ppm -i - -b 65536K movie.mp4
</code></pre>
<p>This pipes the animation into ffmpeg to encode an MP4 video. Adjust frame rate and encoding as needed.</p>
<h3 id="heading-customizing-the-visualization"><strong>Customizing the Visualization</strong></h3>
<p>Gource provides many command line options to customize the visualization. I have listed some of them below.</p>
<ul>
<li><p>Set start/end dates with <code>--start-date</code> and <code>--stop-date</code></p>
</li>
<li><p>Highlight a specific user with <code>--highlight-user</code>, or all users with <code>--highlight-users</code></p>
</li>
<li><p>Limit the number of files shown with <code>--max-files</code> and adjust the simulation speed with <code>--time-scale</code></p>
</li>
<li><p>Change render size with <code>--viewport</code></p>
</li>
</ul>
<p>See the <a target="_blank" href="https://github.com/acaudwell/Gource"><strong>docs</strong></a> for all available options.</p>
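<p>As an illustrative sketch, several of these options can be combined in a single invocation (the dates, username, and resolution below are placeholders, not recommendations):</p>
<pre><code class="lang-bash"># Animate a six-month window, highlight one contributor, render at 720p
gource --start-date "2023-01-01" --stop-date "2023-06-30" \
  --highlight-user alice \
  --viewport 1280x720 \
  -s 0.1
</code></pre>
<p>The same flags can be prepended to the ffmpeg pipeline shown earlier to render a video of just that slice of history.</p>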
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://youtu.be/UZ2iAGL_LI8">https://youtu.be/UZ2iAGL_LI8</a></div>
<p> </p>
<h2 id="heading-summary"><strong>Summary</strong></h2>
<p>Gource provides a mesmerizing way to visualize activity in your Git repository over time. It's easy to install and use - just point it at a repo directory to generate an animation. The graphical representation quickly conveys insights into the development process and makes for an impressive interlude in presentations.</p>
<p>With some customization, you can highlight important events and contributors in the visualization. Give Gource a try if you want a graphical overview of your project's Git history! Your audiences will love the eye-catching animated visualization.</p>
<hr />
<h2 id="heading-references">References</h2>
<p><a target="_blank" href="https://github.com/acaudwell/Gource">https://github.com/acaudwell/Gource</a></p>
]]></content:encoded></item><item><title><![CDATA[Git Submodules: A Game-Changer for Streamlining Microservices Development]]></title><description><![CDATA[In the world of microservices development, maintaining consistency across different repositories can be a real challenge. It's not uncommon to find duplicated code, configuration files, and development environments across various microservices. Howev...]]></description><link>https://thebugshots.dev/git-submodules-a-game-changer-for-streamlining-microservices-development</link><guid isPermaLink="true">https://thebugshots.dev/git-submodules-a-game-changer-for-streamlining-microservices-development</guid><category><![CDATA[Git]]></category><category><![CDATA[git-submodule]]></category><category><![CDATA[Microservices]]></category><category><![CDATA[developer experience]]></category><dc:creator><![CDATA[Sidharthan Chandrasekaran Kamaraj]]></dc:creator><pubDate>Sun, 24 Sep 2023 19:45:16 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1695584642805/cef36ccb-8670-4dfd-a4a6-130fb960fd2e.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In the world of microservices development, maintaining consistency across different repositories can be a real challenge. It's not uncommon to find duplicated code, configuration files, and development environments across various microservices. However, there is a solution that can help alleviate these pains - <strong>Git submodules</strong>. In this blog post, we'll explore how Git submodules can be a game-changer for managing shared code, tasks, and development environments across multiple microservice repositories.</p>
<h2 id="heading-understanding-git-submodules"><strong>Understanding Git Submodules</strong></h2>
<p>Before diving into our journey of using Git submodules to streamline microservices development, let's briefly understand what Git submodules are.</p>
<p><strong>Git submodules</strong> are Git repositories nested within another Git repository. They allow you to include one Git repository (the submodule) inside another (the main repository) as a subdirectory. This means you can keep a reference to an external repository within your own repository, effectively importing its contents.</p>
<p>Now, let's embark on our journey and understand why and how we used Git submodules.</p>
<h2 id="heading-the-challenge-duplicated-code-and-configuration"><strong>The Challenge: Duplicated Code and Configuration</strong></h2>
<p>In our development landscape, we had multiple microservices, each residing in its own Git repository. While these microservices had distinct functionalities, they all adhered to the same project structure and shared common tasks defined in a <code>Taskfile.yml</code>. Additionally, we used a common <code>Vagrantfile</code> to create standardized development environments across different operating systems.</p>
<p>However, maintaining this consistency became increasingly cumbersome. We found ourselves duplicating tasks and the <code>Vagrantfile</code> across multiple repositories, leading to a maintenance nightmare. Any changes or improvements in these shared components required manual updates in every microservice repository. It was time-consuming and prone to errors.</p>
<h2 id="heading-the-solution-git-submodules"><strong>The Solution: Git Submodules</strong></h2>
<p>As we searched for a solution to streamline our microservices development, we stumbled upon Git submodules. They seemed like the perfect fit for our use case.</p>
<h3 id="heading-step-1-creating-a-common-repository"><strong>Step 1: Creating a Common Repository</strong></h3>
<p>Our first step was to create a dedicated Git repository called <code>dev-templates</code>. This repository would house the shared taskfiles each separated into different files like <code>docker.yml</code>, <code>go.yml</code>, <code>helm.yml</code>, <code>vagrant.yml</code> etc., with common development tasks and standardized <code>Vagrantfile</code> for creating development environments.</p>
<h3 id="heading-step-2-adding-dev-templates-as-a-submodule"><strong>Step 2: Adding</strong> <code>dev-templates</code> as a Submodule</h3>
<p>With <code>dev-templates</code> ready, we added it as a submodule to all our microservice repositories. This was achieved with a simple Git command:</p>
<pre><code class="lang-bash">git submodule add https://github.com/yourusername/dev-templates.git &lt;folder-name&gt;
</code></pre>
<p>Replace <code>folder-name</code> with the directory where you want the submodule to be cloned inside the project.</p>
<p>By doing this, we imported the entire <code>dev-templates</code> repository into each microservice repository as a subdirectory. This effectively eliminated the need to duplicate code and configuration files.</p>
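<p>Under the hood, the <code>git submodule add</code> command records the mapping in a <code>.gitmodules</code> file at the root of the repository; with the placeholder values from the command above, it would look roughly like this:</p>
<pre><code class="lang-ini">[submodule "folder-name"]
	path = folder-name
	url = https://github.com/yourusername/dev-templates.git
</code></pre>
<p>This file is committed alongside the submodule directory, so every clone of the repository gets the same reference. Note that fresh clones need <code>git clone --recurse-submodules</code> (or a <code>git submodule update --init</code> afterwards) to actually populate the folder.</p>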
<h3 id="heading-step-3-simplifying-maintenance"><strong>Step 3: Simplifying Maintenance</strong></h3>
<p>Now, when we needed to update common tasks or the <code>Vagrantfile</code>, we made the changes in the <code>dev-templates</code> repository. Whenever we wanted to pull those changes into our microservice repositories, we executed the following command:</p>
<pre><code class="lang-bash">git submodule update --recursive --remote
</code></pre>
<p>We added this as a task in our <code>Taskfile.yml</code> in the individual microservices repo. And just like that, all our microservices were up to date with the latest changes from <code>dev-templates</code>.</p>
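<p>For illustration, such a task might look something like this in a microservice's <code>Taskfile.yml</code> (the task name here is made up, not necessarily what we used):</p>
<pre><code class="lang-yaml">version: "3"

tasks:
  templates:update:
    desc: Pull the latest shared templates from dev-templates
    cmds:
      - git submodule update --recursive --remote
</code></pre>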
<h2 id="heading-the-benefits-of-using-git-submodules"><strong>The Benefits of Using Git Submodules</strong></h2>
<p>Our journey to incorporating Git submodules into our microservices development process brought several key benefits:</p>
<h3 id="heading-1-code-reusability"><strong>1. Code Reusability</strong></h3>
<p>By centralizing common code and configurations in a dedicated repository, we significantly reduced code duplication across microservices. This not only saved us time but also made our codebase more maintainable.</p>
<h3 id="heading-2-streamlined-maintenance"><strong>2. Streamlined Maintenance</strong></h3>
<p>Updating shared components became a breeze. Instead of manually propagating changes to every repository, we only had to update <code>dev-templates</code>. The submodule update command took care of the rest.</p>
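<p>One nuance worth knowing: a submodule is pinned to a specific commit, so after pulling new changes the parent repository sees an updated submodule pointer that must itself be committed. A minimal sketch, assuming the submodule lives in a <code>dev-templates</code> folder:</p>
<pre><code class="lang-bash">git submodule update --recursive --remote   # fetch the latest dev-templates commit
git add dev-templates                       # stage the updated submodule pointer
git commit -m "Bump dev-templates"          # pin the new version in this repo
</code></pre>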
<h3 id="heading-3-consistency"><strong>3. Consistency</strong></h3>
<p>All microservices now adhered to the same project structure and development environment, ensuring consistency and reducing the chances of issues arising from differences in configurations.</p>
<h2 id="heading-conclusion"><strong>Conclusion</strong></h2>
<p>Our journey of using Git submodules to manage shared code, tasks, and development environments across multiple microservice repositories has proven to be a game-changer. It simplified our development process, reduced maintenance overhead, and enhanced the overall consistency of our projects.</p>
<p>If you're facing similar challenges in your microservices development, consider giving Git submodules a try. They might be the solution you've been searching for to streamline your development workflow.</p>
<p>By implementing Git submodules, we were able to streamline our microservices development workflow, reduce code duplication, and simplify maintenance. If you have any questions or would like more detailed examples of how to work with Git submodules, feel free to reach out. :)</p>
]]></content:encoded></item><item><title><![CDATA[Sherlock: The Internet Sleuth for User Profiles]]></title><description><![CDATA[Introduction
In today's digital age, online presence is a fundamental aspect of our lives. From social media platforms to discussion forums, we leave traces of our identity across the internet. But have you ever wondered how to track down someone's o...]]></description><link>https://thebugshots.dev/sherlock-the-internet-sleuth-for-user-profiles</link><guid isPermaLink="true">https://thebugshots.dev/sherlock-the-internet-sleuth-for-user-profiles</guid><category><![CDATA[sherlock]]></category><category><![CDATA[webscraping ]]></category><category><![CDATA[OSINT]]></category><category><![CDATA[Python]]></category><category><![CDATA[Investigation]]></category><dc:creator><![CDATA[Sidharthan Chandrasekaran Kamaraj]]></dc:creator><pubDate>Fri, 22 Sep 2023 07:07:47 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1695366232727/3d23c4dd-1816-4ecf-bede-23233300fe50.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction</h2>
<p>In today's digital age, online presence is a fundamental aspect of our lives. From social media platforms to discussion forums, we leave traces of our identity across the internet. But have you ever wondered how to track down someone's online profiles using just their username? Enter Sherlock, a powerful tool designed to scrape the internet and unearth user profiles associated with a given username. In this blog post, we'll delve into the world of Sherlock, exploring its features, use cases and ethical considerations. Plus, we'll guide you through how to use Sherlock responsibly.</p>
<h2 id="heading-what-is-sherlock">What is Sherlock?</h2>
<p>Sherlock is an open-source tool developed by <a target="_blank" href="https://github.com/sdushantha">Siddharth Dushantha</a>. It's designed to help users discover online profiles connected to a specific username. Whether you're an investigator, a security professional, or simply curious, Sherlock can assist in finding information about individuals on the web.</p>
<h2 id="heading-key-features-of-sherlock">Key Features of Sherlock</h2>
<ol>
<li><p><strong><em>User-Friendly Command Line Interface (CLI):</em></strong> Sherlock boasts a straightforward CLI, making it accessible to users with varying levels of technical expertise. This ease of use sets it apart from more complex web scraping tools.</p>
</li>
<li><p><strong><em>Extensive Platform Support:</em></strong> Sherlock supports a wide range of social media platforms, including Twitter, Instagram, LinkedIn, and many others. This versatility ensures that you can search for profiles across popular networks.</p>
</li>
<li><p><strong><em>Information Aggregation:</em></strong> Once Sherlock has gathered information on profiles linked to a username, it neatly presents the results, making it easy to analyze and cross-reference data.</p>
</li>
<li><p><strong><em>Active Development:</em></strong> Sherlock is an open-source project, meaning it's actively maintained and improved by a community of contributors. This ensures its reliability and adaptability over time.</p>
</li>
</ol>
<h2 id="heading-how-does-sherlock-work">How Does Sherlock Work?</h2>
<p>Sherlock employs web scraping techniques to search for profiles associated with a given username. Here's a simplified overview of its operation:</p>
<ol>
<li><p><strong><em>Installation:</em></strong> Before using Sherlock, ensure you have Python 3 installed on your system. Then, clone the Sherlock repository and navigate to its directory in the terminal.</p>
<pre><code class="lang-bash"> git <span class="hljs-built_in">clone</span> https://github.com/sherlock-project/sherlock.git 
 <span class="hljs-built_in">cd</span> sherlock
 python3 -m pip install -r requirements.txt
</code></pre>
</li>
<li><p><strong><em>Running Sherlock</em></strong>: Once installed, run Sherlock with the username you want to search for as a command-line argument (here, <code>john_doe</code>):</p>
<pre><code class="lang-bash"> python3 sherlock john_doe
</code></pre>
<pre><code class="lang-bash"> Output:

 ❯ python3 sherlock john_doe
 [*] Checking username john_doe on:

 [+] 8tracks: https://8tracks.com/john_doe
 [+] 9GAG: https://www.9gag.com/u/john_doe
 [+] About.me: https://about.me/john_doe
 [+] Amino: https://aminoapps.com/u/john_doe
 [+] Apple Developer: https://developer.apple.com/forums/profile/john_doe
 [+] Apple Discussions: https://discussions.apple.com/profile/john_doe
 [+] Archive of Our Own: https://archiveofourown.org/users/john_doe
 [+] Archive.org: https://archive.org/details/@john_doe
 [+] AskFM: https://ask.fm/john_doe
 [+] Audiojungle: https://audiojungle.net/user/john_doe
 [+] BLIP.fm: https://blip.fm/john_doe
 [+] Bandcamp: https://www.bandcamp.com/john_doe
 [+] Behance: https://www.behance.net/john_doe
 [+] Bikemap: https://www.bikemap.net/en/u/john_doe/routes/created/
 [+] Bitwarden Forum: https://community.bitwarden.com/u/john_doe/summary
 [+] BodyBuilding: https://bodyspace.bodybuilding.com/john_doe
 [+] Bookcrossing: https://www.bookcrossing.com/mybookshelf/john_doe/
 [+] BuzzFeed: https://buzzfeed.com/john_doe
 ...
 ...
 ... cutting short the results to keep the blog small :)

 [*] Search completed with 97 results
</code></pre>
</li>
<li><p><strong><em>Analyzing the Results</em></strong>: Sherlock will query various social media platforms and display the results in your terminal. Positive matches are displayed in green, while negative matches are in red.</p>
</li>
<li><p><strong><em>Advanced Usage</em></strong>: Sherlock offers various options and flags to enhance your searches. You can exclude specific websites from the search, save results to a text file, and more. Refer to the documentation for advanced usage.</p>
</li>
</ol>
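<p>Conceptually, Sherlock's core loop is small: fill a per-site URL template with the username, then probe the resulting profile URL. The sketch below illustrates just the URL-building half of that idea in Go; the site list and patterns are illustrative examples, not Sherlock's actual data file, and the real tool also handles per-site error pages and rate limits.</p>

```go
package main

import "fmt"

// sites maps a site name to its profile URL pattern. These two entries are
// illustrative; Sherlock ships a much larger JSON data file.
var sites = map[string]string{
	"GitHub": "https://github.com/%s",
	"9GAG":   "https://www.9gag.com/u/%s",
}

// candidateURLs builds the profile URL to probe for each supported site.
func candidateURLs(username string) map[string]string {
	urls := make(map[string]string, len(sites))
	for site, pattern := range sites {
		urls[site] = fmt.Sprintf(pattern, username)
	}
	return urls
}

func main() {
	// Sherlock would request each URL and report the ones that exist.
	for site, url := range candidateURLs("john_doe") {
		fmt.Printf("[?] %s: %s\n", site, url)
	}
}
```

The real tool then issues an HTTP request per URL and classifies the response (status code or error text) to decide whether the profile exists.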
<h2 id="heading-ethical-considerations">Ethical Considerations</h2>
<p>While Sherlock is a powerful tool, it's essential to use it responsibly and ethically:</p>
<ol>
<li><p><strong><em>Privacy and Consent</em></strong>: Always respect privacy and obtain consent when conducting searches on individuals. Using Sherlock for malicious purposes or stalking is unethical and potentially illegal.</p>
</li>
<li><p><strong><em>Verification</em>:</strong> Information gathered through Sherlock should be verified independently before taking any action or drawing conclusions.</p>
</li>
<li><p><strong><em>Legality</em>:</strong> Be aware of the legal implications of using web scraping tools like Sherlock in your jurisdiction. Laws regarding data privacy and web scraping vary from place to place.</p>
</li>
</ol>
<h2 id="heading-conclusion">Conclusion</h2>
<p>Sherlock is a valuable tool that can help users find online profiles associated with a username efficiently. Whether you're conducting investigations, managing your digital footprint, or reconnecting with lost connections, Sherlock can be a useful asset. However, it's crucial to approach its use with ethics and legality in mind, respecting privacy and obtaining consent whenever necessary. As with any tool, responsible and considerate usage should always be the guiding principle.</p>
<h2 id="heading-references">References</h2>
<p>Sherlock Github Repo - <a target="_blank" href="https://github.com/sherlock-project/sherlock">https://github.com/sherlock-project/sherlock</a></p>
]]></content:encoded></item><item><title><![CDATA[E2E / Integration Testing in Golang with ory/dockertest]]></title><description><![CDATA[While unit and component tests are crucial for verifying individual parts of an application work as expected, e2e/integration tests play an equally important role in validating how those components function together. They test the entire system from ...]]></description><link>https://thebugshots.dev/e2e-integration-testing-in-golang-with-orydockertest</link><guid isPermaLink="true">https://thebugshots.dev/e2e-integration-testing-in-golang-with-orydockertest</guid><category><![CDATA[Go Language]]></category><category><![CDATA[E2ETesting]]></category><category><![CDATA[integration test]]></category><category><![CDATA[dockertest]]></category><category><![CDATA[Automated Testing]]></category><dc:creator><![CDATA[Sidharthan Chandrasekaran Kamaraj]]></dc:creator><pubDate>Wed, 20 Sep 2023 16:26:33 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1695227020983/d6c67981-f0ba-416b-b7d4-c34b5c3d3548.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>While <code>unit</code> and <code>component</code> tests are crucial for verifying individual parts of an application work as expected, <code>e2e/integration</code> tests play an equally important role in validating how those components function together. They test the entire system from end to end, spanning multiple components or services. This catches issues that occur between integrated components that unit tests would not find.</p>
<p>The <a target="_blank" href="https://github.com/ory/dockertest"><strong>ory/dockertest</strong></a> package is a useful tool for testing Go applications that interact with databases and other services running in Docker containers. In this post, we'll look at how to use dockertest to write automated tests for a Go application.</p>
<h2 id="heading-overview"><strong>Overview</strong></h2>
<p>When testing an application that depends on external services like databases, queues, etc., we often run into the problem of setting up and managing these dependencies during tests. The dockertest package makes this easier by spinning up Docker containers for the dependencies as needed, then cleaning them up after the tests finish.</p>
<p>Some key features of dockertest:</p>
<ul>
<li><p>Automatically pulls Docker images if needed</p>
</li>
<li><p>Launches containers in the background</p>
</li>
<li><p>Provides helpers to wait for containers to become ready before running tests</p>
</li>
<li><p>Removes containers after tests complete</p>
</li>
</ul>
<p>This means we can focus on writing the actual test logic rather than all the setup/teardown boilerplate.</p>
<h2 id="heading-basic-example">Basic Example</h2>
<p>Let's look at a simple example. Say we have a Go application that needs to connect to a PostgreSQL database. Here is how we could use dockertest to spin up a temporary PostgreSQL container for our tests:</p>
<pre><code class="lang-go">package main

import (
  "database/sql"
  "fmt"
  "log"
  "os"
  "testing"

  _ "github.com/lib/pq" // PostgreSQL driver
  "github.com/ory/dockertest/v3"
)

var db *sql.DB

func TestMain(m *testing.M) {
  pool, err := dockertest.NewPool("")
  if err != nil {
    log.Fatalf("Could not connect to docker: %s", err)
  }

  // pulls an image, creates a container based on it and runs it
  resource, err := pool.Run("postgres", "latest", []string{"POSTGRES_PASSWORD=secret", "POSTGRES_DB=myappdb"})
  if err != nil {
    log.Fatalf("Could not start resource: %s", err)
  }

  // exponential backoff-retry, because the application in the container might not be ready to accept connections yet
  if err := pool.Retry(func() error {
    var err error
    db, err = sql.Open("postgres", fmt.Sprintf("postgres://postgres:secret@localhost:%s/myappdb?sslmode=disable", resource.GetPort("5432/tcp")))
    if err != nil {
      return err
    }
    return db.Ping()
  }); err != nil {
    log.Fatalf("Could not connect to docker: %s", err)
  }

  code := m.Run()

  // You can't defer this because os.Exit doesn't care for defer
  if err := pool.Purge(resource); err != nil {
    log.Fatalf("Could not purge resource: %s", err)
  }

  os.Exit(code)
}

func TestSomething(t *testing.T) {
  // do something with db
}
</code></pre>
<p>The key steps are:</p>
<ol>
<li><p>Create a <code>dockertest.Pool</code> to manage Docker containers.</p>
</li>
<li><p>Use <code>pool.Run</code> to start a PostgreSQL container.</p>
</li>
<li><p>Wait for the container to start using <code>pool.Retry</code> - the application may take some time to launch and accept connections.</p>
</li>
<li><p>Run tests as usual, connecting to the PostgreSQL container.</p>
</li>
<li><p>Cleanup the container after tests finish with <code>pool.Purge</code>.</p>
</li>
</ol>
<p>This allows running the tests without having to manually install a PostgreSQL server on the test environment. The dockertest package handles all the container management under the hood.</p>
<h2 id="heading-implementation-in-a-go-rest-api-project">Implementation in a Go REST API Project</h2>
<p>Let's implement this in a real Golang REST API project. The API exposes an endpoint called <code>/healthz</code> that checks the connection to a Redis database and returns a 200 if the database is healthy, or a 503 Service Unavailable if it is not.</p>
<h3 id="heading-project-structure">Project Structure</h3>
<p>I've added the project with all the files and logic in this <a target="_blank" href="https://github.com/cksidharthan/go-e2e-test-blog">repository</a>. The project structure is as follows:</p>
<pre><code class="lang-plaintext">├── Dockerfile         -- Dockerfile for the API
├── README.md   
├── main.go            -- Main file for the API containing the routes 
├── go.mod   
├── go.sum
└── e2e                -- Folder containing the e2e tests
    ├── init_test.go   -- File containing the TestMain function
    └── main_test.go   -- File containing the e2e tests
</code></pre>
<h3 id="heading-dockerfile">Dockerfile</h3>
<p>The Dockerfile defines how to build the image for the API. It starts from the <code>golang</code> base image, copies the source code into the container, builds the Go binary, and defines the startup command.</p>
<pre><code class="lang-dockerfile"><span class="hljs-keyword">FROM</span> golang:<span class="hljs-number">1.21</span>.<span class="hljs-number">1</span> as build

<span class="hljs-keyword">ENV</span> GOPATH=/go

<span class="hljs-keyword">WORKDIR</span><span class="bash"> /app</span>

<span class="hljs-keyword">COPY</span><span class="bash"> . .</span>

<span class="hljs-keyword">RUN</span><span class="bash"> GO111MODULE=on CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o /app/myapi main.go</span>

<span class="hljs-keyword">FROM</span> alpine:<span class="hljs-number">3.18</span>.<span class="hljs-number">3</span>

<span class="hljs-keyword">WORKDIR</span><span class="bash"> /app</span>

<span class="hljs-keyword">COPY</span><span class="bash"> --from=build /app/myapi /app/myapi</span>

<span class="hljs-keyword">RUN</span><span class="bash"> addgroup -S appgroup &amp;&amp; adduser -S appuser -G appgroup</span>
<span class="hljs-keyword">RUN</span><span class="bash"> chmod +x /app/myapi</span>
<span class="hljs-keyword">USER</span> appuser

<span class="hljs-keyword">EXPOSE</span> <span class="hljs-number">8080</span>

<span class="hljs-keyword">HEALTHCHECK</span><span class="bash"> --interval=60s --timeout=3s --start-period=5s --retries=3 CMD [ <span class="hljs-string">"wget"</span>, <span class="hljs-string">"-q"</span>, <span class="hljs-string">"http://localhost:8080/healthz"</span>, <span class="hljs-string">"-O"</span>, <span class="hljs-string">"-"</span> ]</span>

<span class="hljs-keyword">ENTRYPOINT</span><span class="bash"> [<span class="hljs-string">"/app/myapi"</span>]</span>
</code></pre>
<h3 id="heading-rest-api-file-maingo">REST API file (main.go)</h3>
<p>We'll create a simple API that connects to a Redis database and exposes a health endpoint. The API will be built using the <strong>gin-gonic/gin</strong> package.</p>
<p>It exposes the <code>/healthz</code> endpoint that returns a 200 response if the database is healthy, or a 503 response if the database is not healthy.</p>
<pre><code class="lang-go"><span class="hljs-keyword">package</span> main

<span class="hljs-keyword">import</span> (
    <span class="hljs-string">"context"</span>
    <span class="hljs-string">"net/http"</span>
    <span class="hljs-string">"os"</span>

    <span class="hljs-string">"github.com/gin-gonic/gin"</span>
    <span class="hljs-string">"github.com/redis/go-redis/v9"</span>
)

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">main</span><span class="hljs-params">()</span></span> {
    <span class="hljs-comment">// Set up Gin router</span>
    router := gin.Default()

    <span class="hljs-comment">// Define a route for the "/healthz" endpoint</span>
    router.GET(<span class="hljs-string">"/healthz"</span>, <span class="hljs-function"><span class="hljs-keyword">func</span><span class="hljs-params">(c *gin.Context)</span></span> {
        <span class="hljs-comment">// Check the database connection health</span>
        <span class="hljs-keyword">if</span> err := checkDatabaseHealth(); err != <span class="hljs-literal">nil</span> {
            c.JSON(http.StatusServiceUnavailable, gin.H{<span class="hljs-string">"status"</span>: <span class="hljs-string">"Database is not healthy"</span>})
            <span class="hljs-keyword">return</span>
        }

        c.JSON(http.StatusOK, gin.H{<span class="hljs-string">"status"</span>: <span class="hljs-string">"Database is healthy"</span>})
    })

    <span class="hljs-comment">// Start the Gin server</span>
    port := os.Getenv(<span class="hljs-string">"PORT"</span>)
    <span class="hljs-keyword">if</span> port == <span class="hljs-string">""</span> {
        port = <span class="hljs-string">"8080"</span>
    }
    router.Run(<span class="hljs-string">":"</span> + port)
}

<span class="hljs-comment">// checks the connection to redis database</span>
<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">checkDatabaseHealth</span><span class="hljs-params">()</span> <span class="hljs-title">error</span></span> {
    db := redis.NewClient(&amp;redis.Options{
        Addr:     <span class="hljs-string">"redis-container:6379"</span>,
        Password: <span class="hljs-string">""</span>,
        DB:       <span class="hljs-number">0</span>,
    })

    _, err := db.Ping(context.Background()).Result()
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        <span class="hljs-keyword">return</span> err
    }

    <span class="hljs-keyword">return</span> <span class="hljs-literal">nil</span>
}
</code></pre>
<h3 id="heading-configuring-the-test-suite-with-testmain-e2einittestgo">Configuring the Test Suite with TestMain (e2e/init_test.go)</h3>
<p>The <code>TestMain</code> function is a special hook: when a test package defines it, Go runs it instead of the default test runner. We can use it to set up the test suite, including starting the Redis container and building the API container, before invoking <code>m.Run()</code>.</p>
<p>This file will do the following:</p>
<ul>
<li><p>Create a Docker pool to manage containers.</p>
</li>
<li><p>Create a network for the containers to communicate over.</p>
</li>
<li><p>Deploy the Redis container.</p>
</li>
<li><p>Build the API container as specified in the Dockerfile.</p>
</li>
<li><p>Deploy the API container.</p>
</li>
<li><p>Run the tests that are in the e2e_test package.</p>
</li>
<li><p>Tear down the containers and network after the tests finish.</p>
</li>
<li><p>Exit the test suite.</p>
</li>
</ul>
<pre><code class="lang-go"><span class="hljs-keyword">package</span> e2e_test

<span class="hljs-keyword">import</span> (
    <span class="hljs-string">"context"</span>
    <span class="hljs-string">"fmt"</span>
    <span class="hljs-string">"github.com/ory/dockertest/v3"</span>
    <span class="hljs-string">"github.com/ory/dockertest/v3/docker"</span>
    <span class="hljs-string">"github.com/redis/go-redis/v9"</span>
    <span class="hljs-string">"log"</span>
    <span class="hljs-string">"net/http"</span>
    <span class="hljs-string">"os"</span>
    <span class="hljs-string">"testing"</span>
)

<span class="hljs-comment">// Declare a global variable to hold the Docker pool and resource.</span>
<span class="hljs-keyword">var</span> (
    network *dockertest.Network
)

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">TestMain</span><span class="hljs-params">(m *testing.M)</span></span> {
    <span class="hljs-comment">// Initialize Docker pool and ensure it's closed at the end.</span>
    pool, err := dockertest.NewPool(<span class="hljs-string">""</span>)
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        log.Fatalf(<span class="hljs-string">"Could not connect to Docker: %v"</span>, err)
    }

    <span class="hljs-comment">// Create a Docker network for the tests.</span>
    network, err = pool.CreateNetwork(<span class="hljs-string">"test-network"</span>)
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        log.Fatalf(<span class="hljs-string">"Could not create network: %v"</span>, err)
    }

    <span class="hljs-comment">// Deploy the Redis container.</span>
    redisResource, err := deployRedis(pool)
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        log.Fatalf(<span class="hljs-string">"Could not start resource: %v"</span>, err)
    }

    <span class="hljs-comment">// Deploy the API container.</span>
    apiResource, err := deployAPIContainer(pool)
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        log.Fatalf(<span class="hljs-string">"Could not start resource: %v"</span>, err)
    }

    resources := []*dockertest.Resource{
        redisResource,
        apiResource,
    }

    <span class="hljs-comment">// Run the tests.</span>
    exitCode := m.Run()

    <span class="hljs-comment">// Exit with the appropriate code.</span>
    err = TearDown(pool, resources)
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        log.Fatalf(<span class="hljs-string">"Could not purge resource: %v"</span>, err)
    }

    os.Exit(exitCode)
}

<span class="hljs-comment">// deployRedis builds and runs the Redis container.</span>
<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">deployRedis</span><span class="hljs-params">(pool *dockertest.Pool)</span> <span class="hljs-params">(*dockertest.Resource, error)</span></span> {
    resource, err := pool.RunWithOptions(&amp;dockertest.RunOptions{
        Hostname:     <span class="hljs-string">"redis-container"</span>,
        Repository:   <span class="hljs-string">"redis"</span>,
        Tag:          <span class="hljs-string">"latest"</span>,
        ExposedPorts: []<span class="hljs-keyword">string</span>{<span class="hljs-string">"6379"</span>},
        PortBindings: <span class="hljs-keyword">map</span>[docker.Port][]docker.PortBinding{
            <span class="hljs-string">"6379/tcp"</span>: {{HostIP: <span class="hljs-string">""</span>, HostPort: <span class="hljs-string">"6379"</span>}},
        },
        Networks: []*dockertest.Network{
            network,
        },
    })
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        <span class="hljs-keyword">return</span> <span class="hljs-literal">nil</span>, fmt.Errorf(<span class="hljs-string">"could not start resource: %v"</span>, err)
    }

    <span class="hljs-comment">// Ensure the Redis container is ready to accept connections.</span>
    <span class="hljs-keyword">if</span> err := pool.Retry(<span class="hljs-function"><span class="hljs-keyword">func</span><span class="hljs-params">()</span> <span class="hljs-title">error</span></span> {
        fmt.Println(<span class="hljs-string">"Checking Redis connection..."</span>)
        db := redis.NewClient(&amp;redis.Options{
            Addr:     <span class="hljs-string">"localhost:6379"</span>,
            Password: <span class="hljs-string">""</span>,
            DB:       <span class="hljs-number">0</span>,
        })

        _, err := db.Ping(context.Background()).Result()
        <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
            <span class="hljs-keyword">return</span> err
        }

        <span class="hljs-keyword">defer</span> db.Close()

        <span class="hljs-keyword">return</span> <span class="hljs-literal">nil</span>
    }); err != <span class="hljs-literal">nil</span> {
        <span class="hljs-keyword">return</span> <span class="hljs-literal">nil</span>, fmt.Errorf(<span class="hljs-string">"could not connect to docker: %v"</span>, err)
    }

    <span class="hljs-keyword">return</span> resource, <span class="hljs-literal">nil</span>
}

<span class="hljs-comment">// TearDown purges the resources and removes the network.</span>
<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">TearDown</span><span class="hljs-params">(pool *dockertest.Pool, resources []*dockertest.Resource)</span> <span class="hljs-title">error</span></span> {
    <span class="hljs-keyword">for</span> _, resource := <span class="hljs-keyword">range</span> resources {
        <span class="hljs-keyword">if</span> err := pool.Purge(resource); err != <span class="hljs-literal">nil</span> {
            <span class="hljs-keyword">return</span> fmt.Errorf(<span class="hljs-string">"could not purge resource: %v"</span>, err)
        }
    }

    <span class="hljs-keyword">if</span> err := pool.RemoveNetwork(network); err != <span class="hljs-literal">nil</span> {
        <span class="hljs-keyword">return</span> fmt.Errorf(<span class="hljs-string">"could not remove network: %v"</span>, err)
    }

    <span class="hljs-keyword">return</span> <span class="hljs-literal">nil</span>
}

<span class="hljs-comment">// deployAPIContainer builds and runs the API container.</span>
<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">deployAPIContainer</span><span class="hljs-params">(pool *dockertest.Pool)</span> <span class="hljs-params">(*dockertest.Resource, error)</span></span> {
    <span class="hljs-comment">// build and run the API container</span>
    resource, err := pool.BuildAndRunWithBuildOptions(&amp;dockertest.BuildOptions{
        ContextDir: <span class="hljs-string">"../"</span>,
        Dockerfile: <span class="hljs-string">"Dockerfile"</span>,
    }, &amp;dockertest.RunOptions{
        Name:         <span class="hljs-string">"api-container"</span>,
        ExposedPorts: []<span class="hljs-keyword">string</span>{<span class="hljs-string">"8080"</span>},
        PortBindings: <span class="hljs-keyword">map</span>[docker.Port][]docker.PortBinding{
            <span class="hljs-string">"8080"</span>: {{HostIP: <span class="hljs-string">"0.0.0.0"</span>, HostPort: <span class="hljs-string">"8080"</span>}},
        },
        Networks: []*dockertest.Network{
            network,
        },
    })

    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        <span class="hljs-keyword">return</span> <span class="hljs-literal">nil</span>, fmt.Errorf(<span class="hljs-string">"could not start resource: %v"</span>, err)
    }

    <span class="hljs-comment">// check if the API container is ready to accept connections</span>
    <span class="hljs-keyword">if</span> err = pool.Retry(<span class="hljs-function"><span class="hljs-keyword">func</span><span class="hljs-params">()</span> <span class="hljs-title">error</span></span> {
        fmt.Println(<span class="hljs-string">"Checking API connection..."</span>)
        _, err := http.Get(<span class="hljs-string">"http://localhost:8080/healthz"</span>)
        <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
            <span class="hljs-keyword">return</span> err
        }

        <span class="hljs-keyword">return</span> <span class="hljs-literal">nil</span>
    }); err != <span class="hljs-literal">nil</span> {
        <span class="hljs-keyword">return</span> <span class="hljs-literal">nil</span>, fmt.Errorf(<span class="hljs-string">"could not start resource: %v"</span>, err)
    }

    <span class="hljs-keyword">return</span> resource, <span class="hljs-literal">nil</span>
}
</code></pre>
<p>Now we have the test suite ready and configured. We can write the actual tests in the e2e_test package.</p>
<h2 id="heading-writing-the-tests"><strong>Writing the Tests</strong></h2>
<p>The tests live in the <code>e2e_test</code> package and are driven by the <code>TestMain</code> function in the init_test.go file.</p>
<p>Whether run locally or in a CI pipeline, the suite will spin up the Redis and API containers, execute the tests, and then tear the containers down.</p>
<pre><code class="lang-go"><span class="hljs-keyword">package</span> e2e_test

<span class="hljs-keyword">import</span> (
    <span class="hljs-string">"net/http"</span>
    <span class="hljs-string">"testing"</span>
)

<span class="hljs-comment">// TestHealthRoute tests the /healthz endpoint.</span>
<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">TestHealthRoute</span><span class="hljs-params">(t *testing.T)</span></span> {
    <span class="hljs-comment">// create a get request to localhost:8080/healthz</span>
    <span class="hljs-comment">// check that the response status code is 200</span>

    request, err := http.NewRequest(<span class="hljs-string">"GET"</span>, <span class="hljs-string">"http://localhost:8080/healthz"</span>, <span class="hljs-literal">nil</span>)
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        t.Fatalf(<span class="hljs-string">"Could not create request: %v"</span>, err)
    }

    response, err := http.DefaultClient.Do(request)
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        t.Fatalf(<span class="hljs-string">"Could not make request: %v"</span>, err)
    }

    <span class="hljs-keyword">if</span> response.StatusCode != http.StatusOK {
        t.Errorf(<span class="hljs-string">"Expected status 200, got %d"</span>, response.StatusCode)
    }
}
</code></pre>
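<p>Beyond the status code, you might also assert on the JSON body. A hedged sketch of decoding it follows; the <code>healthResponse</code> type is defined here for the test's convenience and mirrors the <code>gin.H</code> payloads in main.go, it is not something the API exports.</p>

```go
package main

import (
	"encoding/json"
	"fmt"
)

// healthResponse mirrors the {"status": "..."} payload that /healthz
// returns in main.go. Defined locally for test assertions only.
type healthResponse struct {
	Status string `json:"status"`
}

// decodeHealth parses a /healthz response body into its status message.
func decodeHealth(body []byte) (string, error) {
	var h healthResponse
	if err := json.Unmarshal(body, &h); err != nil {
		return "", err
	}
	return h.Status, nil
}

func main() {
	status, _ := decodeHealth([]byte(`{"status":"Database is healthy"}`))
	fmt.Println(status)
}
```

In the real test you would call <code>decodeHealth</code> on <code>response.Body</code> after the status-code check and compare the message against the two payloads main.go can return.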
<h2 id="heading-running-the-tests"><strong>Running the Tests</strong></h2>
<p>To run the tests locally, run the following command:</p>
<pre><code class="lang-bash">go test ./...
</code></pre>
<p>This will run the tests in the e2e_test package, which will spin up the Redis and API containers, run the tests, and then tear down the containers. One benefit of using dockertest is that it will automatically pull the Redis image if it is not already present on the local machine and build the API image from the project's Dockerfile. Other tests in the project will run as usual.</p>
<p>The TestMain function will be used only for running tests inside the e2e_test package.</p>
<p>If you have another test package that needs a similar setup/teardown logic, you can create a similar TestMain function in that package.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>In this post, we looked at how to use dockertest to write automated tests for a Go application. We saw how to use dockertest to spin up a temporary Redis container and a temporary API container for our tests, then tear it down after the tests are finished.</p>
<p>The ory/dockertest package provides a powerful and convenient way to spin up Docker containers for testing dependencies. By handling the setup and teardown of containers in the background, it frees us to focus on writing robust integration tests for our Go applications. The full lifecycle testing enabled by dockertest increases confidence in releasing code to production. While it does require configuring tests to use containers, this overhead pays dividends in catching issues early. The dockertest documentation contains many more examples for testing various databases, queues, and services. Overall, dockertest is an invaluable tool for anyone looking to improve their automated testing workflows with Docker and Go.</p>
<h2 id="heading-references">References</h2>
<p>ory/dockertest - <a target="_blank" href="https://github.com/ory/dockertest">https://github.com/ory/dockertest</a></p>
<p>go-e2e-test-blog - <a target="_blank" href="https://github.com/cksidharthan/go-e2e-test-blog">https://github.com/cksidharthan/go-e2e-test-blog</a></p>
]]></content:encoded></item><item><title><![CDATA[Unleashing Productivity with a Customized Zsh Terminal]]></title><description><![CDATA[The terminal is a developer's playground, and customizing it can significantly boost your productivity. In this blog post, I'll walk you through my Zsh terminal configuration, including the shell prompt, plugins, and additional tools that I use to st...]]></description><link>https://thebugshots.dev/unleashing-productivity-with-a-customized-zsh-terminal</link><guid isPermaLink="true">https://thebugshots.dev/unleashing-productivity-with-a-customized-zsh-terminal</guid><category><![CDATA[zsh]]></category><category><![CDATA[terminal]]></category><category><![CDATA[Developer Tools]]></category><category><![CDATA[terminal prompt]]></category><category><![CDATA[starship]]></category><dc:creator><![CDATA[Sidharthan Chandrasekaran Kamaraj]]></dc:creator><pubDate>Sat, 16 Sep 2023 19:41:47 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1694892801918/154eee36-f3bc-42c3-bf24-12f90c60074a.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The terminal is a developer's playground, and customizing it can significantly boost your productivity. In this blog post, I'll walk you through my Zsh terminal configuration, including the shell prompt, plugins, and additional tools that I use to streamline my workflow. By the end of this post, you'll have a comprehensive understanding of how to supercharge your terminal setup.</p>
<h2 id="heading-starship-prompt-a-versatile-and-customizable-command-prompt"><strong>Starship Prompt: A Versatile and Customizable Command Prompt</strong></h2>
<p>The command prompt is your gateway to the terminal, and Starship takes it to the next level. Here's a closer look at why I prefer it:</p>
<p><strong>Customizability Beyond Compare</strong>: Starship's configuration is based on a simple TOML file, making it incredibly approachable for customization. You can define not only the prompt's appearance but also its behavior. Want to display the current Git branch, Python version, and virtual environment? Starship can do that.</p>
<p><strong>Language Detection</strong>: Starship can intelligently detect the programming language of your current project. For instance, if you're in a Python project directory, it will display a Python logo and provide information related to Python. This dynamic adaptation ensures your prompt is always context-aware.</p>
<p><strong>Multi-Shell Compatibility</strong>: Starship isn't just for Zsh enthusiasts; it's designed to work across multiple shells. Whether you switch between Zsh, Bash, Fish, or even PowerShell, you can have a consistent and feature-rich prompt experience.</p>
<p>To install Starship, follow the instructions on their <a target="_blank" href="https://starship.rs"><strong>official website</strong></a>.</p>
<h2 id="heading-supercharging-zsh-with-plugins"><strong>Supercharging Zsh with Plugins</strong></h2>
<p>Zsh's extensibility is one of its greatest strengths. With an array of plugins at your disposal, you can fine-tune your shell to perfection. Let's explore the plugins that enhance my Zsh experience:</p>
<p><a target="_blank" href="https://github.com/ohmyzsh/ohmyzsh/blob/master/plugins/kubectl/README.md"><strong>Kubectl Plugin</strong></a>: Kubernetes is a cornerstone of modern containerized applications. The kubectl plugin provides auto-completions and suggestions for kubectl commands, streamlining your interaction with Kubernetes clusters.</p>
<p><a target="_blank" href="https://github.com/zsh-users/zsh-autosuggestions"><strong>zsh-autosuggestions</strong></a>: This plugin takes inspiration from modern text editors. As you type, it predicts and suggests commands based on your history and habits, reducing keystrokes and minimizing typos.</p>
<p><a target="_blank" href="https://github.com/zsh-users/zsh-syntax-highlighting"><strong>zsh-syntax-highlighting</strong>:</a> Command syntax highlighting is more than just eye candy; it's a real-time error-prevention tool. With this plugin, your commands are color-coded as you type, helping you spot potential mistakes before you run them.</p>
<p><a target="_blank" href="https://github.com/MichaelAquilina/zsh-you-should-use"><strong>you-should-use</strong></a>: It's easy to forget the optimal command in a given situation. This plugin acts as your knowledgeable assistant, suggesting better alternatives to the commands you're using. It's like having a mentor in your terminal.</p>
<p><a target="_blank" href="https://github.com/ohmyzsh/ohmyzsh/blob/master/plugins/alias-finder/README.md"><strong>alias-finder</strong></a>: Managing aliases can become unwieldy, especially if you have many of them. The alias-finder plugin simplifies the process by allowing you to search for and discover your aliases effortlessly.</p>
<h2 id="heading-fzf-the-ultimate-fuzzy-finder"><strong>FZF: The Ultimate Fuzzy Finder</strong></h2>
<p>Traditional command history search (Ctrl + R) can be hit or miss, especially as your history grows. Enter <a target="_blank" href="https://github.com/junegunn/fzf">FZF (Fuzzy Finder)</a>, a remarkable tool for searching through your history and finding files:</p>
<p><strong>Efficient and Intuitive Searching</strong>: FZF introduces a fuzzy search mechanism that understands typos and incomplete input. It prioritizes and ranks results based on relevance, making it an intuitive and efficient way to locate the command you need.</p>
<p><strong>Versatile File Navigation</strong>: Beyond history, FZF extends its capabilities to file navigation. You can seamlessly locate and open files with just a few keystrokes, simplifying your interactions with your file system.</p>
<p><strong>Customizable Keybindings</strong>: FZF's flexibility doesn't stop at search behavior. You can create custom keybindings to tailor the tool to your specific preferences and workflow.</p>
<h2 id="heading-zoxide-your-path-to-effortless-file-navigation"><strong>Zoxide: Your Path to Effortless File Navigation</strong></h2>
<p>File navigation is a fundamental aspect of terminal usage, and <a target="_blank" href="https://github.com/ajeetdsouza/zoxide">Zoxide</a> takes it to the next level:</p>
<p><strong>Intelligent Path Suggestion</strong>: Zoxide's magic lies in its ability to predict the path you want to navigate to based on your usage patterns. This means less time spent manually traversing directories and more time focused on your tasks.</p>
<p><strong>Interactive Directory Jumping</strong>: With Zoxide, changing directories becomes an interactive experience. A simple 'z' command followed by a search term takes you directly to your desired directory.</p>
<p><strong>Cross-Shell Compatibility</strong>: Just like Starship, Zoxide is not confined to a single shell. You can use it with Zsh, Bash, Fish, and others, ensuring a consistent experience across different shell environments.</p>
<p><strong>TIP</strong>: I created an alias for zoxide with <code>alias cd=z</code> in the <code>~/.zshrc</code> file so that my workflow is unchanged :)</p>
<h2 id="heading-conclusion"><strong>Conclusion</strong></h2>
<p>Your terminal is more than just a command line; it's your productivity hub. By configuring Zsh with Starship and essential plugins, embracing FZF for efficient searches, and adopting Zoxide for intuitive file navigation, you can transform your terminal into a powerhouse of productivity.</p>
<p>Experiment with these tools, fine-tune them to your liking, and watch your terminal experience become more intuitive, efficient, and enjoyable. Whether you're a seasoned developer or just starting your journey, a well-crafted terminal setup can make a world of difference in your daily workflow.</p>
<p>Below is my basic <code>.zshrc</code> file :)</p>
<pre><code class="lang-bash">ZSH_THEME=<span class="hljs-string">"frisk"</span> <span class="hljs-comment"># set by `omz`</span>

<span class="hljs-comment"># Path to your oh-my-zsh installation.</span>
<span class="hljs-built_in">export</span> ZSH=<span class="hljs-string">"<span class="hljs-variable">$HOME</span>/.oh-my-zsh"</span>

fpath+=<span class="hljs-variable">${ZSH_CUSTOM:-<span class="hljs-variable">${ZSH:-~/.oh-my-zsh}</span>/custom}</span>/plugins/zsh-completions/src
<span class="hljs-built_in">source</span> <span class="hljs-variable">$ZSH</span>/oh-my-zsh.sh

<span class="hljs-built_in">export</span> GOROOT=<span class="hljs-string">"/opt/homebrew/Cellar/go/1.21.0/libexec"</span>
<span class="hljs-built_in">export</span> GOPATH=<span class="hljs-variable">$HOME</span>/go
<span class="hljs-built_in">export</span> PATH=<span class="hljs-variable">$PATH</span>:<span class="hljs-variable">$GOPATH</span>/bin

<span class="hljs-comment"># Starship Config</span>
<span class="hljs-built_in">eval</span> <span class="hljs-string">"<span class="hljs-subst">$(starship init zsh)</span>"</span>
<span class="hljs-built_in">export</span> STARSHIP_CONFIG=~/.config/starship/starship.toml

<span class="hljs-comment"># Zoxide</span>
<span class="hljs-built_in">eval</span> <span class="hljs-string">"<span class="hljs-subst">$(zoxide init zsh)</span>"</span>
<span class="hljs-built_in">alias</span> <span class="hljs-built_in">cd</span>=<span class="hljs-string">"z"</span>

plugins=(
        kubectl
        git
        alias-finder
        <span class="hljs-built_in">history</span>
        zsh-autosuggestions
        zsh-syntax-highlighting
        you-should-use
)

<span class="hljs-comment"># Syntax Highlighting</span>
<span class="hljs-built_in">source</span> /opt/homebrew/share/zsh-syntax-highlighting/zsh-syntax-highlighting.zsh
<span class="hljs-built_in">export</span> ZSH_HIGHLIGHT_HIGHLIGHTERS_DIR=/opt/homebrew/share/zsh-syntax-highlighting/highlighters

<span class="hljs-comment"># Example aliases</span>
<span class="hljs-built_in">alias</span> zshconfig=<span class="hljs-string">"mate ~/.zshrc"</span>
<span class="hljs-built_in">alias</span> ohmyzsh=<span class="hljs-string">"mate ~/.oh-my-zsh"</span>
<span class="hljs-built_in">alias</span> home=<span class="hljs-string">"cd ~"</span>

<span class="hljs-built_in">export</span> BAT_THEME=<span class="hljs-string">"Monokai Extended Bright"</span>

[ -f ~/.fzf.zsh ] &amp;&amp; <span class="hljs-built_in">source</span> ~/.fzf.zsh

<span class="hljs-comment"># Kubectl alias</span>
<span class="hljs-built_in">alias</span> k=<span class="hljs-string">"kubectl"</span>

<span class="hljs-comment"># Kubernetes Autocompletions</span>
<span class="hljs-built_in">autoload</span> -Uz compinit
compinit
<span class="hljs-built_in">source</span> &lt;(kubectl completion zsh)

<span class="hljs-comment"># Go</span>
<span class="hljs-built_in">alias</span> makedeps=<span class="hljs-string">"go mod download &amp;&amp; go mod tidy &amp;&amp; go mod verify &amp;&amp; go mod vendor"</span>

<span class="hljs-built_in">source</span> /Users/sid/.docker/init-zsh.sh || <span class="hljs-literal">true</span> <span class="hljs-comment"># Added by Docker Desktop</span>
</code></pre>
<p>Below is the <code>starship.toml</code> file</p>
<pre><code class="lang-bash"><span class="hljs-comment">#### starship.toml file</span>

<span class="hljs-comment"># Insert a blank line between shell prompts</span>
add_newline = <span class="hljs-literal">true</span>

<span class="hljs-comment"># Use "✗" as the error symbol in the prompt</span>
[character]      <span class="hljs-comment"># The name of the module we are configuring is "character"</span>
error_symbol = <span class="hljs-string">"✗"</span>

<span class="hljs-comment"># Disable the package module, hiding it from the prompt completely</span>
[package]
disabled = <span class="hljs-literal">true</span>

[battery]
disabled = <span class="hljs-literal">false</span>
full_symbol = <span class="hljs-string">"🔋 "</span>
charging_symbol = <span class="hljs-string">"⚡️ "</span>

[[battery.display]]  <span class="hljs-comment"># "bold red" style when capacity is between 0% and 10%</span>
threshold = 10
style = <span class="hljs-string">"bold red"</span>

[[battery.display]]  <span class="hljs-comment"># "bold yellow" style when capacity is between 10% and 30%</span>
threshold = 30
style = <span class="hljs-string">"bold yellow"</span>

[[battery.display]]
threshold = 70
style = <span class="hljs-string">"bold blue"</span>

[[battery.display]]
threshold = 100
style = <span class="hljs-string">"bold green"</span>

[directory]
<span class="hljs-comment"># truncation_length = 4</span>

<span class="hljs-comment"># docker</span>
[docker_context]
symbol = <span class="hljs-string">"🐋 "</span>
disabled = <span class="hljs-literal">false</span>

<span class="hljs-comment"># git</span>
[git_commit]
commit_hash_length = 6

[git_status]
style=<span class="hljs-string">"dimmed green"</span>
up_to_date = <span class="hljs-string">"[✓](green)"</span>
staged = <span class="hljs-string">'[++\($count\)](green)'</span>

<span class="hljs-comment"># hostname</span>
[hostname]
ssh_only = <span class="hljs-literal">false</span>
prefix = <span class="hljs-string">"⟪"</span>
suffix = <span class="hljs-string">"⟫"</span>
trim_at = <span class="hljs-string">".companyname.com"</span>
disabled = <span class="hljs-literal">true</span>

<span class="hljs-comment"># kubernetes</span>
[kubernetes]
format = <span class="hljs-string">'on [⎈ ($cluster) ](#FFA500)'</span>
style = <span class="hljs-string">"dimmed green"</span>
disabled = <span class="hljs-literal">false</span>
</code></pre>
]]></content:encoded></item><item><title><![CDATA[Moving from Azure Git Wiki to Vitepress: A Painless Journey to Speedy Documentation]]></title><description><![CDATA[In the world of software development, documentation is often an overlooked yet crucial aspect of a project's success. At our company, we rely on Azure DevOps for software development, and one of the included packages is Azure Git Wiki. Initially, we ...]]></description><link>https://thebugshots.dev/moving-from-azure-git-wiki-to-vitepress-a-painless-journey-to-speedy-documentation</link><guid isPermaLink="true">https://thebugshots.dev/moving-from-azure-git-wiki-to-vitepress-a-painless-journey-to-speedy-documentation</guid><category><![CDATA[documentation]]></category><category><![CDATA[#vitepress]]></category><category><![CDATA[Static Website]]></category><category><![CDATA[#AzureDevOps]]></category><category><![CDATA[azure wiki]]></category><dc:creator><![CDATA[Sidharthan Chandrasekaran Kamaraj]]></dc:creator><pubDate>Fri, 15 Sep 2023 20:54:52 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1694033909209/adcd4132-f99b-4069-a1a7-6eaf63a0a2d7.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In the world of software development, documentation is often an overlooked yet crucial aspect of a project's success. At our company, we rely on Azure DevOps for software development, and one of the included packages is Azure Git Wiki. Initially, we found Azure Git Wiki to be a convenient choice for documentation due to its rich markdown editor and support for features like Mermaid diagrams. However, as time went on, we encountered performance issues that hindered our productivity. The website became slow and sometimes unresponsive, and the search functionality was frustratingly sluggish. This situation led to undocumented findings and knowledge gaps within our team.</p>
<p>Faced with these challenges, we embarked on a journey to find an alternative documentation solution that would allow us to continue using markdown for documentation, offer a painless migration process, and enhance the overall documentation experience. Our search led us to Vitepress, a static site generator that proved to be the perfect fit for our needs. In this blog post, we'll delve into our transition from Azure Git Wiki to Vitepress, highlighting the reasons behind our decision and the steps we took to make it happen.</p>
<h2 id="heading-why-we-sought-an-alternative"><strong>Why We Sought an Alternative</strong></h2>
<p><strong>Performance Issues:</strong> The primary motivation behind our quest for an alternative documentation platform was the performance issues we encountered with Azure Git Wiki. The website's slowness and occasional unresponsiveness became major roadblocks to efficient documentation creation and access.</p>
<p><strong>Slow Search:</strong> The search functionality in Azure Git Wiki was frustratingly slow, making it challenging to find the information we needed quickly. This hindered our ability to navigate and explore the documentation effectively.</p>
<p><strong>Lack of Contributor Engagement:</strong> The combination of performance issues and slow search functionality led to a lack of enthusiasm among team members for contributing to the documentation. Valuable insights and findings remained undocumented, which was detrimental to our knowledge-sharing efforts.</p>
<h2 id="heading-exploring-alternative-documentation-solutions"><strong>Exploring Alternative Documentation Solutions</strong></h2>
<p>In our search for an alternative, we explored various static site generators, including Vuepress, Hugo, and Docusaurus. However, Vitepress emerged as the standout choice for several reasons:</p>
<p><strong>Folder Structure:</strong> Vitepress offered an intuitive and easy-to-understand folder structure for organizing our documentation. This simplicity was crucial to ensuring that our team could easily navigate and contribute to the documentation without getting lost in complex project structures.</p>
<p><strong>Search Functionality:</strong> Vitepress came with built-in search functionality, addressing one of our primary pain points. This feature greatly improved the speed and efficiency of searching and navigating through our documentation.</p>
<p><strong>Community Support:</strong> Vitepress had an active community, with plugins and themes readily available. This allowed us to enhance our documentation with features like Mermaid diagrams and customize the look and feel of our documentation site effortlessly.</p>
<h2 id="heading-migration-process"><strong>Migration Process</strong></h2>
<p>Our migration from Azure Git Wiki to Vitepress was carefully planned and executed. Here are the key steps we followed:</p>
<p><strong>Create a New "Docs" Folder:</strong> We started by creating a new folder named <code>docs</code> and moved our existing documentation into this folder while maintaining the original folder structure.</p>
<p><strong>Configure Vitepress:</strong> We made the necessary configuration changes in the <code>config.js</code> file located inside the <code>.vitepress</code> folder. These changes included setting up the navigation structure, theme customization, and enabling the built-in search functionality.</p>
<p><strong>Local Testing:</strong> Before deploying the documentation, we ran the site locally to ensure that everything worked as expected. This step allowed us to catch any issues or discrepancies in the migration process.</p>
<p><strong>CI/CD Pipeline:</strong> To ensure continuous deployment, we configured a CI/CD pipeline in our development environment. This automated the process of publishing changes to our Vitepress documentation site whenever updates were pushed to the repository.</p>
<h2 id="heading-benefits-of-the-transition"><strong>Benefits of the Transition</strong></h2>
<p>The transition to Vitepress brought about several notable benefits:</p>
<p><strong>Speed and Performance:</strong> Vitepress proved to be significantly faster and more responsive than Azure Git Wiki, eliminating the frustrations we previously experienced.</p>
<p><strong>Improved Search:</strong> The built-in search functionality in Vitepress drastically improved the speed and accuracy of searching within our documentation, making it a joy to use.</p>
<p><strong>Enhanced Contribution:</strong> With the improved performance and user-friendly interface of Vitepress, team members were more willing to contribute actively to the documentation, leading to a more comprehensive and up-to-date knowledge base.</p>
<h2 id="heading-conclusion"><strong>Conclusion</strong></h2>
<p>In summary, our journey from Azure Git Wiki to Vitepress was driven by the need for faster, more responsive documentation that would encourage team collaboration and knowledge sharing. Vitepress not only met but exceeded our expectations with its simplicity, performance, and search capabilities. The painless migration process ensured a smooth transition, and the positive impact on our team's productivity and engagement was immediately evident.</p>
<p>By making the switch to Vitepress, we've not only improved our documentation workflow but also created a more conducive environment for learning and sharing within our development team. The power of a well-structured and efficient documentation system cannot be understated, and with Vitepress, we've found a solution that fits our needs perfectly.</p>
]]></content:encoded></item><item><title><![CDATA[Synchronising Periodic Tasks and Graceful Shutdown with Goroutines and Tickers | Golang]]></title><description><![CDATA[Goroutines and channels are very useful and powerful primitives provided in the Go programming language for concurrently handling events, signals, and other asynchronous operations. The ticker can be used to create a repeatedly firing timer that will...]]></description><link>https://thebugshots.dev/synchronising-periodic-tasks-and-graceful-shutdown-with-goroutines-and-tickers-golang</link><guid isPermaLink="true">https://thebugshots.dev/synchronising-periodic-tasks-and-graceful-shutdown-with-goroutines-and-tickers-golang</guid><category><![CDATA[golang]]></category><category><![CDATA[synchronization]]></category><category><![CDATA[signals]]></category><category><![CDATA[posix]]></category><category><![CDATA[tickers]]></category><dc:creator><![CDATA[Sidharthan Chandrasekaran Kamaraj]]></dc:creator><pubDate>Tue, 12 Sep 2023 14:40:45 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1694529513075/b3af6d50-4ffa-4e1a-bfe4-a7c70f92d58b.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Goroutines and channels are very useful and powerful primitives provided in the Go programming language for concurrently handling events, signals, and other asynchronous operations. The ticker can be used to create a repeatedly firing timer that will send signals on a channel at regular intervals. This channel can be passed to a goroutine that blocks waiting to receive from it. The goroutine provides concurrency safety while the ticker allows periodic checking and responding to external async events like signals. In this blog post, we will explore a specific example of utilizing a goroutine together with a ticker from the time package to gracefully handle POSIX operating system signals like SIGTERM and SIGINT.</p>
<h2 id="heading-the-scenario"><strong>The Scenario</strong></h2>
<p>Imagine we have a long-running Go program that needs to execute some logic periodically. For example, it could be polling a server for updates or performing other scheduled tasks. In such a scenario, we want the program to:</p>
<ol>
<li><p>Execute the logic every minute.</p>
</li>
<li><p>Gracefully shut down when it receives a SIGTERM or SIGINT signal.</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1694532012126/631c7a16-b654-4cf9-9dfd-dec4907bffbf.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-a-ticker-goroutine"><strong>A Ticker Goroutine</strong></h2>
<p>Go's time package provides a convenient Ticker object that can fire events on a channel at regular intervals. Here's how we can create a ticker that fires every minute:</p>
<pre><code class="lang-go">ticker := time.NewTicker(time.Minute)
</code></pre>
<p>Additionally, we need a channel to receive OS signals:</p>
<pre><code class="lang-go">signalChannel := <span class="hljs-built_in">make</span>(<span class="hljs-keyword">chan</span> os.Signal, <span class="hljs-number">1</span>)
signal.Notify(signalChannel, syscall.SIGTERM, syscall.SIGINT)
</code></pre>
<p>With the ticker and signal channel set up, we can proceed to launch a goroutine that uses them:</p>
<pre><code class="lang-go"><span class="hljs-keyword">go</span> <span class="hljs-function"><span class="hljs-keyword">func</span><span class="hljs-params">(ticker *time.Ticker)</span></span> {
    <span class="hljs-keyword">for</span> {
        <span class="hljs-keyword">select</span> {
        <span class="hljs-keyword">case</span> &lt;-ticker.C:
            <span class="hljs-comment">// Run required logic every minute</span>
            fmt.Println(<span class="hljs-string">"Ticker was fired!"</span>)
            ExampleLogic()
        <span class="hljs-keyword">case</span> &lt;-signalChannel:
            <span class="hljs-comment">// remove tempfiles, close database connections etc.,</span>
            <span class="hljs-comment">// Shutdown goroutine</span>
            os.Exit(<span class="hljs-number">0</span>)
        }
    }
}(ticker)
</code></pre>
<p>In the code above, we define an anonymous function that accepts the ticker as a parameter. Inside this goroutine:</p>
<ul>
<li><p>The <code>select</code> block concurrently waits on the ticker and signal channel.</p>
</li>
<li><p>When the ticker fires, it triggers the execution of the recurring logic.</p>
</li>
<li><p>If a signal (SIGTERM or SIGINT) is received, the goroutine proceeds to handle the graceful shutdown logic.</p>
</li>
</ul>
<p><strong>Putting it All Together</strong></p>
<p>Our <code>main</code> function simply needs to start the ticker goroutine:</p>
<pre><code class="lang-go"><span class="hljs-keyword">package</span> main

<span class="hljs-keyword">import</span> (
    <span class="hljs-string">"fmt"</span>
    <span class="hljs-string">"os"</span>
    <span class="hljs-string">"os/signal"</span>
    <span class="hljs-string">"syscall"</span>
    <span class="hljs-string">"time"</span>
)

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">main</span><span class="hljs-params">()</span></span> {
    ticker := time.NewTicker(time.Minute)
    TriggerGoroutine(ticker)
    time.Sleep(<span class="hljs-number">10</span> * time.Minute)
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">TriggerGoroutine</span><span class="hljs-params">(ticker *time.Ticker)</span></span> {

    signalChannel := <span class="hljs-built_in">make</span>(<span class="hljs-keyword">chan</span> os.Signal, <span class="hljs-number">1</span>)
    signal.Notify(signalChannel, syscall.SIGTERM, syscall.SIGINT)

    <span class="hljs-keyword">go</span> <span class="hljs-function"><span class="hljs-keyword">func</span><span class="hljs-params">(ticker *time.Ticker)</span></span> {
    <span class="hljs-keyword">for</span> {
        <span class="hljs-keyword">select</span> {
        <span class="hljs-keyword">case</span> &lt;-ticker.C:
            <span class="hljs-comment">// Run required logic every minute</span>
            fmt.Println(<span class="hljs-string">"Ticker was fired!"</span>)
            ExampleLogic()
        <span class="hljs-keyword">case</span> &lt;-signalChannel:
            <span class="hljs-comment">// remove tempfiles, close database connections etc.,</span>
            <span class="hljs-comment">// Shutdown goroutine</span>
            fmt.Println(<span class="hljs-string">"Got signal, exiting..."</span>)
            os.Exit(<span class="hljs-number">0</span>)
        }
    }
    }(ticker)
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">ExampleLogic</span><span class="hljs-params">()</span></span> {
    fmt.Println(<span class="hljs-string">"ExampleLogic was called!"</span>)
}
</code></pre>
<p>With this setup, the goroutine will execute every minute until the program is terminated with CTRL-C or a kill signal. When a shutdown signal comes in, the goroutine will handle the graceful shutdown logic before the program exits.</p>
<p>This pattern provides a clean and efficient way to build periodically executing processes that respond to common signals, ensuring the program behaves predictably and gracefully.</p>
<p>Feel free to reach out if you have any questions or need further clarification!</p>
]]></content:encoded></item><item><title><![CDATA[Safeguard Your REST APIs Using Open Policy Agent - OPA]]></title><description><![CDATA[Authorization is a crucial concern for most applications. As app logic grows, permission checks often get scattered across handlers, middlewares, and external services. This leads to duplicated logic and inconsistencies.
Open Policy Agent (OPA) provi...]]></description><link>https://thebugshots.dev/safeguard-your-rest-apis-using-open-policy-agent-opa</link><guid isPermaLink="true">https://thebugshots.dev/safeguard-your-rest-apis-using-open-policy-agent-opa</guid><category><![CDATA[opa]]></category><category><![CDATA[openpolicyagent]]></category><category><![CDATA[golang]]></category><category><![CDATA[gin]]></category><category><![CDATA[authorization]]></category><dc:creator><![CDATA[Sidharthan Chandrasekaran Kamaraj]]></dc:creator><pubDate>Wed, 06 Sep 2023 20:00:58 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1694030340932/ea5db487-850d-4199-97f5-7b567c9eb537.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Authorization is a crucial concern for most applications. As app logic grows, permission checks often get scattered across handlers, middlewares, and external services. This leads to duplicated logic and inconsistencies.</p>
<p>Open Policy Agent (OPA) provides a unified approach to managing authorization policies separately from application code. The same policy file can be reused across applications written in multiple languages, but for this blog, we'll integrate it with a Golang application.</p>
<h2 id="heading-introduction-to-opa">Introduction to OPA</h2>
<p>OPA is an open-source policy engine that evaluates policies to make decisions about access control, configuration validation, quota management and more.</p>
<p>OPA decouples policy decisions from policy enforcement. Developers define policies using OPA’s declarative language Rego. These policies get enforced across infrastructure like API gateways, Kubernetes, CI/CD pipelines etc.</p>
<h2 id="heading-creating-opa-policy">Creating OPA Policy</h2>
<p>OPA policies are written in the Rego language. Rego is a declarative language designed for specifying policy rules concisely.</p>
<p>Some key concepts in Rego:</p>
<ul>
<li><p><strong>Rules</strong> - These define relations between entities. For example:</p>
<pre><code class="lang-rego">  allow <span class="hljs-punctuation">{
</span>    <span class="hljs-variable">input</span>.<span class="hljs-variable">subject</span>.clearance == <span class="hljs-string">"secret"</span>
    <span class="hljs-variable">input</span>.action == <span class="hljs-string">"GET"</span>
  <span class="hljs-punctuation">}</span>
</code></pre>
</li>
<li><p><strong>Packages</strong> - Related rules can be grouped into packages:</p>
<pre><code class="lang-rego">  <span class="hljs-keyword">package</span> authz

  allow <span class="hljs-punctuation">{.</span><span class="hljs-punctuation">..</span><span class="hljs-punctuation">}
</span>
  deny <span class="hljs-punctuation">{.</span><span class="hljs-punctuation">..</span><span class="hljs-punctuation">}</span>
</code></pre>
</li>
<li><p><strong>Input</strong> - This contains the request details like subject, action etc.</p>
</li>
<li><p><strong>Query</strong> - Executing a rule returns true/false for the query.</p>
</li>
</ul>
<p>Let's look at an example policy implementing role-based access control:</p>
<pre><code class="lang-rego"><span class="hljs-comment"># filename - auth.rego</span>
<span class="hljs-keyword">package</span> authz

<span class="hljs-keyword">import</span> <span class="hljs-variable">future</span>.keywords

<span class="hljs-keyword">default</span> allow = <span class="hljs-variable">false</span>

allow <span class="hljs-punctuation">{
</span>  <span class="hljs-variable">input</span>.role == <span class="hljs-string">"admin"</span>
  access_groups = <span class="hljs-punctuation">["</span>write<span class="hljs-string">", "</span>read<span class="hljs-string">"]
  input.access in access_groups
}

allow {
  input.role == "</span>user<span class="hljs-string">"
  access_groups = ["</span>read<span class="hljs-string">"]
  input.access in access_groups
}</span>
</code></pre>
<p>This Rego file grants the admin both <code>read</code> and <code>write</code> access, while restricting the user to <code>read</code> access only.</p>
<h2 id="heading-testing-opa-policies">Testing OPA Policies</h2>
<p>We can test the OPA file that we have created above with a test file like below</p>
<pre><code class="lang-rego"># filename - auth_test.rego
package authz

# Test allow rule for admin
test_allow_admin_write {
  allow with input as {"role": "admin", "access": "write"}
}

test_allow_admin_read {
  allow with input as {"role": "admin", "access": "read"}
}

# Test allow rule for user
test_allow_user_read {
  allow with input as {"role": "user", "access": "read"}
}

test_deny_user_write {
  not allow with input as {"role": "user", "access": "write"}
}

# Test default deny
test_default_deny {
  not allow with input as {"role": "unknown", "access": "something"}
}</code></pre>
<p>To verify that the tests pass and the Rego file behaves as expected, run them using the command below.</p>
<p><strong>NOTE:</strong> You will need to install the OPA tool to run the commands. <a target="_blank" href="https://sangkeon.github.io/opaguide/chap2/installandusage.html">Click here to view the install docs</a></p>
<pre><code class="lang-rego">$❯ opa test <span class="hljs-variable">auth</span>.rego <span class="hljs-variable">auth_test</span>.rego --verbose

<span class="hljs-variable">auth_test</span>.rego:
<span class="hljs-variable">data</span>.<span class="hljs-variable">authz</span>.test_allow_admin_write: PASS <span class="hljs-punctuation">(913.</span><span class="hljs-number">375</span>µs<span class="hljs-punctuation">)
</span><span class="hljs-variable">data</span>.<span class="hljs-variable">authz</span>.test_allow_admin_read: PASS <span class="hljs-punctuation">(133.</span><span class="hljs-number">875</span>µs<span class="hljs-punctuation">)
</span><span class="hljs-variable">data</span>.<span class="hljs-variable">authz</span>.test_allow_user_read: PASS <span class="hljs-punctuation">(194.</span><span class="hljs-number">625</span>µs<span class="hljs-punctuation">)
</span><span class="hljs-variable">data</span>.<span class="hljs-variable">authz</span>.test_deny_user_write: PASS <span class="hljs-punctuation">(186.</span><span class="hljs-number">958</span>µs<span class="hljs-punctuation">)
</span><span class="hljs-variable">data</span>.<span class="hljs-variable">authz</span>.test_default_deny: PASS <span class="hljs-punctuation">(120.</span><span class="hljs-number">333</span>µs<span class="hljs-punctuation">)
</span>--------------------------------------------------------------------------------
PASS: <span class="hljs-number">5</span>/<span class="hljs-number">5</span>
</code></pre>
<p>You can also check the test coverage of the policy using the command below.</p>
<pre><code class="lang-rego">$❯ opa test <span class="hljs-variable">auth</span>.rego <span class="hljs-variable">auth_test</span>.rego --<span class="hljs-variable">coverage</span>

<span class="hljs-punctuation">..</span><span class="hljs-punctuation">. </span>start of the output
  <span class="hljs-string">"covered_lines"</span>: <span class="hljs-number">19</span><span class="hljs-punctuation">,
</span>  <span class="hljs-string">"not_covered_lines"</span>: <span class="hljs-number">0</span><span class="hljs-punctuation">,
</span>  <span class="hljs-string">"coverage"</span>: <span class="hljs-number">100</span>
<span class="hljs-punctuation">..</span><span class="hljs-punctuation">..</span>
</code></pre>
<h2 id="heading-integrating-opa-with-golang-gin-application">Integrating OPA with Golang Gin Application</h2>
<h3 id="heading-creating-gin-middleware">Creating Gin Middleware</h3>
<p>To integrate OPA with a Golang Gin application, we need to add it as middleware, as shown below.</p>
<pre><code class="lang-go"><span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">OpaMiddlware</span><span class="hljs-params">()</span> <span class="hljs-title">gin</span>.<span class="hljs-title">HandlerFunc</span></span> {
    <span class="hljs-comment">// open rego file</span>
    authzFile, err := os.Open(<span class="hljs-string">"auth.rego"</span>)
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
       log.Fatalf(<span class="hljs-string">"error opening file: %v"</span>, err)
    }
    <span class="hljs-keyword">defer</span> authzFile.Close()

    <span class="hljs-comment">// read rego file</span>
    module, err := io.ReadAll(authzFile)
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
       log.Fatalf(<span class="hljs-string">"error reading file: %v"</span>, err)
    }

    <span class="hljs-comment">// return middleware</span>
    <span class="hljs-keyword">return</span> <span class="hljs-function"><span class="hljs-keyword">func</span><span class="hljs-params">(c *gin.Context)</span></span> {
       <span class="hljs-comment">// prepare rego query</span>
       query, err := rego.New(
          rego.Query(<span class="hljs-string">"data.authz.allow"</span>),
          rego.Module(<span class="hljs-string">"authz.rego"</span>, <span class="hljs-keyword">string</span>(module)),
       ).PrepareForEval(c)
       <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
          log.Printf(<span class="hljs-string">"error preparing query: %v\n"</span>, err)
       }

       <span class="hljs-comment">// evaluate rego query by supplying values extracted from header</span>
       result, err := query.Eval(context.Background(), rego.EvalInput(<span class="hljs-keyword">map</span>[<span class="hljs-keyword">string</span>]<span class="hljs-keyword">interface</span>{}{
          <span class="hljs-string">"role"</span>: c.Request.Header.Get(<span class="hljs-string">"role"</span>),
          <span class="hljs-string">"access"</span>:   c.Request.Header.Get(<span class="hljs-string">"access"</span>),
       }))
       <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
          log.Printf(<span class="hljs-string">"error evaluating query: %v\n"</span>, err)
       }

       <span class="hljs-comment">// check if the user is allowed to access the resource</span>
       <span class="hljs-keyword">if</span> result[<span class="hljs-number">0</span>].Expressions[<span class="hljs-number">0</span>].Value == <span class="hljs-literal">true</span> {
          c.Next()
          <span class="hljs-keyword">return</span>
       } <span class="hljs-keyword">else</span> {
          c.JSON(http.StatusForbidden, gin.H{
             <span class="hljs-string">"message"</span>: <span class="hljs-string">"access forbidden"</span>,
          })
          c.Abort()
          <span class="hljs-keyword">return</span>
       }
    }
}
</code></pre>
<p>This middleware reads the <code>auth.rego</code> file and prepares the Rego query. It returns a handler function that evaluates the OPA policy for every incoming request, supplying the request headers as input, before forwarding the request to the appropriate endpoint. Note that the query is prepared inside the handler, i.e. on every request; for production use, preparing it once when the middleware is constructed would avoid repeating that work.</p>
<h3 id="heading-integrating-auth-middleware-with-gin-router">Integrating Auth Middleware with Gin Router</h3>
<p>We'll instruct Gin to attach the middleware to the router by calling the <code>router.Use</code> function, as shown below.</p>
<pre><code class="lang-go"><span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">main</span><span class="hljs-params">()</span></span> {
    r := gin.Default()
    r.Use(OpaMiddlware()) <span class="hljs-comment">// we are attaching the auth middleware here</span>

    r.GET(<span class="hljs-string">"/ping"</span>, <span class="hljs-function"><span class="hljs-keyword">func</span><span class="hljs-params">(c *gin.Context)</span></span> {
        <span class="hljs-comment">// do some logic with header</span>
        c.JSON(http.StatusOK, gin.H{
            <span class="hljs-string">"message"</span>: <span class="hljs-string">"pong"</span>,
        })
    })
    r.Run()
}
</code></pre>
<h3 id="heading-running-the-go-api">Running the Go API</h3>
<p>When we run the Go application and hit the <code>/ping</code> endpoint, we get the output below. (Header-printing statements were added to the logs for easier understanding.)</p>
<pre><code class="lang-go">$❯ <span class="hljs-keyword">go</span> run main.<span class="hljs-keyword">go</span>

.......
[GIN-debug] Listening and serving HTTP on :<span class="hljs-number">8080</span>
access: read
role: writer
[GIN] <span class="hljs-number">2023</span>/<span class="hljs-number">09</span>/<span class="hljs-number">06</span> - <span class="hljs-number">21</span>:<span class="hljs-number">42</span>:<span class="hljs-number">00</span> | <span class="hljs-number">403</span> |    <span class="hljs-number">5.589708</span>ms |       <span class="hljs-number">127.0</span><span class="hljs-number">.0</span><span class="hljs-number">.1</span> | GET      <span class="hljs-string">"/ping"</span>
access: read
role: admin
[GIN] <span class="hljs-number">2023</span>/<span class="hljs-number">09</span>/<span class="hljs-number">06</span> - <span class="hljs-number">21</span>:<span class="hljs-number">42</span>:<span class="hljs-number">10</span> | <span class="hljs-number">200</span> |    <span class="hljs-number">1.925834</span>ms |       <span class="hljs-number">127.0</span><span class="hljs-number">.0</span><span class="hljs-number">.1</span> | GET      <span class="hljs-string">"/ping"</span>
access: read
role: user
[GIN] <span class="hljs-number">2023</span>/<span class="hljs-number">09</span>/<span class="hljs-number">06</span> - <span class="hljs-number">21</span>:<span class="hljs-number">42</span>:<span class="hljs-number">15</span> | <span class="hljs-number">200</span> |       <span class="hljs-number">775.5</span>µs |       <span class="hljs-number">127.0</span><span class="hljs-number">.0</span><span class="hljs-number">.1</span> | GET      <span class="hljs-string">"/ping"</span>
access: write
role: user
[GIN] <span class="hljs-number">2023</span>/<span class="hljs-number">09</span>/<span class="hljs-number">06</span> - <span class="hljs-number">21</span>:<span class="hljs-number">42</span>:<span class="hljs-number">20</span> | <span class="hljs-number">403</span> |    <span class="hljs-number">3.746209</span>ms |       <span class="hljs-number">127.0</span><span class="hljs-number">.0</span><span class="hljs-number">.1</span> | GET      <span class="hljs-string">"/ping"</span>
</code></pre>
<p>From the console logs, we see that when invalid requests are sent, we get <code>403</code> errors as per the policy, and when valid requests are sent, we get back the response with status code <code>200</code>.</p>
<p>To be exact, the middleware extracts the <code>role</code> and <code>access</code> values from the request headers, queries OPA, and denies access if the policy evaluation fails.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1694035892465/c6b2d7f1-cfb2-421b-9b62-8bfb10b43f72.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-conclusion"><strong>Conclusion</strong></h2>
<p>In this post, we looked at using Open Policy Agent to externalize authorization logic from a Golang application. OPA provides a unified way to manage access policies across services using its declarative language Rego.</p>
<p>We saw how to:</p>
<ul>
<li><p>Author Rego policies for enforcing role-based access control</p>
</li>
<li><p>Test policies thoroughly using OPA's built-in test framework</p>
</li>
<li><p>Integrate OPA with a Golang API server using middleware</p>
</li>
<li><p>Evaluate policies on each request to make access decisions</p>
</li>
</ul>
<p>OPA integrates well with infrastructures like Kubernetes, allowing consistent policy enforcement across large distributed environments.</p>
<p>Using OPA results in more maintainable applications by separating policy code from business logic. Authorization policies can be modified independently without changing backend services. OPA provides a scalable way to manage fine-grained access control that evolves with application needs.</p>
<p>Hope this gives a good overview of securing Golang apps with OPA. :)</p>
<p><strong>Github Link</strong> - <a target="_blank" href="https://github.com/cksidharthan/opa-blog">https://github.com/cksidharthan/opa-blog</a></p>
]]></content:encoded></item><item><title><![CDATA[Understanding Hadolint: Dockerfile Linting Made Easy]]></title><description><![CDATA[In the world of containerization and DevOps, Docker has emerged as a transformative technology that enables developers to package their applications and their dependencies into portable, isolated environments known as containers. Dockerfiles are at t...]]></description><link>https://thebugshots.dev/understanding-hadolint-dockerfile-linting-made-easy</link><guid isPermaLink="true">https://thebugshots.dev/understanding-hadolint-dockerfile-linting-made-easy</guid><category><![CDATA[2Articles1Week]]></category><category><![CDATA[Docker]]></category><category><![CDATA[docker images]]></category><category><![CDATA[Linting]]></category><category><![CDATA[CI/CD]]></category><dc:creator><![CDATA[Sidharthan Chandrasekaran Kamaraj]]></dc:creator><pubDate>Tue, 29 Aug 2023 15:33:06 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1693323094138/e718a835-8aec-43e2-bdd5-21d5f80616ac.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In the world of containerization and DevOps, Docker has emerged as a transformative technology that enables developers to package their applications and their dependencies into portable, isolated environments known as containers. Dockerfiles are at the heart of this process, providing a blueprint for building these containers. However, writing Dockerfiles that are efficient, secure, and conform to best practices can be a challenging task. This is where <strong>Hadolint</strong> comes into play.</p>
<h2 id="heading-introducing-hadolint"><strong>Introducing Hadolint</strong></h2>
<p><strong>Hadolint</strong> is an open-source linter specifically designed to analyze Dockerfiles and identify issues, inconsistencies, and potential problems in them. The primary goal of Hadolint is to enforce best practices and guidelines for writing Dockerfiles, ensuring that the resulting containers are well-structured, secure, and optimized.</p>
<h2 id="heading-the-importance-of-linting-dockerfiles"><strong>The Importance of Linting Dockerfiles</strong></h2>
<p>Linting is a software development practice that involves using automated tools to analyze source code or configuration files for potential issues, deviations from best practices, and coding standards violations. Linting is particularly crucial in the context of Dockerfiles for several reasons:</p>
<ol>
<li><p><strong>Security</strong>: Docker containers are widely used in production environments, making security a top concern. A poorly written Dockerfile can inadvertently introduce security vulnerabilities, such as running processes as root or using outdated packages. Hadolint helps catch these security risks early in the development process.</p>
</li>
<li><p><strong>Efficiency</strong>: Optimized Dockerfiles lead to smaller image sizes and faster build times. Hadolint suggests improvements that can help streamline the build process, reducing the overall resource consumption and making the deployment pipeline more efficient.</p>
</li>
<li><p><strong>Consistency</strong>: Dockerfiles are often collaboratively written by different team members. Ensuring consistent formatting and adherence to best practices across the team is challenging. Hadolint acts as an impartial judge, helping maintain a uniform style and structure.</p>
</li>
</ol>
<h2 id="heading-features-and-benefits"><strong>Features and Benefits</strong></h2>
<h3 id="heading-1-static-analysis"><strong>1. Static Analysis:</strong></h3>
<p>Hadolint performs static analysis on Dockerfiles without actually executing them. This allows it to identify issues and potential problems without running a container, which is particularly useful for catching problems early in the development cycle.</p>
<h3 id="heading-2-customizable-rules"><strong>2. Customizable Rules:</strong></h3>
<p>Hadolint comes with a set of default rules that cover a wide range of common best practices. However, it also allows you to customize or extend these rules to align with your organization's specific requirements. This flexibility ensures that the linter remains adaptable to different project needs.</p>
<h3 id="heading-3-integration-with-cicd-pipelines"><strong>3. Integration with CI/CD Pipelines:</strong></h3>
<p>Hadolint can be seamlessly integrated into Continuous Integration (CI) and Continuous Deployment (CD) pipelines. By incorporating Hadolint checks into your pipeline, you can automatically detect and prevent Dockerfile issues before they reach the production environment.</p>
<h3 id="heading-4-dockerfile-styles"><strong>4. Dockerfile Styles:</strong></h3>
<p>Different projects might have different styles and conventions for writing Dockerfiles. Hadolint provides support for multiple Dockerfile styles, allowing you to tailor the linting process to match the coding standards of your project.</p>
<h3 id="heading-5-docker-image-integration"><strong>5. Docker Image Integration:</strong></h3>
<p>Hadolint can be run within a Docker container itself, ensuring that the linting environment matches the runtime environment. This prevents issues arising due to differences between the linting environment and the actual build environment.</p>
<h2 id="heading-getting-started-with-hadolint"><strong>Getting Started with Hadolint</strong></h2>
<p>Using Hadolint is straightforward and can be done in a few simple steps:</p>
<ol>
<li><p><strong>Installation</strong>:</p>
<p> Hadolint can be installed using various methods, including downloading pre-built binaries, using package managers like <a target="_blank" href="https://github.com/hadolint/hadolint/blob/master/README.md"><code>apt</code></a> or <a target="_blank" href="https://formulae.brew.sh/formula/hadolint"><code>brew</code></a>, or running it as a <a target="_blank" href="https://hub.docker.com/r/hadolint/hadolint">Docker container</a>.</p>
</li>
<li><p><strong>Basic Usage</strong>:</p>
<p> Running Hadolint on a Dockerfile is as simple as executing the following command:</p>
<pre><code class="lang-plaintext"> hadolint Dockerfile
</code></pre>
<p> Hadolint will analyze the specified Dockerfile and provide feedback on any issues it finds.</p>
</li>
<li><p><strong>Customizing Rules</strong>:</p>
<p> You can create a configuration file (usually named <code>.hadolint.yaml</code>) to customize Hadolint's rules. This allows you to enable, disable, or modify rules to suit your project's needs.</p>
</li>
<li><p><strong>Integration with CI/CD</strong>:</p>
<p> To integrate Hadolint into your CI/CD pipeline, you can add a step that runs Hadolint on your Dockerfiles as part of the build process. This ensures that Dockerfile issues are caught before the image is pushed to a registry or deployed.</p>
</li>
</ol>
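<p>As a concrete example of the customization step above, a minimal <code>.hadolint.yaml</code> might look like the following. The rule selections here are hypothetical; pick the codes that match your project's needs:</p>
<pre><code class="lang-yaml"># .hadolint.yaml
ignored:
  - DL3008          # allow unpinned apt-get versions in this project
failure-threshold: warning
trustedRegistries:
  - docker.io
</code></pre>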
<h2 id="heading-common-hadolint-rules"><strong>Common Hadolint Rules</strong></h2>
<p>Hadolint comes with a comprehensive set of rules, each targeting specific aspects of Dockerfile development. Some common rules include:</p>
<ul>
<li><p><strong>DL3000</strong>: Use absolute WORKDIR.</p>
</li>
<li><p><strong>DL3003</strong>: Use <code>WORKDIR</code> to switch to a directory.</p>
</li>
<li><p><strong>DL3007</strong>: Using <code>latest</code> is prone to errors; pin the image tag explicitly.</p>
</li>
<li><p><strong>DL4000</strong>: <code>MAINTAINER</code> is deprecated.</p>
</li>
<li><p><strong>DL3008</strong>: Pin versions in apt-get install.</p>
</li>
<li><p><strong>DL3015</strong>: Avoid additional packages by specifying <code>--no-install-recommends</code>.</p>
</li>
</ul>
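<p>For example, DL3008 would flag the first <code>RUN</code> instruction below, while the pinned form passes (the package version shown is a placeholder):</p>
<pre><code class="lang-dockerfile"># Flagged by DL3008: apt-get install without pinned versions
RUN apt-get update &amp;&amp; apt-get install -y curl

# Passes: version pinned explicitly
RUN apt-get update &amp;&amp; apt-get install -y curl=7.88.1-10
</code></pre>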
<h2 id="heading-final-thoughts"><strong>Final Thoughts</strong></h2>
<p>Dockerfiles are the backbone of containerization, and writing them correctly is essential for building secure, efficient, and reliable containers. Hadolint serves as a powerful tool to assist developers and DevOps engineers in this endeavor. By automating the process of identifying Dockerfile issues, Hadolint contributes to a smoother development workflow and ultimately helps in delivering high-quality containerized applications.</p>
<p>As containerization continues to gain momentum in the software industry, tools like Hadolint play a vital role in ensuring that best practices are followed, security is maintained, and the containerization process remains efficient. By incorporating Hadolint into your development process, you can elevate the quality of your Dockerfiles and, consequently, the containers they produce.</p>
]]></content:encoded></item><item><title><![CDATA[Packaging a Golang Application Using Multi-Stage Docker Builds]]></title><description><![CDATA[Packaging a Golang Application Using Multi-Stage Docker Builds is a technique that allows developers to optimize Docker images of Golang applications by reducing the final image size. This approach is particularly useful for Go applications because i...]]></description><link>https://thebugshots.dev/packaging-a-golang-application-using-multi-stage-docker-builds</link><guid isPermaLink="true">https://thebugshots.dev/packaging-a-golang-application-using-multi-stage-docker-builds</guid><category><![CDATA[2Articles1Week]]></category><category><![CDATA[golang]]></category><category><![CDATA[Docker]]></category><category><![CDATA[#dockerimage]]></category><category><![CDATA[Devops]]></category><dc:creator><![CDATA[Sidharthan Chandrasekaran Kamaraj]]></dc:creator><pubDate>Sat, 26 Aug 2023 14:45:17 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1693061030531/4d97ebb4-bc5d-4062-b128-27bf24bdf31d.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Packaging a Golang Application Using Multi-Stage Docker Builds is a technique that allows developers to optimize Docker images of Golang applications by reducing the final image size. This approach is particularly useful for Go applications because it eliminates the need for the compiler and dependencies at runtime, resulting in smaller and more efficient Docker images.</p>
<h2 id="heading-single-build-stage">Single Build Stage</h2>
<p>Traditionally, a Dockerfile for a Go application would look something like this:</p>
<pre><code class="lang-dockerfile"><span class="hljs-keyword">FROM</span> golang:<span class="hljs-number">1.21</span>.<span class="hljs-number">0</span>-alpine3.<span class="hljs-number">17</span>

<span class="hljs-keyword">WORKDIR</span><span class="bash"> /app</span>
<span class="hljs-keyword">COPY</span><span class="bash"> . .</span>

<span class="hljs-keyword">RUN</span><span class="bash"> go build -o goApp .</span>

<span class="hljs-keyword">CMD</span><span class="bash"> [<span class="hljs-string">"/app/goApp"</span>]</span>
</code></pre>
<p>In this approach, the Dockerfile uses the <code>golang:alpine</code> image as the base image to compile the Go application. The code is copied into the image, and then the <code>go build</code> command is executed to build the application. Finally, the <code>CMD</code> instruction specifies the command to run the application.</p>
<p>However, this approach has a downside. It includes unnecessary bloat from the Go compiler, build tools, and dependencies in the final image, resulting in a larger image size. For a simple Go application, the image size can be over 300MB, which is not ideal for production deployments.</p>
<h2 id="heading-multi-stage-build">Multi-Stage Build</h2>
<p>To address this issue, multi-stage Docker builds can be used. With multi-stage builds, the Dockerfile is divided into multiple stages, each with its own base image and set of instructions. The final image only includes the necessary artifacts from the build stage, resulting in a smaller and more optimized image.</p>
<p>Here is an example of a multi-stage Dockerfile for a Go application:</p>
<pre><code class="lang-dockerfile"><span class="hljs-comment"># Build stage</span>
<span class="hljs-keyword">FROM</span> golang:<span class="hljs-number">1.21</span>.<span class="hljs-number">0</span>-alpine3.<span class="hljs-number">17</span> AS build
<span class="hljs-keyword">WORKDIR</span><span class="bash"> /app</span>
<span class="hljs-keyword">COPY</span><span class="bash"> . .</span>
<span class="hljs-keyword">RUN</span><span class="bash"> go build -o main .</span>

<span class="hljs-comment"># Run stage  </span>
<span class="hljs-keyword">FROM</span> alpine:<span class="hljs-number">3.18</span>.<span class="hljs-number">3</span>
<span class="hljs-keyword">WORKDIR</span><span class="bash"> /app</span>
<span class="hljs-keyword">COPY</span><span class="bash"> --from=build /app/main .</span>
<span class="hljs-keyword">CMD</span><span class="bash"> [<span class="hljs-string">"/app/main"</span>]</span>
</code></pre>
<p>In this example, the Dockerfile consists of two stages: <code>Build Stage</code> and <code>Run Stage</code>.</p>
<h3 id="heading-build-stage">Build stage</h3>
<p>The build stage uses the <code>golang alpine</code> image as the base image and compiles the Go application. If external dependencies are required for the build, they can be added in this stage.</p>
<h3 id="heading-run-stage">Run stage</h3>
<p>The resulting binary is then copied into the run stage, which uses the <code>alpine</code> image as the base image. The final image only includes the built binary and its dependencies, resulting in a significantly smaller image size.</p>
<h2 id="heading-benefits-of-multi-stage-docker-build">Benefits of Multi-stage docker build</h2>
<p>With multi-stage builds, the runtime image size can be reduced to as little as 15MB, compared to over 300MB in the traditional approach. This reduction in image size has several benefits, including faster image pull and deployment times, reduced storage requirements, and improved overall performance.</p>
<h2 id="heading-further-optimization">Further Optimization</h2>
<p>For even further optimization, the Alpine runtime stage can be replaced with a <code>scratch</code> image. The <code>scratch</code> image is a special Docker image that is completely empty, with no operating system or libraries included. This approach further reduces the image size to around 10MB. However, it comes at the cost of losing the tools and package manager provided by the Alpine base image. Because <code>scratch</code> contains no libc, the Go binary must also be statically linked (for example, built with <code>CGO_ENABLED=0</code>).</p>
<p>Here is an example of a Dockerfile using the <code>scratch</code> image:</p>
<pre><code class="lang-dockerfile"><span class="hljs-comment"># Build stage</span>
<span class="hljs-keyword">FROM</span> golang:<span class="hljs-number">1.21</span>.<span class="hljs-number">0</span>-alpine3.<span class="hljs-number">17</span> AS build
<span class="hljs-keyword">WORKDIR</span><span class="bash"> /app</span>
<span class="hljs-keyword">COPY</span><span class="bash"> . .</span>
<span class="hljs-keyword">RUN</span><span class="bash"> go build -o main .</span>

<span class="hljs-comment"># Run stage  </span>
<span class="hljs-keyword">FROM</span> scratch
<span class="hljs-keyword">WORKDIR</span><span class="bash"> /app</span>
<span class="hljs-keyword">COPY</span><span class="bash"> --from=build /app/main .</span>
<span class="hljs-keyword">CMD</span><span class="bash"> [<span class="hljs-string">"/app/main"</span>]</span>
</code></pre>
<h2 id="heading-summary">Summary</h2>
<p>In summary, multi-stage Docker builds are a powerful technique for optimizing Dockerfiles, especially for Go applications. By separating the build tools and dependencies from the runtime content, multi-stage builds significantly reduce the final image size. This reduction in image size has numerous benefits, including improved performance, faster deployment times, and reduced storage requirements.</p>
<p>When using multi-stage builds for Go applications, it is possible to achieve a final image size that is over 90% smaller compared to a single-stage build. Additionally, by using the <code>scratch</code> image as the base image, developers can further minimize the container footprint, resulting in a truly minimal runtime image.</p>
<p>Overall, multi-stage Docker builds are a valuable tool in the developer's toolbox for creating efficient and optimized Docker images. They are particularly beneficial for compiled languages like Go, where the elimination of unnecessary build artifacts can have a significant impact on the final image size. Give multi-stage builds a try for your next Go application and experience the benefits of smaller and more efficient Docker images.</p>
]]></content:encoded></item><item><title><![CDATA[Generating Fake Data for Testing in Go with GoFakeIt]]></title><description><![CDATA[When writing tests, sample data is essential. But manually mocking up data for every test is tiring and messy. That's where the GoFakeIt Go library comes in handy!
In this post, we'll explore how GoFakeIt can effortlessly generate realistic fake data...]]></description><link>https://thebugshots.dev/generating-fake-data-for-testing-in-go-with-gofakeit</link><guid isPermaLink="true">https://thebugshots.dev/generating-fake-data-for-testing-in-go-with-gofakeit</guid><category><![CDATA[faker]]></category><category><![CDATA[golang]]></category><category><![CDATA[Mocking]]></category><category><![CDATA[Programming Blogs]]></category><category><![CDATA[Testing]]></category><dc:creator><![CDATA[Sidharthan Chandrasekaran Kamaraj]]></dc:creator><pubDate>Fri, 25 Aug 2023 09:22:23 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1692955211050/c6e155a4-f82c-47ed-8e5d-16502e9e6b20.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>When writing tests, sample data is essential. But manually mocking up data for every test is tiring and messy. That's where the GoFakeIt Go library comes in handy!</p>
<p>In this post, we'll explore how GoFakeIt can effortlessly generate realistic fake data to fuel your tests.</p>
<h2 id="heading-why-use-a-mock-data-library">Why Use a Mock Data Library?</h2>
<p>Mock data is crucial for writing robust unit and integration tests:</p>
<ul>
<li><p>Allows testing logic and behavior independently from databases and services.</p>
</li>
<li><p>Avoids polluting your dev environment with fake data.</p>
</li>
<li><p>Data can be randomly generated for complete coverage.</p>
</li>
<li><p>Speeds up test execution since no dependencies are required.</p>
</li>
</ul>
<p>Manually defining sample data for every test is cumbersome. The data is often stale and lacks variety.</p>
<p>A dedicated mock data library like GoFakeIt solves these issues by programmatically generating fresh data on the fly.</p>
<h2 id="heading-overview-of-gofakeit-for-go">Overview of GoFakeIt for Go</h2>
<p>GoFakeIt is a popular mock data library for Go. It provides data generators for common scenarios like names, addresses, companies, users, transactions, and more.</p>
<p>Some key features:</p>
<ul>
<li><p>Over 100 data provider methods (and growing).</p>
</li>
<li><p>Generates random, unique data every time.</p>
</li>
<li><p>Localization support for different locales.</p>
</li>
<li><p>Easy to extend with custom data providers.</p>
</li>
<li><p>Struct and slice population helpers.</p>
</li>
<li><p>Thorough test coverage.</p>
</li>
<li><p>MIT license.</p>
</li>
</ul>
<p>The GoFakeIt repo is actively maintained by Brian Voe and has over 500 contributors.</p>
<h2 id="heading-installing-gofakeit">Installing GoFakeIt</h2>
<p>To install GoFakeIt, simply run:</p>
<pre><code class="lang-go"><span class="hljs-keyword">go</span> get github.com/brianvoe/gofakeit/v6
</code></pre>
<p>Then import it:</p>
<pre><code class="lang-go"><span class="hljs-keyword">import</span> <span class="hljs-string">"github.com/brianvoe/gofakeit/v6"</span>
</code></pre>
<p>Let's look at how to use it.</p>
<h2 id="heading-generating-random-fake-data">Generating Random Fake Data</h2>
<p>The basic pattern is to call GoFakeIt data provider methods:</p>
<pre><code class="lang-go">name := gofakeit.Name()
email := gofakeit.Email()
color := gofakeit.Color()
</code></pre>
<p>This gives you a fresh, randomized value for each call.</p>
<p>Some handy methods include:</p>
<ul>
<li><p><strong>Name()</strong> - Full fake name</p>
</li>
<li><p><strong>Email()</strong> - Fake email address</p>
</li>
<li><p><strong>Phone()</strong> - Random phone number</p>
</li>
<li><p><strong>Address()</strong> - Fictional street address</p>
</li>
<li><p><strong>BS()</strong> - Random bullshit (for filler text!)</p>
</li>
<li><p><strong>Company()</strong> - Fake company name</p>
</li>
<li><p><strong>Sentence()</strong> - Random sentence</p>
</li>
<li><p><strong>Paragraph()</strong> - Multiple paragraphs of text</p>
</li>
</ul>
<p>Check the <a target="_blank" href="https://pkg.go.dev/github.com/brianvoe/gofakeit/v6">godoc</a> for the full set of data providers available.</p>
<h2 id="heading-generating-structured-data">Generating Structured Data</h2>
<p>For more structured data, GoFakeIt can populate:</p>
<ul>
<li><p>Structs</p>
</li>
<li><p>Slices</p>
</li>
<li><p>Maps</p>
</li>
<li><p>Arrays</p>
</li>
</ul>
<p>Each field or element is filled with a random value appropriate to its type.</p>
<p><strong>NOTE</strong>: <code>fake</code> struct tags let you control exactly what GoFakeIt generates for each field; a field without a tag simply receives a random value of its type.</p>
<pre><code class="lang-go"><span class="hljs-keyword">type</span> User  <span class="hljs-keyword">struct</span> {
    ID    <span class="hljs-keyword">int</span>    <span class="hljs-string">`fake:"{number:1,100}"`</span>
    Name  <span class="hljs-keyword">string</span> <span class="hljs-string">`fake:"{firstname}"`</span>
    Email <span class="hljs-keyword">string</span> <span class="hljs-string">`fake:"{email}"`</span>
}

<span class="hljs-keyword">var</span> user User
gofakeit.Struct(&amp;user)
</code></pre>
<p>Now <code>user</code> will contain random fake data!</p>
<p>To populate a slice, pass a pointer to it:</p>
<pre><code class="lang-go"><span class="hljs-keyword">var</span> users []User
gofakeit.Slice(&amp;users)
</code></pre>
<p>This makes mocking datasets a breeze.</p>
<h2 id="heading-seeding-randomness">Seeding Randomness</h2>
<p>By default, each call generates unpredictable data.</p>
<p>To generate repeatable data, seed the RNG:</p>
<pre><code class="lang-go">gofakeit.Seed(<span class="hljs-number">1234</span>) <span class="hljs-comment">// any int64 number</span>

<span class="hljs-comment">// Repeatable results now</span>
name1 := gofakeit.Name() 
name2 := gofakeit.Name()
</code></pre>
<p>Useful for regenerating the same fake datasets.</p>
<h2 id="heading-examples-and-recipes">Examples and Recipes</h2>
<p>GoFakeIt is handy for mocking all sorts of data:</p>
<ul>
<li><p>User accounts</p>
</li>
<li><p>Blog posts and comments</p>
</li>
<li><p>Product listings</p>
</li>
<li><p>Transaction histories</p>
</li>
<li><p>File uploads</p>
</li>
<li><p>API responses</p>
</li>
</ul>
<p>Some examples:</p>
<pre><code class="lang-go"><span class="hljs-keyword">package</span> main

<span class="hljs-keyword">import</span> (
    <span class="hljs-string">"fmt"</span>
    <span class="hljs-string">"github.com/brianvoe/gofakeit/v6"</span>
)

<span class="hljs-keyword">type</span> User <span class="hljs-keyword">struct</span> {
    ID    <span class="hljs-keyword">int</span>    <span class="hljs-string">`fake:"{number:1,100}"`</span>
    Name  <span class="hljs-keyword">string</span> <span class="hljs-string">`fake:"{firstname}"`</span>
    Email <span class="hljs-keyword">string</span> <span class="hljs-string">`fake:"{email}"`</span>
}

<span class="hljs-keyword">type</span> Product <span class="hljs-keyword">struct</span> {
    ID          <span class="hljs-keyword">int</span>    <span class="hljs-string">`fake:"{number:1,100}"`</span>
    Name        <span class="hljs-keyword">string</span> <span class="hljs-string">`fake:"{firstname}"`</span>
    Description <span class="hljs-keyword">string</span> <span class="hljs-string">`fake:"{sentence:30}"`</span>
    Price       <span class="hljs-keyword">int</span>    <span class="hljs-string">`fake:"{number:1,100}"`</span>
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">main</span><span class="hljs-params">()</span></span> {
    <span class="hljs-comment">// Fake users</span>
    <span class="hljs-keyword">var</span> user User
    gofakeit.Struct(&amp;user)

    fmt.Print(user)
    <span class="hljs-comment">// Fake product listings</span>

    <span class="hljs-keyword">var</span> product Product
    gofakeit.Struct(&amp;product)

    fmt.Print(product)

    <span class="hljs-comment">// Fake upload files</span>
    <span class="hljs-keyword">const</span> numFiles = <span class="hljs-number">5</span>
    <span class="hljs-keyword">var</span> files []<span class="hljs-keyword">string</span>
    <span class="hljs-keyword">for</span> i := <span class="hljs-number">0</span>; i &lt; numFiles; i++ {
        files = <span class="hljs-built_in">append</span>(files, gofakeit.ImageURL(<span class="hljs-number">50</span>, <span class="hljs-number">50</span>))
    }

    fmt.Print(files)
}
</code></pre>
<p>The possibilities are endless!</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>GoFakeIt for Go makes generating mock data for tests almost too easy. No more hand-rolled stubs that go stale!</p>
<p>Some key benefits:</p>
<ul>
<li><p>Huge time savings compared to manual mocking.</p>
</li>
<li><p>Randomized data that avoids stale samples.</p>
</li>
<li><p>Localization for region-specific data.</p>
</li>
<li><p>Extensible via custom providers.</p>
</li>
<li><p>Helpers for populating structs and slices.</p>
</li>
</ul>
<p>Proper mock data is crucial for writing robust Go tests, and GoFakeIt delivers a comprehensive set of data providers in an easy-to-use package.</p>
<p>I highly recommend installing GoFakeIt in your next Go project. Browse the full docs and examples to see all it can do.</p>
<p>Have you used GoFakeIt in your Go code? Are there any other mock data tools you recommend? Let me know in the comments!</p>
]]></content:encoded></item><item><title><![CDATA[Flexible Caching in Go with Interfaces]]></title><description><![CDATA[Caching is a common technique in programming to improve performance by storing expensive computations or IO results for fast lookup. In this post, we'll look at how Go's interfaces enable building flexible and extensible caches.
Defining a Cache Inte...]]></description><link>https://thebugshots.dev/flexible-caching-in-go-with-interfaces</link><guid isPermaLink="true">https://thebugshots.dev/flexible-caching-in-go-with-interfaces</guid><category><![CDATA[golang]]></category><category><![CDATA[Go Language]]></category><category><![CDATA[Interfaces Go]]></category><category><![CDATA[Interfaces]]></category><category><![CDATA[Programming Blogs]]></category><dc:creator><![CDATA[Sidharthan Chandrasekaran Kamaraj]]></dc:creator><pubDate>Thu, 24 Aug 2023 07:08:03 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1692860498030/0ba1723b-aac7-4dee-bdb4-0410a938127d.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Caching is a common technique in programming to improve performance by storing expensive computations or IO results for fast lookup. In this post, we'll look at how Go's interfaces enable building flexible and extensible caches.</p>
<h2 id="heading-defining-a-cache-interface">Defining a Cache Interface</h2>
<p>First, let's define an interface specifying cache capabilities:</p>
<pre><code class="lang-go"><span class="hljs-keyword">type</span> Cache <span class="hljs-keyword">interface</span> {
  Get(key <span class="hljs-keyword">string</span>) <span class="hljs-keyword">interface</span>{}
  Set(key <span class="hljs-keyword">string</span>, value <span class="hljs-keyword">interface</span>{})
}
</code></pre>
<p>This <code>Cache</code> interface has two methods - <code>Get</code> to retrieve a cached value by key, and <code>Set</code> to store a key-value pair.</p>
<p>By defining an interface, we decouple the cache usage from a specific implementation. Any cache library that implements these methods satisfies the interface.</p>
<h2 id="heading-a-simple-memory-cache">A Simple Memory Cache</h2>
<p>Let's implement a simple in-memory cache conforming to the interface:</p>
<pre><code class="lang-go"><span class="hljs-keyword">type</span> InMemoryCache <span class="hljs-keyword">struct</span> {
  store <span class="hljs-keyword">map</span>[<span class="hljs-keyword">string</span>]<span class="hljs-keyword">interface</span>{} 
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-params">(c *InMemoryCache)</span> <span class="hljs-title">Get</span><span class="hljs-params">(key <span class="hljs-keyword">string</span>)</span> <span class="hljs-title">interface</span></span>{} {
  <span class="hljs-keyword">return</span> c.store[key] 
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-params">(c *InMemoryCache)</span> <span class="hljs-title">Set</span><span class="hljs-params">(key <span class="hljs-keyword">string</span>, value <span class="hljs-keyword">interface</span>{})</span></span> {
  c.store[key] = value
}
</code></pre>
<p>The <code>InMemoryCache</code> uses a Go map to store entries in memory. It implements the Get and Set methods to manage entries in the map.</p>
<h2 id="heading-using-the-cache">Using the Cache</h2>
<p>We can now easily use the cache:</p>
<pre><code class="lang-go">cache := InMemoryCache{<span class="hljs-built_in">make</span>(<span class="hljs-keyword">map</span>[<span class="hljs-keyword">string</span>]<span class="hljs-keyword">interface</span>{})}

cache.Set(<span class="hljs-string">"foo"</span>, <span class="hljs-string">"bar"</span>)

value := cache.Get(<span class="hljs-string">"foo"</span>) <span class="hljs-comment">// "bar"</span>
</code></pre>
<p>The interface allows us to call <code>Set</code> and <code>Get</code> without worrying about the implementation.</p>
<h2 id="heading-swapping-cache-implementations">Swapping Cache Implementations</h2>
<p>Now let's say we want to use Redis instead of in-memory. We can create a <code>RedisCache</code> implementing the same interface:</p>
<pre><code class="lang-go"><span class="hljs-keyword">type</span> RedisCache <span class="hljs-keyword">struct</span> {
  client *redis.Client 
}

<span class="hljs-comment">// Redis Get/Set implementation</span>
</code></pre>
<p>And swap it in:</p>
<pre><code class="lang-go">cache := RedisCache{redisClient} 

cache.Set(<span class="hljs-string">"foo"</span>,<span class="hljs-string">"bar"</span>) <span class="hljs-comment">// Now uses Redis</span>
</code></pre>
<p>The client code remains unchanged. This demonstrates the flexibility provided by interfaces.</p>
<h2 id="heading-benefits-of-interface-based-caching">Benefits of Interface-based Caching</h2>
<p>Using interface-based caching gives several benefits:</p>
<ul>
<li><p><strong>Decoupling</strong> - Client code isn't coupled to a specific cache library.</p>
</li>
<li><p><strong>Maintainability</strong> - The cache implementation can be changed without modifying the client code.</p>
</li>
<li><p><strong>Testability</strong> - Caches can be stubbed or mocked for testing.</p>
</li>
<li><p><strong>Reusability</strong> - Generic cache interface enables writing reusable caching logic.</p>
</li>
</ul>
<h2 id="heading-summary">Summary</h2>
<p>Interfaces in Go help in building flexible libraries and applications. Defining simple interfaces makes code more:</p>
<ul>
<li><p><strong>Modular</strong> - Different implementations can be plugged in.</p>
</li>
<li><p><strong>Extensible</strong> - New implementations can be added without disruption.</p>
</li>
<li><p><strong>Maintainable</strong> - Components can be swapped for easy maintenance.</p>
</li>
<li><p><strong>Testable</strong> - Components can be stubbed and mocked.</p>
</li>
</ul>
<p>By providing powerful abstraction with minimal overhead, interfaces are invaluable in Go for creating loosely coupled and scalable systems.</p>
]]></content:encoded></item><item><title><![CDATA[Postbot: A Postman AI Assistant]]></title><description><![CDATA[In the dynamic world of software development, APIs (Application Programming Interfaces) play a crucial role in enabling seamless communication between different applications and services. As developers, we strive to build robust, efficient, and error...]]></description><link>https://thebugshots.dev/postbot-a-postman-ai-assistant</link><guid isPermaLink="true">https://thebugshots.dev/postbot-a-postman-ai-assistant</guid><category><![CDATA[Postman]]></category><category><![CDATA[API TESTING]]></category><category><![CDATA[#PostmanAPI]]></category><category><![CDATA[automation testing ]]></category><category><![CDATA[postbot]]></category><dc:creator><![CDATA[Sidharthan Chandrasekaran Kamaraj]]></dc:creator><pubDate>Tue, 25 Jul 2023 15:38:33 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1690299453719/1792ce1a-9f68-4f17-9344-6f94e17ece18.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In the dynamic world of software development, APIs (Application Programming Interfaces) play a crucial role in enabling seamless communication between different applications and services. As developers, we strive to build robust, efficient, and error-free APIs that power our applications and services. To ensure the reliability of these APIs, rigorous testing is essential.</p>
<p>Enter Postbot, your revolutionary AI-powered Postman API testing assistant. Postbot is set to transform the way developers approach API testing, making the process faster, more accurate, and downright effortless. Gone are the days of manually testing each endpoint and meticulously recording results; Postbot takes the reins, unleashing the true potential of automation and artificial intelligence.</p>
<p>In this blog post, we'll delve into the world of Postbot, exploring its innovative features, how it streamlines API testing, and how it can empower developers to ship high-quality applications with confidence.</p>
<h3 id="heading-automated-test-case-generation">Automated Test case generation</h3>
<p>When you integrate Postbot into your Postman workflow, it instantly scans and analyzes your API endpoints, understanding their structure, expected behavior, and potential inputs and outputs. This deep understanding is made possible by Postbot's sophisticated AI engine, which is trained on vast amounts of data and possesses a comprehensive knowledge of API testing best practices.</p>
<p>Once Postbot has gathered sufficient information about your APIs, it proceeds to automatically generate test cases based on the insights gained from its analysis.</p>
<h3 id="heading-steps">Steps</h3>
<ul>
<li><p>Open your Postman collection, click the more actions icon, and select Generate tests.</p>
</li>
<li><p>This opens a tab that lists all the requests in a table. Now click on Generate tests at the top right. This triggers Postbot, which analyses all the requests in your collection and generates test cases for you.</p>
</li>
<li><p>You can inspect the generated test cases, delete unwanted ones, and save the changes by clicking the Save tests button at the top right.</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1690298749213/171332e1-24c0-4e04-b5fb-d2d1b4fab652.gif" alt class="image--center mx-auto" /></p>
<h3 id="heading-generating-test-cases-for-individual-requests">Generating Test cases for Individual requests</h3>
<p>In addition to generating test cases for the entire collection, you can also generate test cases for individual API requests. To do this, navigate to the Tests tab of a request and click on the <strong>Script with Postbot</strong> option. It opens a ChatGPT-like prompt window where you can ask Postbot to write test cases, visualize responses, fix tests, and more.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1690299162215/a0444867-7fa0-4d74-b422-d1e001318f8f.gif" alt class="image--center mx-auto" /></p>
<h3 id="heading-conclusion">Conclusion</h3>
<p>Postbot represents a significant step forward in the world of API testing. By harnessing the power of AI, Postman has created a powerful assistant capable of streamlining API testing workflows, improving efficiency, and ensuring higher API quality. While there may be some challenges to overcome, the benefits it offers to development teams make it a compelling tool to consider integrating into your API testing arsenal. As AI technology continues to advance, we can expect even more intelligent and capable AI assistants like Postbot to reshape the future of software testing and development.</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://youtu.be/vTm0EiFmpHY">https://youtu.be/vTm0EiFmpHY</a></div>
]]></content:encoded></item><item><title><![CDATA[Understanding Value and Pointer Receivers in Golang]]></title><description><![CDATA[Go (often referred to as Golang) is a powerful and efficient programming language that provides a unique feature called receivers for methods. Receivers allow you to associate a method with a type, and they come in two flavors: value receivers and po...]]></description><link>https://thebugshots.dev/understanding-value-and-pointer-receivers-in-golang</link><guid isPermaLink="true">https://thebugshots.dev/understanding-value-and-pointer-receivers-in-golang</guid><category><![CDATA[golang]]></category><category><![CDATA[Go Language]]></category><category><![CDATA[Programming Tips]]></category><category><![CDATA[Programming Blogs]]></category><dc:creator><![CDATA[Sidharthan Chandrasekaran Kamaraj]]></dc:creator><pubDate>Mon, 24 Jul 2023 20:15:27 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1690229146419/d4383469-197f-4581-8f62-fba94b4e4cd1.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Go (often referred to as Golang) is a powerful and efficient programming language that provides a unique feature called receivers for methods. Receivers allow you to associate a method with a type, and they come in two flavors: <strong>value receivers</strong> and <strong>pointer receivers</strong>. Understanding the differences between these two types of receivers is crucial for designing efficient and correct Go programs. In this blog post, we'll dive deep into Golang's value and pointer receivers, their characteristics, use cases, and the implications of choosing one over the other.</p>
<h2 id="heading-what-are-receivers"><strong>What are Receivers</strong></h2>
<p>In Go, a receiver is a parameter of a method that binds the method to a specific type. This is similar to what is commonly known as "this" or "self" in other programming languages. By using a receiver, you can define methods that operate on instances of the associated type. Receivers can be applied to both value and pointer types.</p>
<h3 id="heading-value-receivers"><strong>Value Receivers</strong></h3>
<p>A value receiver is a method that operates on a copy of the instance of the associated type. When you define a method with a value receiver, any modifications made to the receiver inside the method will not affect the original instance. Value receivers are represented by using the type without an asterisk (e.g., <code>func (v MyType) MethodName()</code>).</p>
<p>Here's an example of a value receiver method in Go:</p>
<pre><code class="lang-go"><span class="hljs-keyword">type</span> Circle <span class="hljs-keyword">struct</span> {
    radius <span class="hljs-keyword">float64</span>
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-params">(c Circle)</span> <span class="hljs-title">Area</span><span class="hljs-params">()</span> <span class="hljs-title">float64</span></span> {
    <span class="hljs-keyword">return</span> <span class="hljs-number">3.14</span> * c.radius * c.radius
}
</code></pre>
<h2 id="heading-pointer-receivers"><strong>Pointer Receivers</strong></h2>
<p>A pointer receiver is a method that operates directly on the instance of the associated type. When you define a method with a pointer receiver, any modifications made to the receiver inside the method will directly affect the original instance. Pointer receivers are represented by using the type with an asterisk (e.g., <code>func (p *MyType) MethodName()</code>).</p>
<p>Here's an example of a pointer receiver method in Go</p>
<pre><code class="lang-go"><span class="hljs-keyword">type</span> Counter <span class="hljs-keyword">struct</span> {
    count <span class="hljs-keyword">int</span>
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-params">(c *Counter)</span> <span class="hljs-title">Increment</span><span class="hljs-params">()</span></span> {
    c.count++
}
</code></pre>
<h3 id="heading-choosing-between-value-and-pointer-receivers"><strong>Choosing Between Value and Pointer Receivers</strong></h3>
<p>The choice between value and pointer receivers depends on the use case and the behavior you want to achieve. Here are some considerations:</p>
<ol>
<li><p><strong>Value Receivers:</strong></p>
<ul>
<li><p>Use value receivers when the method doesn't need to modify the instance's state and operates purely on a copy of the instance.</p>
</li>
<li><p>Value receivers are ideal for methods that are read-only and don't mutate the internal state of the type.</p>
</li>
</ul>
</li>
<li><p><strong>Pointer Receivers:</strong></p>
<ul>
<li><p>Use pointer receivers when the method needs to modify the instance's state directly.</p>
</li>
<li><p>Pointer receivers are essential when you want to modify the underlying state of a struct or any other type.</p>
</li>
</ul>
</li>
</ol>
<h3 id="heading-performance-implications"><strong>Performance Implications</strong></h3>
<p>There is a performance difference between value and pointer receivers. When you use a value receiver, a copy of the instance is made for the method to operate on, which can lead to more memory usage and potentially slower execution. On the other hand, pointer receivers directly operate on the instance, which is more memory-efficient and can result in faster execution, especially for large structs.</p>
<h3 id="heading-guidelines-for-receivers"><strong>Guidelines for Receivers</strong></h3>
<ul>
<li><p>Use value receivers for read-only methods that don't modify the instance's state.</p>
</li>
<li><p>Use pointer receivers when you need to modify the instance's state directly or when the method operates on a large struct to avoid unnecessary copying.</p>
</li>
<li><p>For consistency, consider using either value or pointer receivers consistently across methods of a type.</p>
</li>
</ul>
<h2 id="heading-conclusion"><strong>Conclusion</strong></h2>
<p>In Go, value and pointer receivers provide a way to define methods that operate on instances of a type. Value receivers make copies of the instance to operate on, while pointer receivers directly operate on the instance. Choosing the right receiver type depends on the use case, performance considerations, and whether you need to modify the instance's state. Understanding the differences between value and pointer receivers is essential for writing efficient and correct Go programs.</p>
<p>In this blog post, we've explored the concept of value and pointer receivers in Go, their characteristics, use cases, and the implications of choosing one over the other. I hope it has shed light on the significance of each receiver type, enabling you to make informed decisions when designing methods for your custom types.</p>
]]></content:encoded></item><item><title><![CDATA[Demystifying new() and make() Functions in Go]]></title><description><![CDATA[Go (or Golang) is a modern, statically typed, compiled programming language designed for building scalable, concurrent, and efficient software. It comes with various built-in functions and features that help developers write concise and performant co...]]></description><link>https://thebugshots.dev/demystifying-new-and-make-functions-in-go</link><guid isPermaLink="true">https://thebugshots.dev/demystifying-new-and-make-functions-in-go</guid><category><![CDATA[golang]]></category><category><![CDATA[Go Language]]></category><category><![CDATA[Programming Blogs]]></category><category><![CDATA[Programming Tips]]></category><category><![CDATA[Go]]></category><dc:creator><![CDATA[Sidharthan Chandrasekaran Kamaraj]]></dc:creator><pubDate>Sun, 23 Jul 2023 05:28:02 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1690089796045/72a16404-b06c-48ab-ab69-08498b1333fb.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Go (or Golang) is a modern, statically typed, compiled programming language designed for building scalable, concurrent, and efficient software. It comes with various built-in functions and features that help developers write concise and performant code. Among these functions are <code>new()</code> and <code>make()</code>, which might seem similar at first but serve different purposes and are crucial for memory allocation and data initialization in Go.</p>
<p>In this blog post, we will explore the differences between <code>new()</code> and <code>make()</code> functions and understand when and how to use them effectively.</p>
<h2 id="heading-new-and-make-functions"><code>new()</code> <strong>and</strong> <code>make()</code> Functions</h2>
<p>Both <code>new()</code> and <code>make()</code> are built-in functions in Go, used to allocate memory. However, they are used for different data types and scenarios:</p>
<ol>
<li><p><code>new()</code> function:</p>
<ul>
<li><p><code>new()</code> is used to allocate memory for value types (e.g., integers, floats, structs) and returns a pointer to the newly allocated zeroed value.</p>
</li>
<li><p>It takes a single argument, which is a type, and returns a pointer to that type.</p>
</li>
</ul>
</li>
<li><p><code>make()</code> function:</p>
<ul>
<li><p><code>make()</code> is used to create and initialize slices, maps, and channels, which are reference types in Go.</p>
</li>
<li><p>It takes two or three arguments, depending on the type, and returns an initialized (not zeroed) value ready for use.</p>
</li>
</ul>
</li>
</ol>
<h2 id="heading-understanding-new-function"><strong>Understanding</strong> <code>new()</code> Function</h2>
<p>The syntax of the <code>new()</code> function is straightforward as shown below.</p>
<pre><code class="lang-go"><span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">new</span><span class="hljs-params">(Type)</span> *<span class="hljs-title">Type</span></span>
</code></pre>
<p>Here, <code>Type</code> represents the type of the value we want to allocate memory for. Let's see an example of how to use <code>new()</code></p>
<p>In this example, we create a new instance of the <code>Person</code> struct using <code>new()</code> and then assign values to its fields using the pointer.</p>
<pre><code class="lang-go"><span class="hljs-keyword">package</span> main

<span class="hljs-keyword">import</span> <span class="hljs-string">"fmt"</span>

<span class="hljs-keyword">type</span> Person <span class="hljs-keyword">struct</span> {
    Name <span class="hljs-keyword">string</span>
    Age  <span class="hljs-keyword">int</span>
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">main</span><span class="hljs-params">()</span></span> {
    <span class="hljs-comment">// Using new() to allocate memory for a Person struct</span>
    p := <span class="hljs-built_in">new</span>(Person)

    fmt.Printf(<span class="hljs-string">"%T\n"</span>, p)

    <span class="hljs-comment">// Accessing struct fields using the pointer</span>
    p.Name = <span class="hljs-string">"Alice"</span>
    p.Age = <span class="hljs-number">30</span>

    <span class="hljs-comment">// Displaying the values</span>
    fmt.Println(<span class="hljs-string">"Name:"</span>, p.Name)
    fmt.Println(<span class="hljs-string">"Age:"</span>, p.Age)
}
</code></pre>
<p>This program will produce an output as shown below</p>
<pre><code class="lang-go">&gt; <span class="hljs-keyword">go</span> run main.<span class="hljs-keyword">go</span>
*main.Person
Name: Alice
Age: <span class="hljs-number">30</span>
</code></pre>
<h2 id="heading-understanding-make-function"><strong>Understanding</strong> <code>make()</code> Function</h2>
<p>The syntax of the <code>make()</code> function varies depending on the type it is used with</p>
<p><strong><em>For Slices</em></strong></p>
<pre><code class="lang-go"><span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">make</span><span class="hljs-params">([]Type, <span class="hljs-built_in">len</span>, <span class="hljs-built_in">cap</span>)</span> []<span class="hljs-title">Type</span></span>
</code></pre>
<ul>
<li><p><code>Type</code>: The type of elements the slice will hold.</p>
</li>
<li><p><code>len</code>: The initial length of the slice.</p>
</li>
<li><p><code>cap</code>: The capacity of the slice, which is optional and used to specify the underlying array's capacity. If not provided, it defaults to the same value as the length.</p>
</li>
</ul>
<p>Example of creating a slice using <code>make()</code>:</p>
<pre><code class="lang-go"><span class="hljs-keyword">package</span> main

<span class="hljs-keyword">import</span> <span class="hljs-string">"fmt"</span>

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">main</span><span class="hljs-params">()</span></span> {
    <span class="hljs-comment">// Using make() to create a slice of integers</span>
    numbers := <span class="hljs-built_in">make</span>([]<span class="hljs-keyword">int</span>, <span class="hljs-number">5</span>, <span class="hljs-number">10</span>)

    <span class="hljs-comment">// Displaying the slice's length, capacity, and values</span>
    fmt.Println(<span class="hljs-string">"Length:"</span>, <span class="hljs-built_in">len</span>(numbers))
    fmt.Println(<span class="hljs-string">"Capacity:"</span>, <span class="hljs-built_in">cap</span>(numbers))
    fmt.Println(<span class="hljs-string">"Values:"</span>, numbers)

    <span class="hljs-comment">// Using make() to create a slice of integers</span>
    numbersWithoutOptional := <span class="hljs-built_in">make</span>([]<span class="hljs-keyword">int</span>, <span class="hljs-number">5</span>)

    <span class="hljs-comment">// Displaying the slice's length, capacity, and values</span>
    fmt.Println(<span class="hljs-string">"Length:"</span>, <span class="hljs-built_in">len</span>(numbersWithoutOptional))
    fmt.Println(<span class="hljs-string">"Capacity:"</span>, <span class="hljs-built_in">cap</span>(numbersWithoutOptional))
    fmt.Println(<span class="hljs-string">"Values:"</span>, numbersWithoutOptional)
}
</code></pre>
<p>This program will produce an output as below</p>
<pre><code class="lang-bash">&gt; go run main.go
Length: 5
Capacity: 10
Values: [0 0 0 0 0]
Length: 5
Capacity: 5
Values: [0 0 0 0 0]
</code></pre>
<p><strong><em>For Maps</em></strong></p>
<pre><code class="lang-go"><span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">make</span><span class="hljs-params">(<span class="hljs-keyword">map</span>[KeyType]ValueType, initialCapacity <span class="hljs-keyword">int</span>)</span> <span class="hljs-title">map</span>[<span class="hljs-title">KeyType</span>]<span class="hljs-title">ValueType</span></span>
</code></pre>
<ul>
<li><p><code>KeyType</code>: The type of keys in the map.</p>
</li>
<li><p><code>ValueType</code>: The type of values associated with the keys.</p>
</li>
<li><p><code>initialCapacity</code>: The initial capacity of the map. This is optional but can be used to optimize performance when the number of elements is known in advance.</p>
</li>
</ul>
<p>Example of creating a map using <code>make()</code></p>
<pre><code class="lang-go"><span class="hljs-keyword">package</span> main

<span class="hljs-keyword">import</span> <span class="hljs-string">"fmt"</span>

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">main</span><span class="hljs-params">()</span></span> {
    <span class="hljs-comment">// Using make() to create a map of string keys and int values</span>
    scores := <span class="hljs-built_in">make</span>(<span class="hljs-keyword">map</span>[<span class="hljs-keyword">string</span>]<span class="hljs-keyword">int</span>)

    <span class="hljs-comment">// Adding values to the map</span>
    scores[<span class="hljs-string">"Alice"</span>] = <span class="hljs-number">95</span>
    scores[<span class="hljs-string">"Bob"</span>] = <span class="hljs-number">87</span>

    <span class="hljs-comment">// Displaying the map</span>
    fmt.Println(<span class="hljs-string">"Scores:"</span>, scores)
}
</code></pre>
<pre><code class="lang-go">&gt; <span class="hljs-keyword">go</span> run main.<span class="hljs-keyword">go</span>
Scores: <span class="hljs-keyword">map</span>[Alice:<span class="hljs-number">95</span> Bob:<span class="hljs-number">87</span>]
</code></pre>
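<p>The optional capacity hint mentioned above can also be passed when creating a map. A minimal sketch (the variable names here are illustrative, not from the example above):</p>
<pre><code class="lang-go">package main

import "fmt"

func main() {
    // Pre-sizing the map when the number of entries is known in
    // advance can reduce internal rehashing as it grows.
    inventory := make(map[string]int, 100)

    inventory["apples"] = 42
    inventory["pears"] = 7

    // Note: unlike slices, maps have no cap(); len() reports only
    // the number of keys currently stored.
    fmt.Println("Items:", len(inventory))
}
</code></pre>
<p>Running it prints <code>Items: 2</code>; the capacity hint affects only the initial allocation, not <code>len()</code>.</p>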
<p><strong><em>For Channels</em></strong></p>
<pre><code class="lang-go">func make(chan Type, capacity int) chan Type
</code></pre>
<ul>
<li><p><code>Type</code>: The type of values that can be sent and received through the channel.</p>
</li>
<li><p><code>capacity</code>: The buffer size of the channel. If set to 0, the channel is unbuffered.</p>
</li>
</ul>
<p>Example of creating a channel using <code>make()</code></p>
<pre><code class="lang-go"><span class="hljs-keyword">package</span> main

<span class="hljs-keyword">import</span> (
    <span class="hljs-string">"fmt"</span>
    <span class="hljs-string">"time"</span>
)

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">main</span><span class="hljs-params">()</span></span> {
    <span class="hljs-comment">// Using make() to create an unbuffered channel of integers</span>
    ch := <span class="hljs-built_in">make</span>(<span class="hljs-keyword">chan</span> <span class="hljs-keyword">int</span>)

    <span class="hljs-comment">// Sending data into the channel using a goroutine</span>
    <span class="hljs-keyword">go</span> <span class="hljs-function"><span class="hljs-keyword">func</span><span class="hljs-params">()</span></span> {
        <span class="hljs-keyword">for</span> i := <span class="hljs-number">1</span>; i &lt;= <span class="hljs-number">5</span>; i++ {
            ch &lt;- i
            time.Sleep(time.Second) <span class="hljs-comment">// Simulating some work before sending the next value</span>
        }
        <span class="hljs-built_in">close</span>(ch)
    }()

    <span class="hljs-comment">// Receiving data from the channel</span>
    <span class="hljs-keyword">for</span> num := <span class="hljs-keyword">range</span> ch {
        fmt.Println(<span class="hljs-string">"Received:"</span>, num)
    }
}
</code></pre>
<pre><code class="lang-bash">&gt; go run main.go
Received: 1
Received: 2
Received: 3
Received: 4
Received: 5
</code></pre>
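<p>Since the <code>capacity</code> argument is the interesting part of the signature, here is a variation of the same example using a buffered channel (a small sketch, not from the original example):</p>
<pre><code class="lang-go">package main

import "fmt"

func main() {
    // A buffered channel with capacity 3: sends do not block
    // until the buffer is full, so no goroutine is needed here.
    ch := make(chan int, 3)

    ch &lt;- 1
    ch &lt;- 2
    ch &lt;- 3
    close(ch)

    // Draining the buffered values after the channel is closed.
    for num := range ch {
        fmt.Println("Received:", num)
    }
}
</code></pre>
<p>Because the buffer holds all three values, the sends complete immediately and the loop then drains them in order.</p>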
<h2 id="heading-conclusion"><strong>Conclusion</strong></h2>
<p>In this blog post, we have demystified the <code>new()</code> and <code>make()</code> functions in Go and explained their differences and use cases. To summarize:</p>
<ul>
<li><p>Use <code>new()</code> to allocate memory for value types and obtain a pointer to the zeroed value.</p>
</li>
<li><p>Use <code>make()</code> to create and initialize slices, maps, and channels (reference types) with their respective types and initial capacities.</p>
</li>
</ul>
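<p>The two bullet points above can be sketched side by side (a minimal illustration):</p>
<pre><code class="lang-go">package main

import "fmt"

func main() {
    // new(T) allocates a zeroed T and returns a pointer to it.
    count := new(int)
    fmt.Println("*count:", *count) // 0

    // make() initializes the internal structure of a slice,
    // map, or channel so it is immediately usable.
    words := make([]string, 0, 4)
    words = append(words, "go")
    fmt.Println("len:", len(words), "cap:", cap(words)) // len: 1 cap: 4
}
</code></pre>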
<p>Understanding the distinctions between <code>new()</code> and <code>make()</code> is crucial for efficient memory allocation and data initialization in Go. Properly applying these functions will lead to cleaner and more optimized code in your Golang projects. Happy coding!</p>
]]></content:encoded></item><item><title><![CDATA[Create a simple fileserver in Golang]]></title><description><![CDATA[In modern software development, there are often scenarios where you need to share files or directories over the internet quickly and securely. Whether it's for collaborating with team members, sharing assets, or providing a simple file-sharing soluti...]]></description><link>https://thebugshots.dev/create-a-simple-fileserver-in-golang</link><guid isPermaLink="true">https://thebugshots.dev/create-a-simple-fileserver-in-golang</guid><category><![CDATA[golang]]></category><category><![CDATA[create http server]]></category><category><![CDATA[file-share]]></category><category><![CDATA[Programming Blogs]]></category><category><![CDATA[Programming Tips]]></category><dc:creator><![CDATA[Sidharthan Chandrasekaran Kamaraj]]></dc:creator><pubDate>Sat, 22 Jul 2023 04:53:57 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1690001203085/a3d7bf43-e9f1-49cf-9f6a-dfc02db26ff1.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In modern software development, there are often scenarios where you need to share files or directories over the internet quickly and securely. Whether it's for collaborating with team members, sharing assets, or providing a simple file-sharing solution, having a quick way to expose a local directory through an HTTP server can be incredibly useful. In this blog post, we'll explore a concise and straightforward Go code snippet that allows you to achieve just that.</p>
<pre><code class="lang-go"><span class="hljs-keyword">package</span> main

<span class="hljs-keyword">import</span> (
    <span class="hljs-string">"fmt"</span>
    <span class="hljs-string">"net/http"</span>
    <span class="hljs-string">"os"</span>
)

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">main</span><span class="hljs-params">()</span></span> {
    <span class="hljs-comment">// Replace "." with the actual path of the directory you want to expose.</span>
    directoryPath := <span class="hljs-string">"."</span>

    <span class="hljs-comment">// Check if the directory exists</span>
    _, err := os.Stat(directoryPath)
    <span class="hljs-keyword">if</span> os.IsNotExist(err) {
        fmt.Printf(<span class="hljs-string">"Directory '%s' not found.\n"</span>, directoryPath)
        <span class="hljs-keyword">return</span>
    }

    <span class="hljs-comment">// Create a file server handler to serve the directory's contents</span>
    fileServer := http.FileServer(http.Dir(directoryPath))

    <span class="hljs-comment">// Create a new HTTP server and handle requests</span>
    http.Handle(<span class="hljs-string">"/"</span>, fileServer)

    <span class="hljs-comment">// Start the server on port 8080</span>
    port := <span class="hljs-number">8080</span>
    fmt.Printf(<span class="hljs-string">"Server started at http://localhost:%d\n"</span>, port)
    err = http.ListenAndServe(fmt.Sprintf(<span class="hljs-string">":%d"</span>, port), <span class="hljs-literal">nil</span>)
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        fmt.Printf(<span class="hljs-string">"Error starting server: %s\n"</span>, err)
    }
}
</code></pre>
<h3 id="heading-understanding-the-code">Understanding the code</h3>
<ol>
<li><p><strong>Package and Imports</strong>: The code begins with the standard Go package declaration (<code>package main</code>) and the necessary imports. It utilizes the <code>"fmt"</code>, <code>"net/http"</code>, and <code>"os"</code> packages.</p>
</li>
<li><p><strong>Setting the Directory Path</strong>: The variable <code>directoryPath</code> stores the path of the local directory that you want to expose. In this example, it is set to <code>"."</code>, which represents the current working directory. You can change this to the desired directory path on your system.</p>
</li>
<li><p><strong>Checking Directory Existence</strong>: Before proceeding, the code checks if the specified directory exists. It does this by using <code>os.Stat()</code> which returns an error if the directory does not exist. The function <code>os.IsNotExist()</code> is then used to handle this specific error condition. If the directory doesn't exist, the code will print an error message and terminate.</p>
</li>
<li><p><strong>Creating the File Server</strong>: Assuming the directory exists, the code creates an HTTP file server handler using <code>http.FileServer(http.Dir(directoryPath))</code>. This file server will serve the contents of the specified directory.</p>
</li>
<li><p><strong>Handling Requests</strong>: The code then sets up a new HTTP server using <code>http.Handle("/", fileServer)</code>. This means that any incoming HTTP requests to the server will be handled by the file server created in the previous step.</p>
</li>
<li><p><strong>Starting the Server</strong>: The server is started on port 8080 by calling <code>http.ListenAndServe(fmt.Sprintf(":%d", port), nil)</code>. If the server encounters any errors during startup, they will be printed out.</p>
</li>
</ol>
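<p>As a small variation on step 5 (a hedged sketch, not part of the original snippet), the same file server can be mounted under a subpath with <code>http.StripPrefix</code>, which removes the prefix before the handler resolves paths on disk:</p>
<pre><code class="lang-go">package main

import (
    "fmt"
    "net/http"
)

func main() {
    // Serve the current directory under /files/ instead of the root.
    fileServer := http.FileServer(http.Dir("."))
    http.Handle("/files/", http.StripPrefix("/files/", fileServer))

    fmt.Println("Server started at http://localhost:8080/files/")
    if err := http.ListenAndServe(":8080", nil); err != nil {
        fmt.Printf("Error starting server: %s\n", err)
    }
}
</code></pre>
<p>With this version, a request for <code>/files/main.go</code> is translated to <code>main.go</code> inside the served directory.</p>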
<h3 id="heading-running-the-code">Running the code</h3>
<p>To run this code on your machine, follow these steps:</p>
<ol>
<li><p>Save the code to a file with a ".go" extension (e.g., <code>main.go</code>).</p>
</li>
<li><p>Open your terminal or command prompt and navigate to the directory containing the file.</p>
</li>
<li><p>Compile and run the code by entering the command: <code>go run main.go</code>.</p>
</li>
</ol>
<p>Once the server is up and running, you can access the contents of the specified directory by visiting <a target="_blank" href="http://localhost:8080"><code>http://localhost:8080</code></a> in your web browser.</p>
<h3 id="heading-conclusion">Conclusion</h3>
<p>In this blog post, we explored a simple yet powerful Go code snippet that allows you to expose a local directory through an HTTP server. This can be immensely useful for sharing files, collaborating with team members, or even setting up a basic file-sharing solution. By understanding this code, you've taken a step towards harnessing the potential of Go for handling web-related tasks efficiently. Remember to ensure the directory you want to expose exists and to keep security considerations in mind when making files accessible over the internet.</p>
]]></content:encoded></item><item><title><![CDATA[Taskfiles Essentials: Advanced workflows]]></title><description><![CDATA[In the previous blog post, we explored a basic use case of Taskfiles for build automation and developer workflow. In this post, I will cover some advanced workflows that help in maintaining common taskfiles across multiple projects.
As your developme...]]></description><link>https://thebugshots.dev/taskfiles-essentials-advanced-workflows</link><guid isPermaLink="true">https://thebugshots.dev/taskfiles-essentials-advanced-workflows</guid><category><![CDATA[taskfiles]]></category><category><![CDATA[Programming Blogs]]></category><category><![CDATA[Programming Tips]]></category><category><![CDATA[Build tool]]></category><category><![CDATA[taskfile]]></category><category><![CDATA[Build Automation]]></category><dc:creator><![CDATA[Sidharthan Chandrasekaran Kamaraj]]></dc:creator><pubDate>Fri, 21 Jul 2023 14:20:45 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1689947285231/3927ad38-34b9-48a6-93ea-26a597683b98.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In the previous blog post, we explored a basic use case of Taskfiles for build automation and developer workflow. In this post, I will cover some advanced workflows that help in maintaining common taskfiles across multiple projects.</p>
<p>As your development projects grow, you may find yourself duplicating the same taskfiles across multiple repositories. This wastes time and effort, since similar changes have to be made in every project, and it can also lead to errors and inconsistencies.</p>
<p>The team I worked on faced a similar situation. To overcome it, we created a Git repository called <code>templates</code> where we store the taskfiles common to almost all our projects (for tasks like building, setting environment variables, and Helm, Docker and other commands).</p>
<h3 id="heading-common-repository">Common Repository</h3>
<p>Once we created the common repository, we split our shared tasks into multiple files such as <code>docker.yml</code>, <code>env.yml</code>, <code>helm.yml</code>, <code>go.yml</code>, etc. After this split, our Git repository looked like this:</p>
<pre><code class="lang-plaintext">├── README.md
├── taskfiles
│   ├── docker.yml
│   ├── env.yml
│   ├── git.yml
│   ├── go.yml
│   ├── helm.yml
│   └── vagrant.yml
└── vagrant
    ├── Vagrantfile
    └── devenv_customization.sh
</code></pre>
<p>We were using Vagrant to keep our development environments (both Windows and Mac) identical, but that's a topic for another blog :)</p>
<p>We stored these taskfiles in a folder named <code>taskfiles</code>. Once the repository was prepared, we went to the individual repositories of our services and added the common repository as a Git submodule by executing the command below. (This command only needs to be run once; we then commit the changes so it never has to be run again.)</p>
<pre><code class="lang-bash">git submodule add https://github.com/&lt;repo url&gt; templates
</code></pre>
<p>Once we run this command, a new folder called <code>templates</code> will be created and all our files from the common repo will be present here.</p>
<p>Now, it's time to incorporate these common Taskfiles into our service repository. Create a new file called Taskfile.yml in the root directory of the repository and import the Taskfiles as described below.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">version:</span> <span class="hljs-number">3</span>

<span class="hljs-comment">## add tasksfiles as per the project requirements</span>
<span class="hljs-attr">includes:</span>
  <span class="hljs-attr">go:</span>
    <span class="hljs-attr">taskfile:</span> <span class="hljs-string">templates/taskfiles/go.yml</span>
    <span class="hljs-attr">optional:</span> <span class="hljs-literal">true</span>
  <span class="hljs-attr">docker:</span>
    <span class="hljs-attr">taskfile:</span> <span class="hljs-string">templates/taskfiles/docker.yml</span>
    <span class="hljs-attr">optional:</span> <span class="hljs-literal">true</span>
  <span class="hljs-attr">helm:</span>
    <span class="hljs-attr">taskfile:</span> <span class="hljs-string">templates/taskfiles/helm.yml</span>
    <span class="hljs-attr">optional:</span> <span class="hljs-literal">true</span>

<span class="hljs-attr">tasks:</span>
  <span class="hljs-attr">submodule:</span>
    <span class="hljs-attr">desc:</span> <span class="hljs-string">"Update submodules"</span>
    <span class="hljs-attr">cmds:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">git</span> <span class="hljs-string">submodule</span> <span class="hljs-string">update</span> <span class="hljs-string">--init</span> <span class="hljs-string">--recursive</span> <span class="hljs-string">--remote</span> <span class="hljs-string">--merge</span>

  <span class="hljs-attr">default:</span>
    <span class="hljs-attr">desc:</span> <span class="hljs-string">"Default task"</span>
    <span class="hljs-attr">cmds:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">task</span> <span class="hljs-string">--list-all</span>
</code></pre>
<p>In this example, we have imported the necessary taskfiles for this project using the "includes" keyword. Additionally, we added a task called "submodule," which will be utilized when someone clones this repository and wants to retrieve files from the common repo into the <code>templates</code> directory.</p>
<p>Now that everything is set up, it's time to give it a test run. Execute the task command in the terminal, and you will see that all the tasks from the imported taskfiles are displayed in the terminal.</p>
<pre><code class="lang-plaintext">❯ task
task: [default] task --list-all
task: Available tasks for this project:
* default:                        Default task
* submodule:                      Update submodules
* docker:build-image:             Build Docker Image
* go:build                        Build the golang application
* helm:up                         Deploys the helm chart
* helm:down                       Deletes the helm deployment
</code></pre>
<p>Voila! Everything works as expected. Now that we have imported all the tasks into our repo's Taskfile, we can run commands like <code>task go:build</code>, <code>task helm:up</code>, etc.</p>
<p>This method worked great for our team to clean up the clutter of common tasks in the Taskfiles of multiple repositories. By storing them in a single place and using them only where needed, we've streamlined our workflow.</p>
<p>In this blog post, we discussed advanced workflows for maintaining common Taskfiles across multiple projects using a Git repository called <code>templates</code>. By splitting common tasks into separate files and incorporating them as Git submodules, we streamline the development process, reduce repetitive work, and minimize errors and inconsistencies. I hope this blog helps you implement common taskfiles in your projects too :) See you in my next blog. Until then, Happy Coding!</p>
<p>Sample github repo - <a target="_blank" href="https://github.com/cksidharthan/taskfile-blog-example">https://github.com/cksidharthan/taskfile-blog-example</a></p>
]]></content:encoded></item><item><title><![CDATA[Taskfiles: Simplify Your Development Process with Style and Simplicity]]></title><description><![CDATA[In the fast-paced world of software development, optimizing productivity and reducing repetitive tasks are essential for efficient project management. For a long time, I was using makefiles in my projects, Although it was working fine, It became hard...]]></description><link>https://thebugshots.dev/taskfiles-streamlining-your-development-workflow-with-elegance-and-ease</link><guid isPermaLink="true">https://thebugshots.dev/taskfiles-streamlining-your-development-workflow-with-elegance-and-ease</guid><category><![CDATA[Build tool]]></category><category><![CDATA[Workflow Automation]]></category><category><![CDATA[Programming Tips]]></category><category><![CDATA[Programming Blogs]]></category><category><![CDATA[taskfiles]]></category><dc:creator><![CDATA[Sidharthan Chandrasekaran Kamaraj]]></dc:creator><pubDate>Thu, 20 Jul 2023 20:35:53 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1689885171213/f2e0502b-e71b-41da-8817-9c58ad1eb11b.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In the fast-paced world of software development, optimizing productivity and reducing repetitive tasks are essential for efficient project management. For a long time, I used Makefiles in my projects. Although they worked fine, they became hard to maintain over time and weren't flexible enough for our team's use cases.</p>
<p>We were searching for an alternate build tool that is much more flexible and maintainable. Enter Taskfiles, a versatile tool that simplifies the process of automating tasks and streamlining your development workflow. In this blog post, we will dive deep into Taskfiles, understanding what they are, how they work, and how they can enhance your development experience.</p>
<h3 id="heading-what-is-a-taskfile">What is a Taskfile?</h3>
<p>Taskfiles are a powerful tool designed to automate various tasks in your development workflow. They offer an elegant and straightforward way to define, organize, and execute common tasks such as building, testing, linting, deploying, and more. Unlike complex Makefiles or shell scripts, Taskfiles provide a user-friendly interface and eliminate the need for repetitive command-line typing.</p>
<h3 id="heading-installation-and-setup">Installation and Setup</h3>
<p>Setting up Taskfiles is a breeze. It requires minimal configuration, making it an excellent choice for developers of all skill levels. To get started, follow these simple steps:</p>
<p><strong>Step 1 -</strong> Install Task</p>
<p>Task is the command-line utility that reads and executes Taskfiles. To install Task, you can use Go's package manager; simply run:</p>
<pre><code class="lang-bash">go install github.com/go-task/task/v3/cmd/task@latest
</code></pre>
<p><strong>Step 2 -</strong> Create Taskfile.yml</p>
<p>In your project directory, create a file named "Taskfile.yml". This file will contain the tasks you want to automate.</p>
<h3 id="heading-syntax-and-usage">Syntax and Usage</h3>
<p>Taskfiles use YAML as their configuration language, which offers a human-readable and straightforward structure. A typical Taskfile consists of two main components: global settings and individual tasks.</p>
<p>Let's explore a basic Taskfile to understand the syntax:</p>
<pre><code class="lang-yaml"># Taskfile.yml
version: '3'

tasks:
  default:
    desc: List all tasks
    cmds:
      - task --list-all

  build:
    desc: Build Go application
    cmds:
      - go build -o myapp .
    silent: true

  test:
    desc: Test Go Code
    cmds:
      - go test ./...

  lint:
    desc: Lint Go application
    cmds:
      - golint ./...
</code></pre>
<p>In this example, we define four tasks: <code>default</code>, <code>build</code>, <code>test</code>, and <code>lint</code>. Each task contains a <code>cmds</code> key that lists the commands to be executed when the task is run. The optional <code>silent</code> key can be set to <code>true</code> to suppress command output.</p>
<p>When you type <code>task</code> and press <code>Enter</code>, you will get the list of available tasks as below:</p>
<pre><code class="lang-plaintext">❯ task
task: [default] task --list-all
task: Available tasks for this project:
* build:         Build Go application
* default:       List all tasks
* lint:          Lint Go application
* test:          Test Go Code
</code></pre>
<p>To run various tasks that you have specified in the taskfile, you can type <code>task build</code>, <code>task lint</code> or <code>task test</code> as per your requirement.</p>
<p>You can also specify environment variables that have to be set in the current environment when running the tasks. Below is an example of how to do it.</p>
<pre><code class="lang-yaml">env:
  GREETING: Hey, there!
</code></pre>
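<p>Putting the two pieces together, an environment variable can be consumed by a task like this (a small sketch; the <code>greet</code> task name is illustrative, not from the original post):</p>
<pre><code class="lang-yaml">version: '3'

env:
  GREETING: Hey, there!

tasks:
  greet:
    desc: Print the greeting from the environment
    cmds:
      - echo $GREETING
</code></pre>
<p>Running <code>task greet</code> would then print <code>Hey, there!</code>.</p>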
<p>There are more powerful workflows that can be done via Taskfiles which I will cover in future blog posts.</p>
<p>PS: I don't have anything against makefiles or any other build tools; I just found that Taskfiles greatly improved the maintainability of our codebase for our team, and we are continuing to use them. We were blown away by the capabilities of Taskfiles. I will try to cover more advanced workflows that our team has created using taskfiles in coming blogs. Until then, Happy Coding! :)</p>
]]></content:encoded></item></channel></rss>