<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Minhaz's Blog]]></title><description><![CDATA[Minhaz's Blog]]></description><link>https://blog.mdminhazulhaque.io</link><generator>RSS for Node</generator><lastBuildDate>Sat, 18 Apr 2026 16:58:11 GMT</lastBuildDate><atom:link href="https://blog.mdminhazulhaque.io/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[DIY Network Printer: Epson L130 and Raspberry Pi Zero]]></title><description><![CDATA[Tired of shuffling USB cables every time someone in your household or small office needs to print? Wish you could print wirelessly from any device on your network? Well, you're in luck! With the magic of a Raspberry Pi and a few simple commands, you ...]]></description><link>https://blog.mdminhazulhaque.io/diy-network-printer-epson-l130-and-raspberry-pi-zero</link><guid isPermaLink="true">https://blog.mdminhazulhaque.io/diy-network-printer-epson-l130-and-raspberry-pi-zero</guid><category><![CDATA[Raspberry Pi]]></category><category><![CDATA[#epson]]></category><category><![CDATA[#cups]]></category><dc:creator><![CDATA[Md. Minhazul Haque]]></dc:creator><pubDate>Mon, 15 Dec 2025 11:09:57 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/QH_SAEYJW8Y/upload/3d73cc47e24cacc94744c1191c75970a.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Tired of shuffling USB cables every time someone in your household or small office needs to print? Wish you could print wirelessly from any device on your network? Well, you're in luck! With the magic of a Raspberry Pi and a few simple commands, you can transform your trusty Epson L130 inkjet printer into a convenient network printing powerhouse.</p>
<p>This guide will walk you through the steps to set up your Raspberry Pi as a CUPS (Common Unix Printing System) server, allowing seamless printing from your laptops, desktops, and even smartphones. Let's dive in!</p>
<p><strong>What You'll Need:</strong></p>
<ul>
<li><p>Raspberry Pi (any model will work, but a stable internet connection is key)</p>
</li>
<li><p>Epson L130 inkjet printer</p>
</li>
<li><p>USB cable to connect the printer to the Raspberry Pi</p>
</li>
<li><p>MicroSD card with Raspberry Pi OS installed</p>
</li>
<li><p>Wi-Fi connection for your Raspberry Pi</p>
</li>
<li><p>A computer on the same network to configure the printer</p>
</li>
</ul>
<p><strong>Step-by-Step Guide:</strong></p>
<ol>
<li><p><strong>Update Your System:</strong> First things first, let's ensure your Raspberry Pi's software packages are up to date. Open a terminal on your Raspberry Pi and run the following command:</p>
<pre><code class="lang-bash"> sudo apt update
</code></pre>
<p> This command refreshes the package lists, ensuring you have access to the latest versions.</p>
</li>
<li><p><strong>Install CUPS and Printer Drivers:</strong> Now, we'll install CUPS, the printing system we'll be using, and the Gutenprint drivers, which provide support for a wide range of printers, including the Epson L130. Execute this command:</p>
<pre><code class="lang-bash"> sudo apt install cups printer-driver-gutenprint
</code></pre>
<p> Confirm the installation when prompted by pressing <code>Y</code> and hitting Enter.</p>
</li>
<li><p><strong>Add Your User to the Printing Group:</strong> To manage the printer effectively, you'll need to add your user account (in this case, "minhaz") to the <code>lpadmin</code> group. This grants you administrative privileges for the printing system. Run the following command, replacing "minhaz" with your actual username if it's different:</p>
<pre><code class="lang-bash"> sudo usermod -a -G lpadmin minhaz
</code></pre>
<p> You might need to log out and log back in for this change to take effect.</p>
</li>
<li><p><strong>Allow Remote Access to CUPS:</strong> By default, CUPS only allows access from the local machine. To enable printing from other devices on your network, you need to allow remote access. Use the following command:</p>
<pre><code class="lang-bash"> sudo cupsctl --remote-any
</code></pre>
<p> This command configures CUPS to accept print jobs from any IP address on your network. <strong>Be mindful of your network security when enabling this.</strong></p>
</li>
<li><p><strong>Restart the CUPS Service:</strong> To apply the changes you've made, restart the CUPS service:</p>
<pre><code class="lang-bash"> sudo systemctl restart cups
</code></pre>
<p> This ensures that CUPS reloads its configuration with the new settings.</p>
</li>
</ol>
<p><strong>Configuring the Printer through the Web Interface:</strong></p>
<p>Now that your Raspberry Pi is running the CUPS server, you can configure the Epson L130 through a web browser on another computer on the same network.</p>
<ol>
<li><p><strong>Find Your Raspberry Pi's IP Address:</strong> Open a terminal on your Raspberry Pi and run:</p>
<pre><code class="lang-bash"> hostname -I
</code></pre>
<p> This will display the IP address of your Raspberry Pi.</p>
</li>
<li><p><strong>Access the CUPS Web Interface:</strong> Open a web browser on your computer and enter the IP address of your Raspberry Pi followed by port <code>631</code>. For example, if your Raspberry Pi's IP address is <code>192.168.1.100</code>, you would enter <code>http://192.168.1.100:631</code> in your browser's address bar.</p>
</li>
<li><p><strong>Add Your Printer:</strong></p>
<ul>
<li><p>You might see a security warning; proceed to the website.</p>
</li>
<li><p>Click on the <strong>"Administration"</strong> tab.</p>
</li>
<li><p>Under the "Printers" section, click <strong>"Add Printer"</strong>.</p>
</li>
<li><p>You might be prompted for a username and password. Use the username you used to log in to your Raspberry Pi and its corresponding password.</p>
</li>
<li><p>CUPS will search for connected printers. You should see your Epson L130 listed. Select it and click <strong>"Continue"</strong>.</p>
</li>
<li><p>On the next screen, you can give your printer a descriptive name (e.g., "Epson Network Printer"), a location, and a description. Click <strong>"Continue"</strong>.</p>
</li>
<li><p>In the "Make" list, select <strong>"Epson"</strong>. Click <strong>"Continue"</strong>.</p>
</li>
<li><p>In the "Model" list, pick the entry that best matches the L130. The <strong>Gutenprint</strong> drivers are usually a good choice: look for an Epson L-series entry, or fall back to a generic <strong>"Epson Stylus Series"</strong> driver. You may need to experiment with a few drivers if the first one doesn't print perfectly. Click <strong>"Add Printer"</strong>.</p>
</li>
<li><p>You might be asked to set default options for the printer. Configure them as needed and click <strong>"Set Default Options"</strong>.</p>
</li>
</ul>
</li>
</ol>
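<p>If you prefer the terminal over the web UI, the same queue can be registered with <code>lpadmin</code>. This is only a sketch: the device URI and driver name below are placeholders that vary per system, so discover the real values first with <code>lpinfo -v</code> (device URIs) and <code>lpinfo -m</code> (driver names) on your Pi.</p>
<pre><code class="lang-bash"># Register the print queue from the command line.
# DEVICE_URI and PPD are placeholders; find your real values with
# `lpinfo -v` and `lpinfo -m`.
QUEUE="Epson_Network_Printer"
DEVICE_URI="usb://EPSON/L130"                # placeholder
PPD="gutenprint.5.3://escp2-l130/expert"     # placeholder
sudo lpadmin -p "$QUEUE" -E -v "$DEVICE_URI" -m "$PPD"
</code></pre>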
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1761761783091/c41fc9f3-5885-4cfd-8edb-7d1ce691892a.png" alt class="image--center mx-auto" /></p>
<p><strong>Printing from Other Devices:</strong></p>
<p>That's it! Your Epson L130 is now a network printer. To print from other devices on your network:</p>
<ul>
<li><p><strong>Windows:</strong> Go to "Settings" &gt; "Devices" &gt; "Printers &amp; scanners" &gt; "Add a printer or scanner". It should automatically detect your network printer. If not, you can manually add it using its IP address (the Raspberry Pi's IP address) and specifying it as an IPP printer.</p>
</li>
<li><p><strong>macOS:</strong> Go to "System Preferences" &gt; "Printers &amp; Scanners" &gt; click the "+" button. Select "IP" and enter the Raspberry Pi's IP address in the "Address" field, using <code>ipp</code> as the protocol and <code>printers/your_printer_name</code> (the name you gave the printer in CUPS) as the "Queue".</p>
</li>
<li><p><strong>Linux:</strong> The process varies depending on your distribution, but generally involves adding a new printer and selecting the IPP protocol, then entering the Raspberry Pi's IP address and the CUPS printer queue name.</p>
</li>
</ul>
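<p>For the manual IPP entries above, the queue URI always follows the same pattern. This snippet shows how it is assembled; the IP and queue name are the examples used in this guide, so substitute your own:</p>
<pre><code class="lang-bash"># Build the IPP queue URI from the Pi's address and the CUPS queue name.
# Both values below are examples from this guide.
PI_IP="192.168.1.100"
QUEUE="Epson_Network_Printer"
echo "ipp://${PI_IP}:631/printers/${QUEUE}"
</code></pre>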
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1765796885160/d878488f-5258-4aa6-9050-ad74e6a2b5a3.png" alt class="image--center mx-auto" /></p>
<p><strong>Enjoy Wireless Printing!</strong></p>
<p>You've successfully transformed your Raspberry Pi into a network print server for your Epson L130. Now you can enjoy the convenience of wireless printing from all your devices without the hassle of tangled USB cables. This simple yet powerful setup can significantly improve your printing workflow and make sharing your printer a breeze. Happy printing!</p>
]]></content:encoded></item><item><title><![CDATA[Monitor Kubernetes ConfigMaps & Secrets in GCP]]></title><description><![CDATA[It is quite common to lose the integrity of configmaps/secrets for the following reasons:

You have a large team with more than 5 people

You do not use any Config/Secret Management Tool

Lack of team collaboration


Anyway, that's not the point. All...]]></description><link>https://blog.mdminhazulhaque.io/monitor-kubernetes-configmaps-and-secrets-in-gcp</link><guid isPermaLink="true">https://blog.mdminhazulhaque.io/monitor-kubernetes-configmaps-and-secrets-in-gcp</guid><category><![CDATA[GCP]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[configmap]]></category><category><![CDATA[secrets]]></category><dc:creator><![CDATA[Md. Minhazul Haque]]></dc:creator><pubDate>Fri, 25 Apr 2025 16:13:54 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/8xAA0f9yQnE/upload/7db1714253ee452b93da19fe4e141bea.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>It is quite common to lose the integrity of configmaps/secrets for the following reasons:</p>
<ul>
<li><p>You have a large team with more than 5 people</p>
</li>
<li><p>You do not use any Config/Secret Management Tool</p>
</li>
<li><p>Lack of team collaboration</p>
</li>
</ul>
<p>Anyway, that's not the point. All you need is to run the following query in the Logging service to find out which users made changes to which configmap or secret.</p>
<pre><code class="lang-json">protoPayload.@type = <span class="hljs-string">"type.googleapis.com/google.cloud.audit.AuditLog"</span> AND protoPayload.serviceName = <span class="hljs-string">"k8s.io"</span>
resource.type=<span class="hljs-string">"k8s_cluster"</span>
protoPayload.authenticationInfo.principalEmail !~ <span class="hljs-string">"system"</span> AND protoPayload.authenticationInfo.principalEmail !~ <span class="hljs-string">"gserviceaccount"</span>
protoPayload.methodName=<span class="hljs-string">"io.k8s.core.v1.configmaps.update"</span> OR protoPayload.methodName=<span class="hljs-string">"io.k8s.core.v1.secrets.update"</span>
</code></pre>
]]></content:encoded></item><item><title><![CDATA[Tracking Service Account Modifications in Google Cloud Platform (GCP)]]></title><description><![CDATA[It's quite common to see your important service account being modified by someone. Don't worry, my friend. Here is how you can track who did what.

Login to GCP and navigate to Logging

Set a proper timeline from the date-time picker (last X hour or ...]]></description><link>https://blog.mdminhazulhaque.io/tracking-service-account-modifications-in-google-cloud-platform-gcp</link><guid isPermaLink="true">https://blog.mdminhazulhaque.io/tracking-service-account-modifications-in-google-cloud-platform-gcp</guid><category><![CDATA[GCP]]></category><category><![CDATA[Google Cloud Platform]]></category><category><![CDATA[service account ]]></category><category><![CDATA[gcloud]]></category><category><![CDATA[logging]]></category><category><![CDATA[audit]]></category><dc:creator><![CDATA[Md. Minhazul Haque]]></dc:creator><pubDate>Sun, 29 Sep 2024 15:36:19 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/B_j4LJbam5U/upload/8d76f47105129497f0dc3bf84c159187.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>It's quite common to see your important service account being modified by someone. Don't worry, my friend. Here is how you can track who did what.</p>
<ol>
<li><p>Login to GCP and navigate to <strong>Logging</strong></p>
</li>
<li><p>Set a proper timeline from the date-time picker (last X hour or last Y days)</p>
</li>
<li><p>Open up the Query Editor and paste the following code snippet</p>
</li>
</ol>
<pre><code class="lang-json">protoPayload.<span class="hljs-string">"@type"</span>=<span class="hljs-string">"type.googleapis.com/google.cloud.audit.AuditLog"</span>
resource.type=<span class="hljs-string">"service_account"</span>
protoPayload.methodName=<span class="hljs-string">"google.iam.admin.v1.DeleteServiceAccount"</span>
</code></pre>
<p>Voila! Look for the <code>principalEmail</code> field in the output, which will show the name of the person (or bot) who made the change.</p>
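<p>Service account key creation and deletion are just as sensitive. If your audit logs follow the same <code>google.iam.admin.v1</code> method-name pattern (they normally do), the query is easy to widen:</p>
<pre><code class="lang-json">protoPayload."@type"="type.googleapis.com/google.cloud.audit.AuditLog"
resource.type="service_account"
protoPayload.methodName=("google.iam.admin.v1.DeleteServiceAccount" OR "google.iam.admin.v1.CreateServiceAccountKey" OR "google.iam.admin.v1.DeleteServiceAccountKey")
</code></pre>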
]]></content:encoded></item><item><title><![CDATA[Optimize Your Sharp Smart TV: Disable Built-in Apps for Faster Performance]]></title><description><![CDATA[You can speed up your smart tv experience by disabling some annoying, built in apps.
First, enable ADB on your Android TV unit.
To Activate Developer Options: Navigate to Settings > About. Tap Build number seven times to enable developer mode.
Once d...]]></description><link>https://blog.mdminhazulhaque.io/optimize-your-sharp-smart-tv-disable-built-in-apps-for-faster-performance</link><guid isPermaLink="true">https://blog.mdminhazulhaque.io/optimize-your-sharp-smart-tv-disable-built-in-apps-for-faster-performance</guid><category><![CDATA[Android]]></category><category><![CDATA[smarttv]]></category><category><![CDATA[adb]]></category><category><![CDATA[package manager]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Md. Minhazul Haque]]></dc:creator><pubDate>Mon, 24 Jun 2024 01:08:35 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/EOQhsfFBhRk/upload/e15a24c39eb7d29c1386b0c0bb9dade5.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>You can speed up your smart tv experience by disabling some annoying, built in apps.</p>
<p>First, enable ADB on your Android TV unit.</p>
<p>To activate Developer Options, navigate to <strong>Settings</strong> &gt; <strong>About</strong> and tap <strong>Build number</strong> seven times.</p>
<p>Once done, find your TV's IP address and connect to it using the <code>adb</code> command line tool from your personal PC or Mac.</p>
<pre><code class="lang-bash">adb connect 192.168.254.101
adb shell
</code></pre>
<p>Finally, uninstall the following packages one by one. (With <code>--user 0</code>, the app is only removed for the primary user, so a factory reset will bring everything back.)</p>
<pre><code class="lang-bash">pm uninstall --user 0 azan.android.av.sharp.co.jp
pm uninstall --user 0 com.amazon.amazonvideo.livingroom
pm uninstall --user 0 com.google.android.play.games
pm uninstall --user 0 com.mediatek.wwtv.mediaplayer
pm uninstall --user 0 com.mediatek.wwtv.tvcenter
pm uninstall --user 0 com.mstar.android.tv.disclaimercustomization
pm uninstall --user 0 fusion.android.tv.demo
pm uninstall --user 0 jp.co.sharp.av.android.emanual
pm uninstall --user 0 jp.co.sharp.av.android.epopdemo
pm uninstall --user 0 rcota.android.av.sharp.co.jp.rcota
</code></pre>
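<p>Your unit may ship with a different bundle of preloads. To build your own hit list, dump everything installed first while still inside the adb shell (the grep pattern below is just an example; adjust it for your TV):</p>
<pre><code class="lang-bash"># List installed package names, filtered to likely vendor bloat.
pm list packages | grep -iE 'sharp|mediatek|mstar'
</code></pre>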
<p>For best performance, also turn the animation scales down.</p>
<p>To Change Animation Scales: In Settings, go to System &gt; Developer options. Under the Drawing or Animation section, locate:</p>
<ul>
<li><p>Window animation scale</p>
</li>
<li><p>Transition animation scale</p>
</li>
<li><p>Animator duration scale</p>
</li>
</ul>
<p>Set all of them to 0 (off) or 0.5x for blazing-fast performance.</p>
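<p>If you'd rather stay in the shell, the same three toggles can be flipped from the adb shell. These are standard Android global settings, so this sketch should work on most Android TV builds:</p>
<pre><code class="lang-bash"># Set all three animation scales to 0 from the adb shell.
for scale in window_animation_scale transition_animation_scale animator_duration_scale; do
    settings put global "$scale" 0.0
done
</code></pre>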
]]></content:encoded></item><item><title><![CDATA[Effortless Button Clicks with Selenium: A Quick Guide]]></title><description><![CDATA[If you need to automate the process of clicking a button on a webpage using Selenium, this guide will walk you through it effortlessly.
Prerequisites
You will need the following items.

Python3 with selenium module

Chrome Browser and Chrome Driver [...]]></description><link>https://blog.mdminhazulhaque.io/effortless-button-clicks-with-selenium-a-quick-guide</link><guid isPermaLink="true">https://blog.mdminhazulhaque.io/effortless-button-clicks-with-selenium-a-quick-guide</guid><category><![CDATA[selenium]]></category><category><![CDATA[Chrome]]></category><dc:creator><![CDATA[Md. Minhazul Haque]]></dc:creator><pubDate>Fri, 07 Jun 2024 15:08:34 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/gVQLAbGVB6Q/upload/68c1fe960119670db54497011d1b806a.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>If you need to automate the process of clicking a button on a webpage using Selenium, this guide will walk you through it effortlessly.</p>
<h3 id="heading-prerequisites">Prerequisites</h3>
<p>You will need the following items.</p>
<ul>
<li><p>Python3 with <code>selenium</code> module</p>
</li>
<li><p>Chrome Browser and Chrome Driver [<a target="_blank" href="https://developer.chrome.com/docs/chromedriver/downloads">Download</a>]</p>
</li>
</ul>
<h3 id="heading-script">Script</h3>
<pre><code class="lang-python">from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.common.by import By
import time

chrome_options = Options()
chrome_options.add_argument("--no-sandbox")
chrome_options.add_argument("--headless")

# Selenium 4 style: the driver path goes through a Service object
driver = webdriver.Chrome(service=Service("./chromedriver"), options=chrome_options)

# load webpage
driver.get("https://example.com")

# the id of the button to be clicked
submit = driver.find_element(By.ID, "btn-id")
submit.click()
time.sleep(3)

# button click action
confirm = driver.find_element(By.ID, "confirm-button")
confirm.click()

print("Done")
</code></pre>
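<p>If the confirmation button only appears after the first click finishes some work, a fixed <code>time.sleep</code> is fragile. Here is a sketch of a small helper using Selenium's explicit waits instead; it assumes the same <code>driver</code> and element IDs as the script above:</p>
<pre><code class="lang-python">from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

def wait_and_click(driver, element_id, timeout=10):
    """Wait until the element is clickable (up to `timeout` seconds), then click it."""
    element = WebDriverWait(driver, timeout).until(
        EC.element_to_be_clickable((By.ID, element_id))
    )
    element.click()

# usage: wait_and_click(driver, "confirm-button")
</code></pre>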
]]></content:encoded></item><item><title><![CDATA[GitLab Container Registry for CI/CD and Seamless Deployment]]></title><description><![CDATA[GitLab is an excellent SaaS tool for storing your code and automating workflows. If you have a managed Kubernetes cluster, you can also use GitLab as the container registry and the CI/CD platform.
Create Secret
First, you need to create a permanent A...]]></description><link>https://blog.mdminhazulhaque.io/gitlab-container-registry-for-cicd-and-seamless-deployment</link><guid isPermaLink="true">https://blog.mdminhazulhaque.io/gitlab-container-registry-for-cicd-and-seamless-deployment</guid><category><![CDATA[GitLab]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[ci-cd]]></category><category><![CDATA[containers]]></category><dc:creator><![CDATA[Md. Minhazul Haque]]></dc:creator><pubDate>Fri, 07 Jun 2024 14:47:55 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/9cCeS9Sg6nU/upload/796adabfb341b5f188c7121d1d376dbe.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>GitLab is an excellent SaaS tool for storing your code and automating workflows. If you have a managed Kubernetes cluster, you can also use GitLab as the container registry and the CI/CD platform.</p>
<h2 id="heading-create-secret">Create Secret</h2>
<p>First, you need to create a permanent <code>Access Token</code> from your GitLab repository or group that will have access to all the child repositories and the container registry. This token will be used by <code>kubelet</code> to pull images from the GitLab-managed container registry. The secret creation will look like the following.</p>
<pre><code class="lang-bash">kubectl create secret docker-registry gitlab-token-auth \
   --docker-server=https://registry.gitlab.com \
   --docker-username=kubelet \
   --docker-password=1234zxcv0987
</code></pre>
<h2 id="heading-cicd-pipeline">CI/CD Pipeline</h2>
<p>If you trigger a GitLab workflow inside GitLab-hosted runners, the workflow will have the privilege to push container images into the same code repository. Built-in variables like <code>CI_REGISTRY_USER</code>, <code>CI_REGISTRY_PASSWORD</code>, and <code>CI_REGISTRY_IMAGE</code> are automatically populated during the pipeline run. Here is the snippet that pushes the newly built image into GitLab's container registry.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">push:</span>
  <span class="hljs-attr">image:</span> <span class="hljs-string">docker:24</span>
  <span class="hljs-attr">services:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">docker:24-dind</span>
  <span class="hljs-attr">stage:</span> <span class="hljs-string">push</span>
  <span class="hljs-attr">script:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">docker</span> <span class="hljs-string">login</span> <span class="hljs-string">-u</span> <span class="hljs-string">$CI_REGISTRY_USER</span> <span class="hljs-string">-p</span> <span class="hljs-string">$CI_REGISTRY_PASSWORD</span> <span class="hljs-string">$CI_REGISTRY</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">docker</span> <span class="hljs-string">build</span> <span class="hljs-string">.</span> <span class="hljs-string">-t</span> <span class="hljs-string">$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">docker</span> <span class="hljs-string">push</span> <span class="hljs-string">$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA</span>
</code></pre>
<h2 id="heading-deployment">Deployment</h2>
<p>Once the image is pushed to the GitLab registry, you can use the previously created <code>Secret</code> to pull the image into Kubernetes and run it. You will need to patch your deployment to include the <code>imagePullSecrets</code>.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">spec:</span>
  <span class="hljs-attr">template:</span>
    <span class="hljs-attr">spec:</span>
      <span class="hljs-attr">containers:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">app</span>
        <span class="hljs-attr">image:</span> <span class="hljs-string">$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA</span>
      <span class="hljs-attr">imagePullSecrets:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">gitlab-token-auth</span>
</code></pre>
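<p>If many Deployments in the namespace pull from the same registry, an alternative is to attach the secret to the namespace's default ServiceAccount once, so every pod it runs inherits the pull secret. A sketch:</p>
<pre><code class="lang-yaml">apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
imagePullSecrets:
- name: gitlab-token-auth
</code></pre>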
]]></content:encoded></item><item><title><![CDATA[Automate Error Detection with CloudWatch Log Alarms]]></title><description><![CDATA[Assume you have a log group in CloudWatch that continuously holds the application logs. If the logs are encoded as JSON, it will be very useful to filter the logs based on specific JSON keys or fields.
Here is the CloudWatch query that filters logs w...]]></description><link>https://blog.mdminhazulhaque.io/automate-error-detection-with-cloudwatch-log-alarms</link><guid isPermaLink="true">https://blog.mdminhazulhaque.io/automate-error-detection-with-cloudwatch-log-alarms</guid><category><![CDATA[#CloudWatch]]></category><category><![CDATA[sns]]></category><category><![CDATA[AWS]]></category><category><![CDATA[Alarms]]></category><dc:creator><![CDATA[Md. Minhazul Haque]]></dc:creator><pubDate>Fri, 07 Jun 2024 14:22:53 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/TeeK3XdZd54/upload/8d0739e5a93a1e6d6f74a2ef89060413.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Assume you have a log group in CloudWatch that continuously holds the application logs. If the logs are encoded as JSON, it will be very useful to filter the logs based on specific JSON keys or fields.</p>
<p>Here is the CloudWatch Logs Insights query that filters entries with <code>level = error</code> and counts them per log group.</p>
<pre><code class="lang-bash">fields @timestamp, @message
| filter level = <span class="hljs-string">"error"</span>
| stats count(*) by @<span class="hljs-built_in">log</span>
</code></pre>
<p>If you want an automated alert every time <code>level = error</code> appears, first turn the filter into a CloudWatch metric filter, which emits a metric you can alarm on. Use the following command to create it.</p>
<pre><code class="lang-bash">aws logs put-metric-filter --cli-input-json file://filter.json
</code></pre>
<p>And here is the <code>filter.json</code> file that contains all the required information.</p>
<pre><code class="lang-json">{
    <span class="hljs-attr">"logGroupName":</span> <span class="hljs-string">"prod-backend/docker/api"</span>,
    <span class="hljs-attr">"filterName":</span> <span class="hljs-string">"api-error"</span>,
    <span class="hljs-attr">"filterPattern":</span> <span class="hljs-string">"{ $.level = \"error\" }"</span>,
    <span class="hljs-attr">"metricTransformations":</span> [
        {
            <span class="hljs-attr">"metricName":</span> <span class="hljs-string">"api-error"</span>,
            <span class="hljs-attr">"metricNamespace":</span> <span class="hljs-string">"api"</span>,
            <span class="hljs-attr">"metricValue":</span> <span class="hljs-string">"1"</span>,
            <span class="hljs-attr">"unit":</span> <span class="hljs-string">"Count"</span>
        }
    ]
}
</code></pre>
<p>Finally, create a CloudWatch alarm on the <code>api-error</code> metric with an SNS topic as its alarm action, and you will be notified every time an error lands in the log group.</p>
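<p>For reference, the alarm itself can also be created from the CLI. This is a sketch: the SNS topic ARN is a placeholder, and the period and threshold should be tuned to taste.</p>
<pre><code class="lang-bash"># Fire when the api-error metric exceeds 0 in a 5-minute window.
ALARM_NAME="api-error-alarm"
aws cloudwatch put-metric-alarm \
    --alarm-name "$ALARM_NAME" \
    --namespace api \
    --metric-name api-error \
    --statistic Sum \
    --period 300 \
    --evaluation-periods 1 \
    --threshold 0 \
    --comparison-operator GreaterThanThreshold \
    --treat-missing-data notBreaching \
    --alarm-actions arn:aws:sns:us-east-1:123456789012:alerts
</code></pre>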
]]></content:encoded></item><item><title><![CDATA[Deploying GitLab CE on Kubernetes: A Step-by-Step Guide]]></title><description><![CDATA[Deploying GitLab CE on Kubernetes can be achieved using both Helm charts and manual manifests. This guide will walk you through the process step-by-step, ensuring that you have a functioning GitLab CE instance. We will focus on creating a ConfigMap, ...]]></description><link>https://blog.mdminhazulhaque.io/deploying-gitlab-ce-on-kubernetes-a-step-by-step-guide</link><guid isPermaLink="true">https://blog.mdminhazulhaque.io/deploying-gitlab-ce-on-kubernetes-a-step-by-step-guide</guid><category><![CDATA[GitLab]]></category><category><![CDATA[Kubernetes]]></category><dc:creator><![CDATA[Md. Minhazul Haque]]></dc:creator><pubDate>Fri, 07 Jun 2024 13:54:47 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/ZV_64LdGoao/upload/b0bad1ed2c7887c5ee6207239ce6f816.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Deploying GitLab CE on Kubernetes can be achieved using both Helm charts and manual manifests. This guide will walk you through the process step-by-step, ensuring that you have a functioning GitLab CE instance. We will focus on creating a ConfigMap, setting up a Deployment, exposing it via a Service, and finally, binding a domain with an Ingress.</p>
<h3 id="heading-configmap">ConfigMap</h3>
<p>First, you need to create a ConfigMap that will be used as the core GitLab file <code>/etc/gitlab/gitlab.rb</code>. I will disable many internal tools and modules to make the core GitLab UI function as a Git server.</p>
<p>Please note that you will also need connection parameters for a persistent PostgreSQL database.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">ConfigMap</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">gitlab-config</span>
<span class="hljs-attr">data:</span>
  <span class="hljs-attr">EXTERNAL_URL:</span> <span class="hljs-string">https://gitlab.example.com</span>
  <span class="hljs-attr">GITLAB_OMNIBUS_CONFIG:</span> <span class="hljs-string">|
    # disable heavy features
    postgresql['enable'] = false
    registry['enable'] = false
    redis['enable'] = false
    nginx['enable'] = false
    logrotate['enable'] = false
    gitlab_kas['enable'] = false
    monitoring_role['enable'] = false
    prometheus['enable'] = false
    alertmanager['enable'] = false
    node_exporter['enable'] = false
    redis_exporter['enable'] = false
    postgres_exporter['enable'] = false
    gitlab_exporter['enable'] = false
    prometheus_monitoring['enable'] = false
    grafana['enable'] = false
</span>
    <span class="hljs-comment"># web    </span>
    <span class="hljs-string">gitlab_workhorse['listen_network']</span> <span class="hljs-string">=</span> <span class="hljs-string">"tcp"</span>
    <span class="hljs-string">gitlab_workhorse['listen_addr']</span> <span class="hljs-string">=</span> <span class="hljs-string">"0.0.0.0:8181"</span>

    <span class="hljs-comment"># timezone</span>
    <span class="hljs-string">gitlab_rails['time_zone']</span> <span class="hljs-string">=</span> <span class="hljs-string">'Asia/Dhaka'</span>

    <span class="hljs-comment"># database</span>
    <span class="hljs-string">gitlab_rails['db_database']</span> <span class="hljs-string">=</span> <span class="hljs-string">'db_gitlab'</span>
    <span class="hljs-string">gitlab_rails['db_username']</span> <span class="hljs-string">=</span> <span class="hljs-string">'user_gitlab'</span>
    <span class="hljs-string">gitlab_rails['db_password']</span> <span class="hljs-string">=</span> <span class="hljs-string">'0987abcd1234wxyz'</span>
    <span class="hljs-string">gitlab_rails['db_host']</span> <span class="hljs-string">=</span> <span class="hljs-string">'10.10.10.1'</span>
    <span class="hljs-string">gitlab_rails['db_port']</span> <span class="hljs-string">=</span> <span class="hljs-number">5432</span>
</code></pre>
<h3 id="heading-deployment">Deployment</h3>
<p>Now we will create a deployment that will use the ConfigMap mentioned above. As you can see, I have added three extra volumes to persistently store the <code>etc</code>, <code>log</code>, and <code>data</code> directories. This is very important; otherwise, you will lose all your data every time you restart the pod. You are welcome to use any other volume type like <code>hostPath</code>, <code>glusterfs</code>, <code>nfs</code>, etc.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">apps/v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Deployment</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">labels:</span>
    <span class="hljs-attr">app:</span> <span class="hljs-string">gitlab</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">gitlab</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">selector:</span>
    <span class="hljs-attr">matchLabels:</span>
      <span class="hljs-attr">app:</span> <span class="hljs-string">gitlab</span>
  <span class="hljs-attr">template:</span>
    <span class="hljs-attr">metadata:</span>
      <span class="hljs-attr">labels:</span>
        <span class="hljs-attr">app:</span> <span class="hljs-string">gitlab</span>
    <span class="hljs-attr">spec:</span>
      <span class="hljs-attr">containers:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">envFrom:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">configMapRef:</span>
            <span class="hljs-attr">name:</span> <span class="hljs-string">gitlab-config</span>
        <span class="hljs-attr">image:</span> <span class="hljs-string">gitlab/gitlab-ce</span>
        <span class="hljs-attr">name:</span> <span class="hljs-string">app</span>
        <span class="hljs-attr">ports:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">containerPort:</span> <span class="hljs-number">8181</span>
          <span class="hljs-attr">name:</span> <span class="hljs-string">http</span>
        <span class="hljs-attr">volumeMounts:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">mountPath:</span> <span class="hljs-string">/etc/gitlab</span>
          <span class="hljs-attr">name:</span> <span class="hljs-string">etc</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">mountPath:</span> <span class="hljs-string">/var/opt/gitlab</span>
          <span class="hljs-attr">name:</span> <span class="hljs-string">data</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">mountPath:</span> <span class="hljs-string">/var/log/gitlab</span>
          <span class="hljs-attr">name:</span> <span class="hljs-string">log</span>
      <span class="hljs-attr">volumes:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">etc</span>
        <span class="hljs-attr">hostPath:</span>
          <span class="hljs-attr">path:</span> <span class="hljs-string">/data/gitlab/etc</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">data</span>
        <span class="hljs-attr">hostPath:</span>
          <span class="hljs-attr">path:</span> <span class="hljs-string">/data/gitlab/data</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">log</span>
        <span class="hljs-attr">hostPath:</span>
          <span class="hljs-attr">path:</span> <span class="hljs-string">/data/gitlab/log</span>
</code></pre>
<h3 id="heading-service">Service</h3>
<p>Once the pods are up and running, we need to expose the Deployment as a Service. You will notice that we have exposed port <code>8181</code> of the containers to forward web traffic to Gitlab <code>workhorse</code>.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Service</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">labels:</span>
    <span class="hljs-attr">app:</span> <span class="hljs-string">gitlab</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">gitlab</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">ports:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">http</span>
      <span class="hljs-attr">port:</span> <span class="hljs-number">8181</span>
      <span class="hljs-attr">targetPort:</span> <span class="hljs-number">8181</span>
  <span class="hljs-attr">selector:</span>
    <span class="hljs-attr">app:</span> <span class="hljs-string">gitlab</span>
</code></pre>
<h3 id="heading-ingress">Ingress</h3>
<p>Finally, bind a domain to the Ingress so traffic can reach the Service.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">networking.k8s.io/v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Ingress</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">gitlab</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">rules:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">host:</span> <span class="hljs-string">gitlab.example.com</span>
      <span class="hljs-attr">http:</span>
        <span class="hljs-attr">paths:</span>
          <span class="hljs-bullet">-</span> <span class="hljs-attr">path:</span> <span class="hljs-string">/</span>
            <span class="hljs-attr">pathType:</span> <span class="hljs-string">Prefix</span>
            <span class="hljs-attr">backend:</span>
              <span class="hljs-attr">service:</span>
                <span class="hljs-attr">name:</span> <span class="hljs-string">gitlab</span>
                <span class="hljs-attr">port:</span>
                  <span class="hljs-attr">name:</span> <span class="hljs-string">http</span>
  <span class="hljs-attr">tls:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">hosts:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-string">gitlab.example.com</span>
      <span class="hljs-attr">secretName:</span> <span class="hljs-string">example-com-ssl</span>
</code></pre>
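<p>With the ConfigMap, Deployment, Service, and Ingress saved to files, you can apply everything and wait for GitLab to come up. This is a sketch; the file names are assumptions, not part of the original setup:</p>

```shell
# Hypothetical file names; adjust to however the manifests were saved
kubectl apply -f gitlab-configmap.yaml \
              -f gitlab-deployment.yaml \
              -f gitlab-service.yaml \
              -f gitlab-ingress.yaml

# GitLab can take several minutes to boot; watch the rollout
kubectl rollout status deployment/gitlab --timeout=10m
```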
]]></content:encoded></item><item><title><![CDATA[Unleashing the Power of Curl: Simplifying SFTP File Transfers]]></title><description><![CDATA[You may have used tools like scp, rsync, sftp, and lftp to transfer files over SFTP. However, you can also accomplish this with curl. Using curl has its advantages, as you won't need to implement expect-scripts to handle password prompts or manual FT...]]></description><link>https://blog.mdminhazulhaque.io/unleashing-the-power-of-curl-simplifying-sftp-file-transfers</link><guid isPermaLink="true">https://blog.mdminhazulhaque.io/unleashing-the-power-of-curl-simplifying-sftp-file-transfers</guid><category><![CDATA[curl]]></category><category><![CDATA[SFTP]]></category><category><![CDATA[Linux]]></category><dc:creator><![CDATA[Md. Minhazul Haque]]></dc:creator><pubDate>Wed, 06 Mar 2024 14:35:54 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/bSlHKWxxXak/upload/f7fec1c4f22a148512ec3dc2effd444f.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>You may have used tools like <code>scp</code>, <code>rsync</code>, <code>sftp</code>, and <code>lftp</code> to transfer files over SFTP. However, you can also accomplish this with <code>curl</code>. Using <code>curl</code> has its advantages, as you won't need to implement expect-scripts to handle password prompts or manual FTP get/put commands.</p>
<p>Here's an example snippet that downloads a file from a secure SFTP server using custom authentication.</p>
<pre><code class="lang-bash">SFTP_HOST=sftp://172.18.1.100
SFTP_AUTH=user:12345678
SFTP_FILE=/data/files/$(date --date="1 day ago" +%Y%m%d)

curl -v -u "$SFTP_AUTH" "$SFTP_HOST$SFTP_FILE" -o "$(basename "$SFTP_FILE")" 2&gt;&amp;1
</code></pre>
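<p>Uploading works the same way with <code>-T</code>. Here is a sketch that pushes a local file to the same (hypothetical) server; the host, credentials, and remote path are placeholders:</p>

```shell
SFTP_HOST=sftp://172.18.1.100
SFTP_AUTH=user:12345678
LOCAL_FILE=report.csv

# -T uploads the local file; --ftp-create-dirs creates missing remote directories.
# A trailing slash on the URL makes curl keep the local file name.
curl -v -u "$SFTP_AUTH" -T "$LOCAL_FILE" --ftp-create-dirs "$SFTP_HOST/data/uploads/"
```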
]]></content:encoded></item><item><title><![CDATA[Quick Tools Deployment with Simple Docker Commands]]></title><description><![CDATA[Well, why set up databases (or other tools) on local machines when you can quickly spin one up using Docker in just minutes? Need MySQL or PostgreSQL? I've got you covered.
MySQL
docker run --rm -d \
    -p 3306:3306 \
    -e MYSQL_ROOT_PASSWORD=1234...]]></description><link>https://blog.mdminhazulhaque.io/quick-tools-deployment-with-simple-docker-commands</link><guid isPermaLink="true">https://blog.mdminhazulhaque.io/quick-tools-deployment-with-simple-docker-commands</guid><category><![CDATA[Docker]]></category><category><![CDATA[MySQL]]></category><category><![CDATA[PostgreSQL]]></category><category><![CDATA[MongoDB]]></category><category><![CDATA[Redis]]></category><category><![CDATA[etcd]]></category><category><![CDATA[rabbitmq]]></category><dc:creator><![CDATA[Md. Minhazul Haque]]></dc:creator><pubDate>Fri, 05 Jan 2024 23:18:29 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/9cCeS9Sg6nU/upload/c8a05879fcf18f6baf2c0a3b629629ee.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Well, why set up databases (or other tools) on local machines when you can quickly spin one up using Docker in just minutes? Need MySQL or PostgreSQL? I've got you covered.</p>
<h3 id="heading-mysql">MySQL</h3>
<pre><code class="lang-bash">docker run --rm -d \
    -p 3306:3306 \
    -e MYSQL_ROOT_PASSWORD=12345678 \
    mariadb
</code></pre>
<h3 id="heading-postgresql">PostgreSQL</h3>
<pre><code class="lang-bash">docker run --rm -d \
    -p 5432:5432 \
    -e POSTGRES_PASSWORD=12345678 \
    -e POSTGRES_USER=postgres \
    -e POSTGRES_DB=postgres \
    postgres
</code></pre>
<h3 id="heading-mongodb">MongoDB</h3>
<pre><code class="lang-bash">docker run --rm -d \
    -p 27017:27017 \
    -e MONGO_INITDB_ROOT_USERNAME=root \
    -e MONGO_INITDB_ROOT_PASSWORD=12345678 \
    mongodb/mongodb-community-server:6
</code></pre>
<h3 id="heading-redis">Redis</h3>
<pre><code class="lang-bash">docker run --rm -d \
    -p 6379:6379 \
    -e ALLOW_EMPTY_PASSWORD=no \
    -e REDIS_PASSWORD=12345678 \
    bitnami/redis
</code></pre>
<h3 id="heading-etcd">etcd</h3>
<pre><code class="lang-bash">docker run --rm -d \
    -p 2379:2379 \
    -e ALLOW_NONE_AUTHENTICATION=yes \
    bitnami/etcd
</code></pre>
<h3 id="heading-rabbitmq">RabbitMQ</h3>
<pre><code class="lang-bash">docker run --rm -d \
    -p 5672:5672 \
    -p 15672:15672 \
    -e RABBITMQ_DEFAULT_USER=admin \
    -e RABBITMQ_DEFAULT_PASS=12345678 \
    rabbitmq:3-management
</code></pre>
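<p>Since all of these containers start detached, it can help to wait until a service actually accepts connections before pointing clients at it. A small bash-only probe (using the shell's built-in <code>/dev/tcp</code>, so no netcat or curl needed) could look like this:</p>

```shell
# Retry a TCP connect until the port opens or retries run out.
wait_for_port() {
  local host=$1 port=$2 retries=${3:-10}
  local i
  for i in $(seq "$retries"); do
    # Opening fd 3 on /dev/tcp/<host>/<port> succeeds only if the port accepts
    if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
      echo "port $port on $host is open"
      return 0
    fi
    sleep 1
  done
  echo "port $port on $host did not open" >&2
  return 1
}

# Example: wait for the MySQL container started above
# wait_for_port 127.0.0.1 3306
```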
]]></content:encoded></item><item><title><![CDATA[Execute Scheduled kubectl Command within a Kubernetes Cluster]]></title><description><![CDATA[Sometimes one may need to perform periodic activities (restarting pods, cleaning up volumes, updating replica counts, etc.). For self-hosted clusters, it is possible to schedule kubectl commands as cron jobs on the master nodes. But what about EKS/GK...]]></description><link>https://blog.mdminhazulhaque.io/execute-scheduled-kubectl-command-within-a-kubernetes-cluster</link><guid isPermaLink="true">https://blog.mdminhazulhaque.io/execute-scheduled-kubectl-command-within-a-kubernetes-cluster</guid><category><![CDATA[clusterrole]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[cronjob]]></category><category><![CDATA[rbac]]></category><category><![CDATA[clusterrolebindings]]></category><category><![CDATA[bitnami]]></category><dc:creator><![CDATA[Md. Minhazul Haque]]></dc:creator><pubDate>Fri, 05 Jan 2024 23:00:42 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/UAvYasdkzq8/upload/b8586ac5aa41b114a429d5a565817c32.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Sometimes one may need to perform periodic activities (restarting pods, cleaning up volumes, updating replica counts, etc.). For self-hosted clusters, it is possible to schedule <code>kubectl</code> commands as cron jobs on the master nodes. But what about EKS/GKE/AKS clusters? You wouldn't want to create a server solely for this purpose. Additionally, there are numerous overheads, such as creating IAM roles and installing <code>kubectl</code>, among others.</p>
<p>Well, here is the trick. You can do the following.</p>
<ol>
<li><p>Create a Kubernetes-native <code>CronJob</code> using any Docker image that includes <code>kubectl</code></p>
</li>
<li><p>Run the pod with a service account that has the <code>Role/ClusterRole</code> to perform the desired actions</p>
</li>
</ol>
<p>First, choose a namespace where the <code>CronJob</code> will be created and executed. Then, create a service account in that namespace.</p>
<pre><code class="lang-bash">kubectl create ns my-namespace
kubectl create sa my-cronjob -n my-namespace
</code></pre>
<p>Next, associate some RBAC rules with the service account. You can use built-in roles (like <code>cluster-admin</code> or <code>view</code>), but try not to grant excessive permissions.</p>
<pre><code class="lang-bash">kubectl create clusterrolebinding my-cronjob --clusterrole=edit \
    --serviceaccount=my-namespace:my-cronjob
</code></pre>
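<p>Before wiring up the CronJob, you can confirm the service account actually has the permissions it needs with <code>kubectl auth can-i</code>. A restart of a Deployment is performed as a patch, so that is a representative check (namespaces follow the example above):</p>

```shell
# Impersonate the service account and check the verb/resource the job will use
kubectl auth can-i patch deployments \
    --as=system:serviceaccount:my-namespace:my-cronjob \
    -n prod
```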
<p>Finally, create a <code>CronJob</code> definition that utilizes the previously mentioned <code>ServiceAccount</code>. Make sure that the <code>spec.schedule</code> is accurate in the UTC timezone. Additionally, you should include an <code>exit 0</code> at the end of the command; otherwise, Kubernetes will regard the job as failed.</p>
<p>Here's an example:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">batch/v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">CronJob</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">my-cronjob</span>
  <span class="hljs-attr">namespace:</span> <span class="hljs-string">my-namespace</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">schedule:</span> <span class="hljs-string">"0 0 * * SUN"</span>
  <span class="hljs-attr">jobTemplate:</span>
    <span class="hljs-attr">spec:</span>
      <span class="hljs-attr">template:</span>
        <span class="hljs-attr">spec:</span>
          <span class="hljs-attr">restartPolicy:</span> <span class="hljs-string">Never</span>
          <span class="hljs-attr">containers:</span>
          <span class="hljs-bullet">-</span> <span class="hljs-attr">command:</span>
            <span class="hljs-bullet">-</span> <span class="hljs-string">/bin/bash</span>
            <span class="hljs-bullet">-</span> <span class="hljs-string">-c</span>
            <span class="hljs-bullet">-</span> <span class="hljs-string">|
              kubectl -n prod rollout restart deploy
              kubectl -n stage rollout restart deploy
              kubectl -n dev rollout restart deploy
              exit 0
</span>            <span class="hljs-attr">image:</span> <span class="hljs-string">bitnami/kubectl</span>
            <span class="hljs-attr">imagePullPolicy:</span> <span class="hljs-string">IfNotPresent</span>
            <span class="hljs-attr">name:</span> <span class="hljs-string">job</span>
          <span class="hljs-attr">serviceAccountName:</span> <span class="hljs-string">my-cronjob</span>
</code></pre>
]]></content:encoded></item><item><title><![CDATA[Export CloudWatch Logs to S3 in File Format]]></title><description><![CDATA[You may want to compile all log streams within a specific log group into a single file for analysis or debugging purposes.
First, you need to create a bucket in the same region as the CloudWatch Log Group.
aws s3api create-bucket --bucket app-logs --...]]></description><link>https://blog.mdminhazulhaque.io/export-cloudwatch-logs-to-s3-in-file-format</link><guid isPermaLink="true">https://blog.mdminhazulhaque.io/export-cloudwatch-logs-to-s3-in-file-format</guid><category><![CDATA[AWS]]></category><category><![CDATA[#CloudWatch]]></category><category><![CDATA[S3]]></category><category><![CDATA[Export]]></category><dc:creator><![CDATA[Md. Minhazul Haque]]></dc:creator><pubDate>Fri, 05 Jan 2024 22:34:31 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/4_TYsMnML60/upload/88e22ac7a5d78ba0f1e06cc1c2cf92cd.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>You may want to compile all log streams within a specific log group into a single file for analysis or debugging purposes.</p>
<p>First, you need to create a bucket in the same region as the CloudWatch Log Group.</p>
<pre><code class="lang-bash">aws s3api create-bucket --bucket app-logs --create-bucket-configuration LocationConstraint=us-west-2
</code></pre>
<p>Next, you must modify the bucket policy to ensure the CloudWatch Log Exporter can write to it. Here is the policy document:</p>
<pre><code class="lang-json">{
  <span class="hljs-attr">"Version"</span>: <span class="hljs-string">"2012-10-17"</span>,
  <span class="hljs-attr">"Statement"</span>: [
    {
      <span class="hljs-attr">"Action"</span>: <span class="hljs-string">"s3:GetBucketAcl"</span>,
      <span class="hljs-attr">"Effect"</span>: <span class="hljs-string">"Allow"</span>,
      <span class="hljs-attr">"Resource"</span>: <span class="hljs-string">"arn:aws:s3:::app-logs"</span>,
      <span class="hljs-attr">"Principal"</span>: {
        <span class="hljs-attr">"Service"</span>: <span class="hljs-string">"logs.us-west-2.amazonaws.com"</span>
      }
    },
    {
      <span class="hljs-attr">"Action"</span>: <span class="hljs-string">"s3:PutObject"</span>,
      <span class="hljs-attr">"Effect"</span>: <span class="hljs-string">"Allow"</span>,
      <span class="hljs-attr">"Resource"</span>: <span class="hljs-string">"arn:aws:s3:::app-logs/*"</span>,
      <span class="hljs-attr">"Principal"</span>: {
        <span class="hljs-attr">"Service"</span>: <span class="hljs-string">"logs.us-west-2.amazonaws.com"</span>
      }
    }
  ]
}
</code></pre>
<p>Use the following command to apply the policy to the bucket.</p>
<pre><code class="lang-bash">aws s3api put-bucket-policy --bucket app-logs --policy file://policy.json
</code></pre>
<p>Next, initiate an export job that will transfer all log streams from a specific log group into the previously created S3 bucket. You also need to specify the range in Unix timestamp format.</p>
<pre><code class="lang-bash">aws logs create-export-task --task-name <span class="hljs-string">"app-logs-group-1"</span> \
    --log-group-name <span class="hljs-string">"prod/app-logs"</span> \
    --from 1704045600000 --to 1704132000000 \
    --destination <span class="hljs-string">"app-logs"</span> --destination-prefix <span class="hljs-string">"prefix1"</span>
</code></pre>
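<p>Note that <code>--from</code> and <code>--to</code> are epoch timestamps in <em>milliseconds</em>. Assuming GNU <code>date</code>, you can compute them for a given UTC day like this (the dates here are illustrative):</p>

```shell
# Epoch seconds from GNU date, with "000" appended to get milliseconds
FROM_MS=$(date -u -d "2024-01-01 00:00:00" +%s)000
TO_MS=$(date -u -d "2024-01-02 00:00:00" +%s)000
echo "--from $FROM_MS --to $TO_MS"
```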
<p>The command above will produce a task ID. You can query the task ID to check whether the export job has been completed.</p>
<pre><code class="lang-bash">aws logs describe-export-tasks --task-id d6f1d52c-2783-4145-9668-4f5cc5579f41
</code></pre>
<p>Once complete, you can simply download the bucket content to your local machine and analyze it.</p>
<pre><code class="lang-bash">aws s3 sync s3://app-logs ./logs
</code></pre>
]]></content:encoded></item><item><title><![CDATA[Applying a Single AWS ALB to Multiple Namespaces in AWS EKS]]></title><description><![CDATA[When deploying your application on EKS, using ALB to expose it is the most cloud-native approach. The AWS Load Balancer Controller simplifies the process of converting Kubernetes-native Ingress into AWS-native ALB automatically.
The Ingress objects a...]]></description><link>https://blog.mdminhazulhaque.io/reuse-a-single-aws-alb-across-multiple-namespaces-on-aws-eks</link><guid isPermaLink="true">https://blog.mdminhazulhaque.io/reuse-a-single-aws-alb-across-multiple-namespaces-on-aws-eks</guid><category><![CDATA[EKS]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[ingress]]></category><category><![CDATA[AWS]]></category><dc:creator><![CDATA[Md. Minhazul Haque]]></dc:creator><pubDate>Fri, 05 Jan 2024 21:46:01 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/5lUMTeo7-bE/upload/6216bd8c6a040272a9600947e7deb365.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>When deploying your application on EKS, using ALB to expose it is the most cloud-native approach. The <code>AWS Load Balancer Controller</code> simplifies the process of converting Kubernetes-native Ingress into AWS-native ALB automatically.</p>
<p>The <code>Ingress</code> objects are namespace-scoped. What happens if your applications are deployed in multiple namespaces? By default, the AWS Load Balancer Controller will create a separate ALB for each ingress object. Even if there are multiple <code>Ingress</code> objects in the same namespace, you will end up having an individual ALB for each of them. And that's perfectly fine.</p>
<p>However, if you have 50 or 100 namespaces and your application is deployed across many of them, you will incur higher costs, because each ALB is billed for the <strong>number of hours</strong> it remains provisioned, even if it receives no traffic.</p>
<p>There is a simple trick to consolidate all <code>Ingress</code> objects into a single ALB, where all their routes are combined into ALB listener <code>rule</code> groups. All you need to do is add the following annotation to each <code>Ingress</code>.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">alb.ingress.kubernetes.io/group.name:</span> <span class="hljs-string">myapp</span>
</code></pre>
<p>For every <code>Ingress</code> that carries the same <code>group.name</code> annotation, in any namespace, the <code>AWS Load Balancer Controller</code> will create a single ALB with multiple rules inside it. Here is a detailed example:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">networking.k8s.io/v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Ingress</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">annotations:</span>
    <span class="hljs-attr">alb.ingress.kubernetes.io/backend-protocol:</span> <span class="hljs-string">HTTP</span>
    <span class="hljs-attr">alb.ingress.kubernetes.io/certificate-arn:</span> <span class="hljs-string">arn:aws:acm:us-west-1:987654321:certificate/37c33c94-e3be-41cc-9ec8-fd16b2873bd4</span>
    <span class="hljs-attr">alb.ingress.kubernetes.io/listen-ports:</span> <span class="hljs-string">'[{"HTTP": 80}, {"HTTPS":443}]'</span>
    <span class="hljs-attr">alb.ingress.kubernetes.io/load-balancer-name:</span> <span class="hljs-string">myapp</span>
    <span class="hljs-attr">alb.ingress.kubernetes.io/group.name:</span> <span class="hljs-string">myapp</span>
    <span class="hljs-attr">alb.ingress.kubernetes.io/scheme:</span> <span class="hljs-string">internet-facing</span>
    <span class="hljs-attr">alb.ingress.kubernetes.io/ssl-redirect:</span> <span class="hljs-string">"443"</span>
    <span class="hljs-attr">alb.ingress.kubernetes.io/success-codes:</span> <span class="hljs-string">"200"</span>
    <span class="hljs-attr">alb.ingress.kubernetes.io/target-type:</span> <span class="hljs-string">instance</span>
    <span class="hljs-attr">kubernetes.io/ingress.class:</span> <span class="hljs-string">alb</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">myapp-dev</span>
  <span class="hljs-attr">namespace:</span> <span class="hljs-string">myapp-dev</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">rules:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">host:</span> <span class="hljs-string">myapp-dev.example.com</span>
    <span class="hljs-attr">http:</span>
      <span class="hljs-attr">paths:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">backend:</span>
          <span class="hljs-attr">service:</span>
            <span class="hljs-attr">name:</span> <span class="hljs-string">app</span>
            <span class="hljs-attr">port:</span>
              <span class="hljs-attr">number:</span> <span class="hljs-number">80</span>
        <span class="hljs-attr">path:</span> <span class="hljs-string">/</span>
        <span class="hljs-attr">pathType:</span> <span class="hljs-string">Prefix</span>
<span class="hljs-meta">---</span>
<span class="hljs-attr">apiVersion:</span> <span class="hljs-string">networking.k8s.io/v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Ingress</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">annotations:</span>
    <span class="hljs-attr">alb.ingress.kubernetes.io/backend-protocol:</span> <span class="hljs-string">HTTP</span>
    <span class="hljs-attr">alb.ingress.kubernetes.io/certificate-arn:</span> <span class="hljs-string">arn:aws:acm:us-west-1:987654321:certificate/37c33c94-e3be-41cc-9ec8-fd16b2873bd4</span>
    <span class="hljs-attr">alb.ingress.kubernetes.io/listen-ports:</span> <span class="hljs-string">'[{"HTTP": 80}, {"HTTPS":443}]'</span>
    <span class="hljs-attr">alb.ingress.kubernetes.io/load-balancer-name:</span> <span class="hljs-string">myapp</span>
    <span class="hljs-attr">alb.ingress.kubernetes.io/group.name:</span> <span class="hljs-string">myapp</span>
    <span class="hljs-attr">alb.ingress.kubernetes.io/scheme:</span> <span class="hljs-string">internet-facing</span>
    <span class="hljs-attr">alb.ingress.kubernetes.io/ssl-redirect:</span> <span class="hljs-string">"443"</span>
    <span class="hljs-attr">alb.ingress.kubernetes.io/success-codes:</span> <span class="hljs-string">"200"</span>
    <span class="hljs-attr">alb.ingress.kubernetes.io/target-type:</span> <span class="hljs-string">instance</span>
    <span class="hljs-attr">kubernetes.io/ingress.class:</span> <span class="hljs-string">alb</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">myapp-stage</span>
  <span class="hljs-attr">namespace:</span> <span class="hljs-string">myapp-stage</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">rules:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">host:</span> <span class="hljs-string">myapp-stage.example.com</span>
    <span class="hljs-attr">http:</span>
      <span class="hljs-attr">paths:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">backend:</span>
          <span class="hljs-attr">service:</span>
            <span class="hljs-attr">name:</span> <span class="hljs-string">app</span>
            <span class="hljs-attr">port:</span>
              <span class="hljs-attr">number:</span> <span class="hljs-number">80</span>
        <span class="hljs-attr">path:</span> <span class="hljs-string">/</span>
        <span class="hljs-attr">pathType:</span> <span class="hljs-string">Prefix</span>
<span class="hljs-meta">---</span>
<span class="hljs-attr">apiVersion:</span> <span class="hljs-string">networking.k8s.io/v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Ingress</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">annotations:</span>
    <span class="hljs-attr">alb.ingress.kubernetes.io/backend-protocol:</span> <span class="hljs-string">HTTP</span>
    <span class="hljs-attr">alb.ingress.kubernetes.io/certificate-arn:</span> <span class="hljs-string">arn:aws:acm:us-west-1:987654321:certificate/37c33c94-e3be-41cc-9ec8-fd16b2873bd4</span>
    <span class="hljs-attr">alb.ingress.kubernetes.io/listen-ports:</span> <span class="hljs-string">'[{"HTTP": 80}, {"HTTPS":443}]'</span>
    <span class="hljs-attr">alb.ingress.kubernetes.io/load-balancer-name:</span> <span class="hljs-string">myapp</span>
    <span class="hljs-attr">alb.ingress.kubernetes.io/group.name:</span> <span class="hljs-string">myapp</span>
    <span class="hljs-attr">alb.ingress.kubernetes.io/scheme:</span> <span class="hljs-string">internet-facing</span>
    <span class="hljs-attr">alb.ingress.kubernetes.io/ssl-redirect:</span> <span class="hljs-string">"443"</span>
    <span class="hljs-attr">alb.ingress.kubernetes.io/success-codes:</span> <span class="hljs-string">"200"</span>
    <span class="hljs-attr">alb.ingress.kubernetes.io/target-type:</span> <span class="hljs-string">instance</span>
    <span class="hljs-attr">kubernetes.io/ingress.class:</span> <span class="hljs-string">alb</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">myapp-prod</span>
  <span class="hljs-attr">namespace:</span> <span class="hljs-string">myapp-prod</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">rules:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">host:</span> <span class="hljs-string">myapp.example.com</span>
    <span class="hljs-attr">http:</span>
      <span class="hljs-attr">paths:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">backend:</span>
          <span class="hljs-attr">service:</span>
            <span class="hljs-attr">name:</span> <span class="hljs-string">app</span>
            <span class="hljs-attr">port:</span>
              <span class="hljs-attr">number:</span> <span class="hljs-number">80</span>
        <span class="hljs-attr">path:</span> <span class="hljs-string">/</span>
        <span class="hljs-attr">pathType:</span> <span class="hljs-string">Prefix</span>
</code></pre>
<p>You can find all available annotations here: <a target="_blank" href="https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.6/guide/ingress/annotations/">https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.6/guide/ingress/annotations/</a></p>
]]></content:encoded></item><item><title><![CDATA[Hybrid Ingress Routing with AWS ALB and Traefik]]></title><description><![CDATA[Traefik is an excellent declarative ingress controller for Kubernetes. If you don't need all the features of cloud-managed Application Load Balancers, Traefik can be a fantastic alternative.
However, there are certain limitations of Traefik on cloud ...]]></description><link>https://blog.mdminhazulhaque.io/hybrid-ingress-routing-with-aws-alb-and-traefik</link><guid isPermaLink="true">https://blog.mdminhazulhaque.io/hybrid-ingress-routing-with-aws-alb-and-traefik</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[AWS]]></category><category><![CDATA[ingress]]></category><category><![CDATA[Traefik]]></category><category><![CDATA[SSL]]></category><dc:creator><![CDATA[Md. Minhazul Haque]]></dc:creator><pubDate>Fri, 06 Oct 2023 14:12:29 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/dW7mWYlHCsc/upload/794962121280b9a3504e2521a3cc4eac.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><a target="_blank" href="https://traefik.io/">Traefik</a> is an excellent declarative ingress controller for Kubernetes. If you don't need all the features of cloud-managed Application Load Balancers, Traefik can be a fantastic alternative.</p>
<p>However, there are certain limitations of Traefik on cloud platforms. It can handle the TLS/SSL layer only through Kubernetes Secret or Let's Encrypt, meaning it cannot leverage AWS ACM for resolving SSL certificates. This limitation stems from AWS ACM itself, as it can only be used with resources like ALB and CloudFront.</p>
<p>That said, you may want both 1) SSL termination by ACM and 2) support for Traefik-specific features (Middleware, Circuit Breaker, etc.) in one place. Fortunately, I have tested a setup that lets you keep using an AWS ACM-provided SSL certificate while handling ingress routing with Traefik. Here is the high-level architecture of the proposed system:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1696017948936/73cc2555-b915-47e1-83bf-cd08f9900028.png" alt class="image--center mx-auto" /></p>
<p>To set up this system, you'll first need an EKS cluster with the <a target="_blank" href="https://kubernetes-sigs.github.io/aws-load-balancer-controller/">AWS Load Balancer Controller</a> add-on. You can bootstrap the cluster using <a target="_blank" href="https://eksctl.io/">eksctl</a>.</p>
<p>Once the cluster is ready, deploy Traefik in <code>web</code> mode only, disabling the <code>websecure</code> entrypoint.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">apps/v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Deployment</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">traefik</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">selector:</span>
    <span class="hljs-attr">matchLabels:</span>
      <span class="hljs-attr">app:</span> <span class="hljs-string">traefik</span>
  <span class="hljs-attr">template:</span> 
    <span class="hljs-attr">metadata:</span>
      <span class="hljs-attr">labels:</span>
        <span class="hljs-attr">app:</span> <span class="hljs-string">traefik</span>
    <span class="hljs-attr">spec:</span>
      <span class="hljs-attr">serviceAccountName:</span> <span class="hljs-string">traefik</span>
      <span class="hljs-attr">containers:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">image:</span> <span class="hljs-string">docker.io/traefik:v3.0</span>
        <span class="hljs-attr">name:</span> <span class="hljs-string">traefik</span>
        <span class="hljs-attr">ports:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">"web"</span>
          <span class="hljs-attr">containerPort:</span> <span class="hljs-number">80</span>
        <span class="hljs-attr">args:</span>
          <span class="hljs-bullet">-</span> <span class="hljs-string">"--entrypoints.web.address=:80/tcp"</span>
          <span class="hljs-bullet">-</span> <span class="hljs-string">"--api.dashboard=false"</span>
          <span class="hljs-bullet">-</span> <span class="hljs-string">"--providers.kubernetescrd"</span>
</code></pre>
<p>Next, create a Service for the Traefik deployment/daemonset and set its type to NodePort (with <code>target-type: instance</code>, the ALB requires a NodePort service as its backend).</p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Service</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">traefik</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">ports:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">web</span>
    <span class="hljs-attr">nodePort:</span> <span class="hljs-number">30080</span>
    <span class="hljs-attr">port:</span> <span class="hljs-number">80</span>
    <span class="hljs-attr">protocol:</span> <span class="hljs-string">TCP</span>
    <span class="hljs-attr">targetPort:</span> <span class="hljs-string">web</span>
  <span class="hljs-attr">selector:</span>
    <span class="hljs-attr">app:</span> <span class="hljs-string">traefik</span>
  <span class="hljs-attr">type:</span> <span class="hljs-string">NodePort</span>
</code></pre>
<p>Next, create an ALB Ingress with the appropriate annotations for the ACM certificate ARN, the load balancer name, and the ingress class.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">networking.k8s.io/v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Ingress</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">annotations:</span>
    <span class="hljs-attr">alb.ingress.kubernetes.io/backend-protocol:</span> <span class="hljs-string">HTTP</span>
    <span class="hljs-attr">alb.ingress.kubernetes.io/certificate-arn:</span> <span class="hljs-string">arn:aws:acm:ap-southeast-1:0000000000:certificate/cb6cf41e-a6f4-4fd3-9aa5-44503f317420</span>
    <span class="hljs-attr">alb.ingress.kubernetes.io/listen-ports:</span> <span class="hljs-string">'[{"HTTP": 80}, {"HTTPS":443}]'</span>
    <span class="hljs-attr">alb.ingress.kubernetes.io/load-balancer-name:</span> <span class="hljs-string">my-lb</span>
    <span class="hljs-attr">alb.ingress.kubernetes.io/scheme:</span> <span class="hljs-string">internet-facing</span>
    <span class="hljs-attr">alb.ingress.kubernetes.io/ssl-redirect:</span> <span class="hljs-string">"443"</span>
    <span class="hljs-attr">alb.ingress.kubernetes.io/success-codes:</span> <span class="hljs-number">200</span><span class="hljs-number">-404</span>
    <span class="hljs-attr">alb.ingress.kubernetes.io/target-type:</span> <span class="hljs-string">instance</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">traefik</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">ingressClassName:</span> <span class="hljs-string">alb</span>
  <span class="hljs-attr">rules:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">http:</span>
      <span class="hljs-attr">paths:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">backend:</span>
          <span class="hljs-attr">service:</span>
            <span class="hljs-attr">name:</span> <span class="hljs-string">traefik</span>
            <span class="hljs-attr">port:</span>
              <span class="hljs-attr">name:</span> <span class="hljs-string">web</span>
        <span class="hljs-attr">path:</span> <span class="hljs-string">/*</span>
        <span class="hljs-attr">pathType:</span> <span class="hljs-string">ImplementationSpecific</span>
</code></pre>
<p>Once applied, an ALB will be created. You can then route any HTTPS traffic through the ALB, and ultimately, it will be routed using Traefik within your Kubernetes Cluster. A sample <code>IngressRoute</code> object would look like this:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">traefik.io/v1alpha1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">IngressRoute</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">my-apps</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">entryPoints:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-string">web</span>
  <span class="hljs-attr">routes:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">kind:</span> <span class="hljs-string">Rule</span>
    <span class="hljs-attr">match:</span> <span class="hljs-string">Host(`a.example.com`)</span>
    <span class="hljs-attr">services:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">app-a</span>
      <span class="hljs-attr">port:</span> <span class="hljs-number">80</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">kind:</span> <span class="hljs-string">Rule</span>
    <span class="hljs-attr">match:</span> <span class="hljs-string">Host(`b.example.com`)</span>
    <span class="hljs-attr">services:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">app-b</span>
      <span class="hljs-attr">port:</span> <span class="hljs-number">80</span>
</code></pre>
]]></content:encoded></item><item><title><![CDATA[Helm Cheatsheet]]></title><description><![CDATA[Here is a list of commands for installing well-known charts using Helm. Most of them come from the official Bitnami repository, so you'll need to add the Bitnami repo first.
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

R...]]></description><link>https://blog.mdminhazulhaque.io/helm-cheatsheet</link><guid isPermaLink="true">https://blog.mdminhazulhaque.io/helm-cheatsheet</guid><category><![CDATA[Helm]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[Redis]]></category><category><![CDATA[rabbitmq]]></category><category><![CDATA[kafka]]></category><dc:creator><![CDATA[Md. Minhazul Haque]]></dc:creator><pubDate>Wed, 20 Sep 2023 08:18:51 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/kMSEIKwCG_A/upload/f885ac76e7f73e8d04bc08495f145c1c.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Here is a list of commands for installing well-known charts using Helm. Most of them come from the official Bitnami repository, so you'll need to add the Bitnami repo first.</p>
<pre><code class="lang-bash">helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
</code></pre>
<h2 id="heading-redis-single-node">Redis Single Node</h2>
<pre><code class="lang-bash">helm install redis bitnami/redis \
    --<span class="hljs-built_in">set</span> architecture=standalone \
    --<span class="hljs-built_in">set</span> auth.password=STRONG-PASSWORD-12345678 \
    --<span class="hljs-built_in">set</span> fullnameOverride=redis \
    --<span class="hljs-built_in">set</span> master.persistence.enabled=<span class="hljs-literal">false</span> \
    --<span class="hljs-built_in">set</span> master.disableCommands=<span class="hljs-literal">false</span>
</code></pre>
<h2 id="heading-redis-sentinel">Redis Sentinel</h2>
<pre><code class="lang-bash">helm install redis bitnami/redis \
    --<span class="hljs-built_in">set</span> auth.enabled=<span class="hljs-literal">true</span> \
    --<span class="hljs-built_in">set</span> auth.sentinel=<span class="hljs-literal">true</span> \
    --<span class="hljs-built_in">set</span> auth.password=STRONG-PASSWORD-12345678 \
    --<span class="hljs-built_in">set</span> fullnameOverride=redis \
    --<span class="hljs-built_in">set</span> master.persistence.enabled=<span class="hljs-literal">false</span> \
    --<span class="hljs-built_in">set</span> master.disableCommands=<span class="hljs-literal">false</span> \
    --<span class="hljs-built_in">set</span> sentinel.enabled=<span class="hljs-literal">true</span> \
    --<span class="hljs-built_in">set</span> sentinel.masterSet=mymaster \
    --<span class="hljs-built_in">set</span> replica.replicaCount=3 \
    --<span class="hljs-built_in">set</span> replica.persistence.enabled=<span class="hljs-literal">false</span>
</code></pre>
<h2 id="heading-redis-cluster">Redis Cluster</h2>
<pre><code class="lang-bash">helm install redis bitnami/redis \
    --<span class="hljs-built_in">set</span> architecture=replication \
    --<span class="hljs-built_in">set</span> fullnameOverride=redis \
    --<span class="hljs-built_in">set</span> auth.password=STRONG-PASSWORD-12345678 \
    --<span class="hljs-built_in">set</span> master.count=3 \
    --<span class="hljs-built_in">set</span> replica.replicaCount=3 \
    --<span class="hljs-built_in">set</span> master.persistence.enabled=<span class="hljs-literal">false</span> \
    --<span class="hljs-built_in">set</span> replica.persistence.enabled=<span class="hljs-literal">false</span> \
    --<span class="hljs-built_in">set</span> serviceAccount.create=<span class="hljs-literal">true</span> \
    --<span class="hljs-built_in">set</span> podSecurityPolicy.create=<span class="hljs-literal">false</span> \
    --<span class="hljs-built_in">set</span> master.podSecurityContext.enabled=<span class="hljs-literal">false</span> \
    --<span class="hljs-built_in">set</span> replica.podSecurityContext.enabled=<span class="hljs-literal">false</span> \
    --<span class="hljs-built_in">set</span> master.podAntiAffinityPreset=hard
</code></pre>
<h2 id="heading-elasticsearch-single-node">ElasticSearch Single Node</h2>
<pre><code class="lang-bash">helm install elasticsearch bitnami/elasticsearch \
    --<span class="hljs-built_in">set</span> master.replicas=1 \
    --<span class="hljs-built_in">set</span> fullnameOverride=elasticsearch \
    --<span class="hljs-built_in">set</span> master.persistence.enabled=<span class="hljs-literal">false</span>
</code></pre>
<h2 id="heading-elasticsearch-cluster">ElasticSearch Cluster</h2>
<pre><code class="lang-bash">helm install elasticsearch bitnami/elasticsearch \
    --<span class="hljs-built_in">set</span> master.replicas=3 \
    --<span class="hljs-built_in">set</span> data.replicaCount=3 \
    --<span class="hljs-built_in">set</span> coordinating.replicaCount=3 \
    --<span class="hljs-built_in">set</span> fullnameOverride=elasticsearch \
    --<span class="hljs-built_in">set</span> master.persistence.enabled=<span class="hljs-literal">false</span> \
    --<span class="hljs-built_in">set</span> data.persistence.enabled=<span class="hljs-literal">false</span> \
    --<span class="hljs-built_in">set</span> coordinating.persistence.enabled=<span class="hljs-literal">false</span>
</code></pre>
<h2 id="heading-kafka-and-zookeeper">Kafka and Zookeeper</h2>
<pre><code class="lang-bash">helm install kafka bitnami/kafka \
    --<span class="hljs-built_in">set</span> fullnameOverride=kafka \
    --<span class="hljs-built_in">set</span> podSecurityContext.enabled=<span class="hljs-literal">false</span> \
    --<span class="hljs-built_in">set</span> containerSecurityContext.enabled=<span class="hljs-literal">false</span> \
    --<span class="hljs-built_in">set</span> serviceAccount.create=<span class="hljs-literal">false</span> \
    --<span class="hljs-built_in">set</span> autoDiscovery.enabled=<span class="hljs-literal">false</span> \
    --<span class="hljs-built_in">set</span> persistence.enabled=<span class="hljs-literal">false</span> \
    --<span class="hljs-built_in">set</span> zookeeper.persistence.enabled=<span class="hljs-literal">false</span>
</code></pre>
<h2 id="heading-rabbitmq-single-node">RabbitMQ Single Node</h2>
<pre><code class="lang-bash">helm install rabbitmq bitnami/rabbitmq \
    --<span class="hljs-built_in">set</span> auth.username=rabbitmq \
    --<span class="hljs-built_in">set</span> auth.password=STRONG-PASSWORD-12345678 \
    --<span class="hljs-built_in">set</span> clustering.enabled=<span class="hljs-literal">false</span> \
    --<span class="hljs-built_in">set</span> podSecurityContext.enabled=<span class="hljs-literal">false</span> \
    --<span class="hljs-built_in">set</span> serviceAccount.create=<span class="hljs-literal">false</span> \
    --<span class="hljs-built_in">set</span> rbac.create=<span class="hljs-literal">false</span> \
    --<span class="hljs-built_in">set</span> persistence.selector=volume=rabbitmq
</code></pre>
<h2 id="heading-rabbitmq-cluster">RabbitMQ Cluster</h2>
<pre><code class="lang-bash">helm install rabbitmq-rdms bitnami/rabbitmq \
    --<span class="hljs-built_in">set</span> fullnameOverride=rabbitmq \
    --<span class="hljs-built_in">set</span> auth.username=rdms-rabbitmq-user \
    --<span class="hljs-built_in">set</span> auth.password=STRONG-PASSWORD-12345678 \
    --<span class="hljs-built_in">set</span> clustering.enabled=<span class="hljs-literal">true</span> \
    --<span class="hljs-built_in">set</span> replicaCount=3 \
    --<span class="hljs-built_in">set</span> serviceAccount.create=<span class="hljs-literal">true</span> \
    --<span class="hljs-built_in">set</span> podSecurityContext.enabled=<span class="hljs-literal">false</span> \
    --<span class="hljs-built_in">set</span> containerSecurityContext.enabled=<span class="hljs-literal">false</span> \
    --<span class="hljs-built_in">set</span> extraPlugins=<span class="hljs-string">'rabbitmq_shovel rabbitmq_shovel_management'</span>
</code></pre>
<h2 id="heading-traefik-daemonset">Traefik DaemonSet</h2>
<pre><code class="lang-bash">helm repo add traefik https://traefik.github.io/charts
helm install traefik traefik/traefik \
    --<span class="hljs-built_in">set</span> deployment.kind=DaemonSet \
    --<span class="hljs-built_in">set</span> env[0].name=TZ \
    --<span class="hljs-built_in">set</span> env[0].value=Asia/Dhaka \
    --<span class="hljs-built_in">set</span> ingressRoute.dashboard.enabled=<span class="hljs-literal">false</span> \
    --<span class="hljs-built_in">set</span> ports.web.port=80 \
    --<span class="hljs-built_in">set</span> ports.websecure.port=443 \
    --<span class="hljs-built_in">set</span> service.type=LoadBalancer
</code></pre>
]]></content:encoded></item><item><title><![CDATA[Generate SSL Certificates with Traefik and Let's Encrypt]]></title><description><![CDATA[Traefik is awesome. It can use both file-based certificates and Kubernetes TLS Secret objects as SSL store.
Even it is possible to use SSL certificates generated by Let's Encrypt (privkey.pem and fullchain.pem from /etc/letsencrypt/live direcotory) b...]]></description><link>https://blog.mdminhazulhaque.io/generate-ssl-certificates-with-traefik-and-lets-encrypt</link><guid isPermaLink="true">https://blog.mdminhazulhaque.io/generate-ssl-certificates-with-traefik-and-lets-encrypt</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[Traefik]]></category><category><![CDATA[Let's Encrypt]]></category><category><![CDATA[SSL]]></category><category><![CDATA[acme]]></category><dc:creator><![CDATA[Md. Minhazul Haque]]></dc:creator><pubDate>Thu, 01 Jun 2023 19:04:02 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/Xbh_OGLRfUM/upload/f8ae66330458c79c6b8c014041cbb22f.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Traefik is awesome. It can use both file-based certificates and Kubernetes TLS Secret objects as SSL store.</p>
<p>It is even possible to use SSL certificates generated by Let's Encrypt (<code>privkey.pem</code> and <code>fullchain.pem</code> from the <code>/etc/letsencrypt/live</code> directory) by creating a TLS Secret from these files.</p>
<p>But what if you want to generate certificates using Traefik itself? Luckily, Traefik has full support for all Let's Encrypt challenge types (HTTP-01, TLS-ALPN-01, and DNS-01 verification). Let's jump in.</p>
<p>First, ensure that Traefik has connectivity to the internet. Without internet connectivity, Traefik will not be able to send certificate renewal requests to ACME servers.</p>
<p>Second, if you have multiple replicas of Traefik running in the cluster (DaemonSet or Deployment), reduce the replicas to 1. This lowers the chance of failed verification attempts and of being rate-limited from issuing certificates for a week.</p>
<p>Third, Traefik stores the keys and certificates in a JSON file named <code>acme.json</code>. This file must persist across Traefik restarts. Otherwise, Traefik will try to re-issue the certificate, and your domain could be rate-limited by Let's Encrypt for a while.</p>
<p>To enable an ACME-based resolver, add the <code>email</code>, <code>storage</code>, and challenge parameters to Traefik's command line. You are free to choose any resolver name; I used <code>letsencrypt</code> in this case.</p>
<pre><code class="lang-yaml"><span class="hljs-string">...</span>
<span class="hljs-bullet">-</span> <span class="hljs-attr">args:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-string">--log.level=DEBUG</span>
  <span class="hljs-bullet">-</span> <span class="hljs-string">--entrypoints.web.address=:80</span>
  <span class="hljs-bullet">-</span> <span class="hljs-string">--entrypoints.websecure.address=:443</span>
  <span class="hljs-bullet">-</span> <span class="hljs-string">--certificatesresolvers.letsencrypt.acme.email=me@example.com</span>
  <span class="hljs-bullet">-</span> <span class="hljs-string">--certificatesresolvers.letsencrypt.acme.storage=/etc/traefik/acme/acme.json</span>
  <span class="hljs-bullet">-</span> <span class="hljs-string">--certificatesresolvers.letsencrypt.acme.httpchallenge.entrypoint=web</span>
  <span class="hljs-bullet">-</span> <span class="hljs-string">--serversTransport.insecureSkipVerify=true</span>
  <span class="hljs-attr">image:</span> <span class="hljs-string">traefik:v2.9</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">traefik</span>
<span class="hljs-string">...</span>
</code></pre>
<p>Then let's mount the path <code>/etc/traefik/acme</code> as a persistent volume. You can use <code>hostPath</code>, <code>nfs</code>, <code>glusterfs</code>, or anything compatible with Kubernetes. But make sure that the mount path is shared across all Traefik pods running in the cluster.</p>
<pre><code class="lang-yaml">  <span class="hljs-attr">volumeMounts:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">acme</span>
    <span class="hljs-attr">mountPath:</span> <span class="hljs-string">/etc/traefik/acme</span>
<span class="hljs-attr">volumes:</span>
<span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">acme</span>
  <span class="hljs-attr">nfs:</span>
    <span class="hljs-attr">path:</span> <span class="hljs-string">/data/traefik-system/acme</span>
    <span class="hljs-attr">server:</span> <span class="hljs-number">192.168</span><span class="hljs-number">.100</span><span class="hljs-number">.200</span>
</code></pre>
<p>Once the Traefik pods are in the <code>Running</code> state, let's create an <code>IngressRoute</code> object that uses the <code>letsencrypt</code> resolver for its TLS certificate. Here is one example.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">traefik.containo.us/v1alpha1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">IngressRoute</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">my-app</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">entryPoints:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-string">websecure</span>
  <span class="hljs-attr">routes:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">kind:</span> <span class="hljs-string">Rule</span>
    <span class="hljs-attr">match:</span> <span class="hljs-string">Host(`my-app.com`)</span>
    <span class="hljs-attr">services:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">my-app</span>
      <span class="hljs-attr">port:</span> <span class="hljs-number">80</span>
  <span class="hljs-attr">tls:</span>
    <span class="hljs-attr">certResolver:</span> <span class="hljs-string">letsencrypt</span>
</code></pre>
<p>Once this resource is applied, Traefik will try to issue a certificate using the ACME API and will disable all other ingress hosts for a few seconds. Upon successful issuance, the <code>acme.json</code> file will be generated.</p>
<p>Please note that wildcard certificates can only be generated using DNS-based verification, so you have to use the <code>sans</code> section in the <code>tls</code> block of the <code>IngressRoute</code> carefully.</p>
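<p>For example, a wildcard certificate issued through a DNS-01 capable resolver would declare the extra names explicitly in the <code>tls</code> block (the resolver name and domains below are placeholders):</p>
<pre><code class="lang-yaml">  <span class="hljs-attr">tls:</span>
    <span class="hljs-attr">certResolver:</span> <span class="hljs-string">letsencrypt</span>
    <span class="hljs-attr">domains:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">main:</span> <span class="hljs-string">my-app.com</span>
      <span class="hljs-attr">sans:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"*.my-app.com"</span>
</code></pre>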
<p>I hope this post helps you all.</p>
]]></content:encoded></item><item><title><![CDATA[Bypass Proxy for Composer/NPM/Apt]]></title><description><![CDATA[While working in a restricted on-prem network, popular package managers like Composer or NPM may not properly work. Thanks to the proxy filtering technologies at the internet gateway end.
These package managers often download files over an unencrypte...]]></description><link>https://blog.mdminhazulhaque.io/bypass-proxy-for-composer-npm-apt</link><guid isPermaLink="true">https://blog.mdminhazulhaque.io/bypass-proxy-for-composer-npm-apt</guid><category><![CDATA[proxy]]></category><category><![CDATA[composer]]></category><category><![CDATA[npm]]></category><category><![CDATA[apt]]></category><category><![CDATA[bluecoat]]></category><dc:creator><![CDATA[Md. Minhazul Haque]]></dc:creator><pubDate>Fri, 31 Mar 2023 13:28:50 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/BEf9hE5gKWM/upload/3482e9fb13531bf3d0d55051592556e0.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>While working in a restricted on-prem network, popular package managers like Composer or NPM may not properly work. Thanks to the proxy filtering technologies at the internet gateway end.</p>
<p>These package managers often download files over an unencrypted HTTP connection and verify the binaries using a checksum or GPG signature. However, in some cases, it is possible to bypass the filtering by enforcing HTTPS so these tools use an encrypted connection to fetch and download artifacts.</p>
<h3 id="heading-composer">Composer</h3>
<pre><code class="lang-bash">composer config --global disable-tls <span class="hljs-literal">false</span>
composer config --global secure-http <span class="hljs-literal">true</span>
</code></pre>
<h3 id="heading-npm">NPM</h3>
<pre><code class="lang-bash">npm config <span class="hljs-built_in">set</span> registry https://registry.npmjs.org/
npm config <span class="hljs-built_in">set</span> strict-ssl <span class="hljs-literal">false</span>
</code></pre>
<h3 id="heading-apt">Apt</h3>
<pre><code class="lang-bash"><span class="hljs-comment"># on older releases, install the apt-transport-https package first</span>
sed -i <span class="hljs-string">'s|http://|https://|g'</span> /etc/apt/sources.list
</code></pre>
]]></content:encoded></item><item><title><![CDATA[Conversion between AWS Secrets Manager and Kubernetes Secrets]]></title><description><![CDATA[Kubernetes Secret to AWS Secret Manager
AWS SecretManager requires the secrets to be in decoded format. However, Kubernetes Secrets are encoded in base64 and require conversion. We can use jq to do this for us.
Once the secrets are decoded, we can pa...]]></description><link>https://blog.mdminhazulhaque.io/conversion-between-aws-secrets-manager-and-kubernetes-secrets</link><guid isPermaLink="true">https://blog.mdminhazulhaque.io/conversion-between-aws-secrets-manager-and-kubernetes-secrets</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[AWS]]></category><category><![CDATA[secrets]]></category><category><![CDATA[aws cli]]></category><category><![CDATA[conversion]]></category><dc:creator><![CDATA[Md. Minhazul Haque]]></dc:creator><pubDate>Sun, 12 Feb 2023 15:08:40 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/1bjsASjhfkE/upload/281ff1f7f0fd58242e6075ff9cd5467f.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-kubernetes-secret-to-aws-secret-manager">Kubernetes Secret to AWS Secret Manager</h3>
<p>AWS Secrets Manager requires the secrets to be in decoded format. However, Kubernetes Secrets are encoded in base64 and require conversion. We can use <code>jq</code> to do this for us.</p>
<p>Once the secrets are decoded, we can pass this key-value pair to AWS CLI to create a Secret Manager object.</p>
<pre><code class="lang-bash">kubectl get secret app-config -o jsonpath=<span class="hljs-string">'{.data}'</span> | jq -r <span class="hljs-string">'reduce to_entries[] as {$key, $value} (null; .[$key] = ($value|@base64d))'</span> &gt; secret.json

aws secretsmanager create-secret --name app-config --secret-string file://secret.json
</code></pre>
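<p>As a quick sanity check, here is the same <code>jq</code> expression applied to a sample base64-encoded payload (the keys and values are made up for illustration):</p>
<pre><code class="lang-bash">echo <span class="hljs-string">'{"username":"YWRtaW4=","password":"cGFzcw=="}'</span> \
    | jq -c <span class="hljs-string">'reduce to_entries[] as {$key, $value} (null; .[$key] = ($value|@base64d))'</span>
<span class="hljs-comment"># {"username":"admin","password":"pass"}</span>
</code></pre>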
<h3 id="heading-aws-secret-manager-to-kubernetes-secret">AWS Secret Manager to Kubernetes Secret</h3>
<p>AWS CLI can fetch decoded secrets from Secrets Manager. However, <code>kubectl</code> expects the secrets in <code>env</code> format if there are plenty of them. Once again, we can use <code>jq</code> to map them into <code>env</code> format, and later this <code>env</code> file can be used to create the Kubernetes Secret.</p>
<pre><code class="lang-bash">aws secretsmanager get-secret-value --secret-id app-config | jq -r <span class="hljs-string">'.SecretString | fromjson | to_entries[] | "\(.key)=\(.value)"'</span> &gt; secret.env

kubectl create secret generic app-config --from-env-file=secret.env
</code></pre>
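<p>You can sanity check the <code>jq</code> transformation locally with a payload shaped like a <code>get-secret-value</code> response (the keys and values are made up; note the <code>\(...)</code> string interpolation syntax):</p>
<pre><code class="lang-bash">printf <span class="hljs-string">'%s'</span> <span class="hljs-string">'{"SecretString": "{\"DB_HOST\": \"db.local\", \"DB_PORT\": \"5432\"}"}'</span> \
    | jq -r <span class="hljs-string">'.SecretString | fromjson | to_entries[] | "\(.key)=\(.value)"'</span>
<span class="hljs-comment"># DB_HOST=db.local</span>
<span class="hljs-comment"># DB_PORT=5432</span>
</code></pre>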
]]></content:encoded></item><item><title><![CDATA[Sync HashiCorp Vault Secrets into Kubernetes Native Secrets with ESO]]></title><description><![CDATA[External Secrets Operator (ESO) does exactly what the name suggests. It can sync Secrets from remote providers like HashiCorp Vault, AWS SecretManager etc.
Secrets from Vault can be imported into the Kubernetes cluster using csi-secrets-store and has...]]></description><link>https://blog.mdminhazulhaque.io/sync-hashicorp-vault-secrets-into-kubernetes-native-secrets-with-eso</link><guid isPermaLink="true">https://blog.mdminhazulhaque.io/sync-hashicorp-vault-secrets-into-kubernetes-native-secrets-with-eso</guid><category><![CDATA[hashicorp]]></category><category><![CDATA[Vault]]></category><category><![CDATA[Kubernetes]]></category><dc:creator><![CDATA[Md. Minhazul Haque]]></dc:creator><pubDate>Fri, 09 Dec 2022 22:09:12 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1670882906163/N0Sj-jZLJ.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><a target="_blank" href="https://external-secrets.io/">External Secrets Operator</a> (ESO) does exactly what the name suggests. It can sync Secrets from remote providers like HashiCorp Vault, AWS SecretManager etc.</p>
<p>Secrets from Vault can be imported into the Kubernetes cluster using <a target="_blank" href="https://secrets-store-csi-driver.sigs.k8s.io/topics/sync-as-kubernetes-secret.html">csi-secrets-store</a> and the <a target="_blank" href="https://github.com/hashicorp/vault-helm">hashicorp/vault</a> provider. But the CRD configuration seems a bit complex for noobs like me, so I ended up using ESO, which seems the easiest.</p>
<p>First, you need to install ESO with Helm inside the cluster. I am using <code>eso-system</code> as the namespace but feel free to choose another.</p>
<pre><code class="lang-bash">helm repo add external-secrets https://charts.external-secrets.io
helm install external-secrets external-secrets/external-secrets -n eso-system --create-namespace
</code></pre>
<p>Once the release is installed, three pods should appear; wait until they are all in the <strong>Running</strong> and <strong>Ready</strong> state.</p>
<pre><code class="lang-plaintext">NAME                                                READY   STATUS
external-secrets-cert-controller-655d7bb477-kp2ck   1/1     Running
external-secrets-f9bc79d45-xk9lm                    1/1     Running
external-secrets-webhook-5b4779cbf8-mspsl           1/1     Running
</code></pre>
<p>To sync Secrets from Vault, we need to authorize the ESO service account to make valid calls to the Vault APIs. I used a custom Token with a policy that allows only read access to the Vault KV engine. We need to pass the Token to ESO using a Kubernetes Secret.</p>
<pre><code class="lang-bash">kubectl create secret generic vault-token -n eso-system --from-literal=token=hvs.9lownuRz5bZO221dgkKXG5hQB
</code></pre>
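<p>For reference, a minimal read-only policy for such a Token might look like the following sketch. The policy name <code>eso-readonly</code> and the <code>kv/</code> mount path are assumptions; adjust them to your Vault setup.</p>
<pre><code class="lang-bash"># Hypothetical read-only policy for the kv mount (name and path are placeholders)
vault policy write eso-readonly - &lt;&lt;EOF
path "kv/*" {
  capabilities = ["read", "list"]
}
EOF

# Issue a Token bound to that policy
vault token create -policy=eso-readonly
</code></pre>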
<p>Once the Vault Token is ready, we can create a Vault Secret Store and tell ESO to establish a connection with it.</p>
<p>You can also create multiple <strong>SecretStore</strong> objects across namespaces. In that case, each <strong>ExternalSecret</strong> object looks up its <strong>SecretStore</strong> in the same namespace.</p>
<p>In my case, I used a single <strong>ClusterSecretStore</strong> and pointed the <strong>ExternalSecret</strong> objects in multiple namespaces at that one Store, to reduce complexity.</p>
<p>Let's create our first ClusterSecretStore. Apply the following using <code>kubectl apply -f</code></p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">external-secrets.io/v1beta1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">ClusterSecretStore</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">vault-backend</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">provider:</span>
    <span class="hljs-attr">vault:</span>
      <span class="hljs-attr">server:</span> <span class="hljs-string">"https://vault.your.host:8200"</span>
      <span class="hljs-attr">path:</span> <span class="hljs-string">"kv"</span>
      <span class="hljs-attr">version:</span> <span class="hljs-string">"v1"</span>
      <span class="hljs-attr">auth:</span>
        <span class="hljs-attr">tokenSecretRef:</span>
          <span class="hljs-attr">name:</span> <span class="hljs-string">"vault-token"</span>
          <span class="hljs-attr">namespace:</span> <span class="hljs-string">eso-system</span>
          <span class="hljs-attr">key:</span> <span class="hljs-string">"token"</span>
</code></pre>
<p>Now check whether the Secret Store is ready to be used. The <code>READY</code> column shows <code>True</code> if the Vault Token is valid.</p>
<pre><code class="lang-plaintext">$ kubectl get clustersecretstores 
NAME            AGE   STATUS   READY
vault-backend   12h   Valid    True
</code></pre>
<p>When the Secret Store is ready, it's time to put some values in Vault. You can use the Vault UI; in my case, I used the CLI to enable the Vault <strong>v1</strong> KV engine and add some values there.</p>
<pre><code class="lang-bash">$ vault kv put kv/webapp-prod SPRING_DATASOURCE_USER=john SPRING_DATASOURCE_PASSWORD=1234qwer
$ vault kv get kv/webapp-prod
=============== Data ===============
Key                           Value
---                           -----
SPRING_DATASOURCE_PASSWORD    1234qwer
SPRING_DATASOURCE_USER        john
</code></pre>
<p>Now let's create an ExternalSecret object inside the cluster which will tell SecretStore to sync the above-mentioned secrets from Vault.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">external-secrets.io/v1beta1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">ExternalSecret</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">webapp-secret</span>
  <span class="hljs-attr">namespace:</span> <span class="hljs-string">webapp-prod</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">refreshInterval:</span> <span class="hljs-string">30s</span>
  <span class="hljs-attr">secretStoreRef:</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">vault-backend</span>
    <span class="hljs-attr">kind:</span> <span class="hljs-string">ClusterSecretStore</span>
  <span class="hljs-attr">target:</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">webapp-secret</span>
    <span class="hljs-attr">creationPolicy:</span> <span class="hljs-string">Owner</span>
  <span class="hljs-attr">dataFrom:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">extract:</span>
      <span class="hljs-attr">key:</span> <span class="hljs-string">kv/webapp-prod</span>
</code></pre>
<p>I have used the <code>extract</code> option, which fetches all KV pairs from the given Vault path. Also, I used the Kubernetes namespace name as the Vault KV path to easily map multiple secrets into multiple namespaces.</p>
<p>Anyway, let's check if the ExternalSecret syncing is successful.</p>
<pre><code class="lang-plaintext">$ kubectl get externalsecret -n webapp-prod
NAME            STORE           REFRESH   STATUS         READY
webapp-secret   vault-backend   30s       SecretSynced   True
</code></pre>
<p>The state <code>SecretSynced</code> indicates that ESO has successfully fetched the secrets from Vault as per the instruction. Let's check if the Kubernetes native Secret object is created.</p>
<pre><code class="lang-plaintext">$ kubectl get secret webapp-secret -n webapp-prod -o yaml | head -4
apiVersion: v1
data:
  SPRING_DATASOURCE_PASSWORD: MTIzNHF3ZXI=
  SPRING_DATASOURCE_USER: am9obg==
</code></pre>
<p>If you decode them with <code>base64 -d</code>, you will notice that the values are the same as they are in Vault. Feel free to add more KV pairs under the same path; the new values should appear in the Kubernetes Secret as well.</p>
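<p>To consume the synced Secret from a workload, you can reference it with <code>envFrom</code> in a Deployment. A minimal sketch follows; the container image and the <code>app: webapp</code> labels are placeholders, while <code>webapp-secret</code> is the Secret created by ESO above.</p>
<pre><code class="lang-yaml">apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
  namespace: webapp-prod
spec:
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: webapp:latest # placeholder image
        envFrom:
        - secretRef:
            name: webapp-secret # the Secret synced by ESO
</code></pre>
<p>With this in place, the Vault values surface as environment variables such as <code>SPRING_DATASOURCE_USER</code> inside the container.</p>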
]]></content:encoded></item><item><title><![CDATA[etcd Snapshot Too Big?]]></title><description><![CDATA[Assuming you are using a self-managed Kubernetes cluster, taking an etcd backup should be part of your cluster recovery strategy. The following command is the simplest one for taking an etcd snapshot and saving it to disk:
export ETCDCTL_API=3

times...]]></description><link>https://blog.mdminhazulhaque.io/etcd-snapshot-too-big</link><guid isPermaLink="true">https://blog.mdminhazulhaque.io/etcd-snapshot-too-big</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[Backup]]></category><category><![CDATA[etcd]]></category><dc:creator><![CDATA[Md. Minhazul Haque]]></dc:creator><pubDate>Fri, 09 Sep 2022 06:11:07 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/unsplash/lRoX0shwjUQ/upload/v1662703796173/cns0zZ1qb.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Assuming you are using a self-managed Kubernetes cluster, taking an etcd backup should be part of your cluster recovery strategy. The following command is the simplest one for taking an etcd snapshot and saving it to disk:</p>
<pre><code class="lang-bash"><span class="hljs-built_in">export</span> ETCDCTL_API=3

timestamp=`date +%Y%m%d-%H%M%S`

etcdctl --endpoints 127.0.0.1:2379 \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  snapshot save /data/etcd-backup/snapshot-<span class="hljs-variable">$timestamp</span>.db
</code></pre>
<p>However, sometimes you may find the backup size has exceeded several hundred megabytes, which is substantial for a simple cluster containing 10-20 active namespaces and hundreds of pods. This occurs due to the internal fragmentation of the etcd database files. The backup size (and disk space used in <code>/var/lib/etcd</code>) can be reduced by issuing a defragmentation request to all the etcd members. The following command should be sufficient to accomplish this:</p>
<pre><code class="lang-bash"><span class="hljs-built_in">export</span> ETCDCTL_API=3

etcdctl --endpoints 127.0.0.1:2379 \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  defrag --cluster=<span class="hljs-literal">true</span>
</code></pre>
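<p>To verify the effect, you can compare the reported database size before and after the defragmentation with <code>etcdctl endpoint status</code> (assuming the same certificate paths as above):</p>
<pre><code class="lang-bash">export ETCDCTL_API=3

# The DB SIZE column should shrink after a successful defrag
etcdctl --endpoints 127.0.0.1:2379 \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  endpoint status --write-out=table
</code></pre>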
<p>Learn more about etcd defragmentation <a target="_blank" href="https://etcd.io/docs/v3.2/op-guide/maintenance/#defragmentation">here</a>.</p>
]]></content:encoded></item></channel></rss>