<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>For The Love Of Cloud!</title>
    <link>https://blog.chelo.dev/</link>
    <description></description>
    <pubDate>Tue, 07 Apr 2026 14:04:43 -0600</pubDate>
    <item>
      <title>Exporting Terraform Plan to Excel with Python</title>
      <link>https://blog.chelo.dev/exporting-terraform-plan-to-excel-with-python</link>
      <description>&lt;![CDATA[#terraform #python #devops&#xA;&#xA;Recently, I was working with an Azure design for a customer, and I needed to compare what my Terraform scripts had planned against my customer&#39;s Excel template.]]&gt;</description>
      <content:encoded><![CDATA[<p><a href="https://blog.chelo.dev/tag:terraform" class="hashtag"><span>#</span><span class="p-category">terraform</span></a> <a href="https://blog.chelo.dev/tag:python" class="hashtag"><span>#</span><span class="p-category">python</span></a> <a href="https://blog.chelo.dev/tag:devops" class="hashtag"><span>#</span><span class="p-category">devops</span></a></p>

<p>Recently, I was working with an Azure design for a customer, and I needed to compare what my Terraform scripts had planned against my customer&#39;s Excel template.</p>

<p><a href="https://github.com/peanutsguy/terraformplan2excel">GitHub Repository</a></p>

<h3 id="table-of-contents">Table of Contents</h3>
<ul><li><a href="#terraform-prep">Terraform Prep</a></li>
<li><a href="#python-script">Python Script</a>
<ul><li><a href="#parsing-arguments">Parsing arguments</a></li>
<li><a href="#executing-terraform">Executing Terraform</a></li>
<li><a href="#parsing-the-terraform-plan">Parsing the Terraform plan</a></li>
<li><a href="#final-script">Final Script</a></li></ul></li>
<li><a href="#usage">Usage</a></li>
<li><a href="#final-thoughts">Final thoughts</a></li></ul>

<p>If we were talking about a few resources with static names and properties, it wouldn&#39;t be too hard to comb through the Terraform scripts and compare them against the Excel template. However, the customer&#39;s design had 60+ resources, distributed across 15 Resource Groups.</p>

<p>So, I decided to do everything harder (now) so that it was easier (later), and started writing a Python script that would create a Terraform Plan, parse it and then generate an Excel file with all the resources grouped by type in different worksheets.</p>

<h2 id="terraform-prep">Terraform Prep</h2>

<p>First of all, since my customer was using Azure as the Terraform backend and deploying via an Azure DevOps agent, I had to do some prepping in the form of a <a href="https://developer.hashicorp.com/terraform/language/files/override">Terraform override file</a> so that I could execute my plan locally and get a full rundown as if there were no deployed resources.</p>

<p>Since it was mainly the backend and provider that I had to override, I named my file <code>backend_override.tf</code> and placed it in the same folder as the rest of my Terraform scripts.</p>

<p>The file ended looking something like this:</p>

<pre><code class="language-hcl">terraform {
  backend &#34;local&#34; {
    path = &#34;./.local-state&#34;
  }
}

provider &#34;azurerm&#34; {
  features {}
  client_id       = &#34;00000000-0000-0000-0000-000000000000&#34;
  client_secret   = &#34;MySuperSecretPassword&#34;
  tenant_id       = &#34;10000000-0000-0000-0000-000000000000&#34;
  subscription_id = &#34;20000000-0000-0000-0000-000000000000&#34;
}
</code></pre>

<h2 id="python-script">Python Script</h2>

<p>The script has two main parts:</p>

<ul><li>Terraform planning</li>
<li>Plan flattener and Excel file creation</li></ul>

<h3 id="parsing-arguments">Parsing arguments</h3>

<p>First, I had to write the base for the script. The script would receive a couple of arguments:</p>

<table>
<thead>
<tr>
<th>Argument</th>
<th>Description</th>
<th>Optional</th>
<th>Example</th>
</tr>
</thead>

<tbody>
<tr>
<td><code>--tfpath</code></td>
<td>The path to the folder that contains the terraform files</td>
<td><code>false</code></td>
<td><code>--tfpath &#34;terraform/&#34;</code></td>
</tr>

<tr>
<td><code>--set</code></td>
<td>The variables that would normally be passed via the command line or additional <code>tfvars</code> files</td>
<td><code>true</code></td>
<td><code>--set location=&#34;Central US&#34; testing=true</code></td>
</tr>
</tbody>
</table>

<p>These arguments are parsed using <code>argparse</code>, saving the Terraform path to <code>tfpath</code> and the variables as a <code>dict</code> in <code>vars</code>.</p>

<blockquote><p>I based this part on Sam Starkman&#39;s <a href="https://towardsdatascience.com/a-simple-guide-to-command-line-arguments-with-argparse-6824c30ab1c3">article</a> and Laurent Franceschetti&#39;s <a href="https://gist.github.com/fralau/061a4f6c13251367ef1d9a9a99fb3e8d">gist</a>.</p></blockquote>

<pre><code class="language-python">import argparse

def parse_var(s):
    items = s.split(&#39;=&#39;)
    key = items[0].strip()
    # Default to an empty value so a bare KEY (no &#39;=VALUE&#39;) can&#39;t raise UnboundLocalError
    value = &#39;&#39;
    if len(items) &gt; 1:
        value = &#39;=&#39;.join(items[1:])
    return (key, value)


def parse_vars(items):
    d = {}

    if items:
        for item in items:
            key, value = parse_var(item)
            d[key] = value
    return d

vars = {}
parser = argparse.ArgumentParser(description=&#34;...&#34;)
parser.add_argument(&#34;--set&#34;,
                        metavar=&#34;KEY=VALUE&#34;,
                        nargs=&#39;+&#39;)
parser.add_argument(&#34;--tfpath&#34;,
                    type=str,
                    required=True)
args = parser.parse_args()
vars = parse_vars(args.set)

tfpath = args.tfpath
</code></pre>
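<p>As a quick sanity check (a hypothetical example, not part of the original script), the parsing functions turn <code>KEY=VALUE</code> pairs into a <code>dict</code>, keeping any extra <code>=</code> signs inside the value:</p>

<pre><code class="language-python">def parse_var(s):
    # Split on '=' and rejoin the tail, so values may themselves contain '='
    items = s.split('=')
    key = items[0].strip()
    value = '='.join(items[1:]) if len(items) > 1 else ''
    return (key, value)

def parse_vars(items):
    d = {}
    if items:
        for item in items:
            key, value = parse_var(item)
            d[key] = value
    return d

# Mirrors: --set location="Central US" testing=true tag=env=dev
print(parse_vars(['location=Central US', 'testing=true', 'tag=env=dev']))
# {'location': 'Central US', 'testing': 'true', 'tag': 'env=dev'}
</code></pre>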

<h3 id="executing-terraform">Executing Terraform</h3>

<p>To execute Terraform, I used the <a href="https://github.com/beelit94/python-terraform/blob/master/README.md"><code>python_terraform</code></a> library. I generated the plan, passing <code>vars</code> as the value for the <code>var</code> argument, saved it to <code>plan.tfplan</code>, and then read it back as JSON into the variable <code>plan</code>. The code to generate the Terraform plan is really simple:</p>

<pre><code class="language-python">from python_terraform import *
import json

tf = Terraform(working_dir=tfpath)
tf.init()
tf.plan(out=&#34;plan.tfplan&#34;,var=vars)
json_data = tf.show(&#34;plan.tfplan&#34;,json=IsFlagged)

plan = json.loads(json_data[1])
</code></pre>

<h3 id="parsing-the-terraform-plan">Parsing the Terraform plan</h3>

<p>Finally, I needed to parse the plan, specifically <code>resource_changes</code>. Since it contained everything from <code>null</code> and <code>false</code> all the way to <code>list</code> and <code>dict</code> values, I decided to write a recursive function (<code>flattener</code>) that would iterate through all the resources.</p>

<blockquote><p>The bit where I get the current directory, for the Excel file name, is based on <a href="https://stackoverflow.com/a/10293159">vinithravit&#39;s answer</a> over at StackOverflow.</p></blockquote>

<pre><code class="language-python">import json
import os
import xlsxwriter

def ofname(tfpath=&#34;.&#34;,extension=&#34;.xlsx&#34;):
    os.chdir(tfpath)
    str1=os.getcwd()
    str2=str1.split(&#39;/&#39;)
    n=len(str2)
    name = str2[n-1] + extension
    return name

def flattener(jdata,row,column,worksheet,itemc):
    if isinstance(jdata,dict):
        for k,v in jdata.items():
            if isinstance(v,dict) or isinstance(v,list):
                worksheet.write(row,column,k)
                if isinstance(v,list) and len(v) == 1 :
                    row = flattener(v[0],row,column+1,worksheet,len(v))
                else:
                    row = flattener(v,row,column+1,worksheet,len(v))
            else:
                worksheet.write(row,column,k)
                worksheet.write(row,column+1,v)
                row = row + 1
    else:
        for v in jdata:
            if isinstance(v,dict) or isinstance(v,list):
                row = flattener(v,row,column,worksheet,len(v))
            else:
                worksheet.write(row,column,v)
                row = row + 1
    return row

classed = {}

for rc in plan[&#39;resource_changes&#39;]:
    classed[rc[&#34;type&#34;]] = {}

for rc in plan[&#39;resource_changes&#39;]:
    rc_dict = rc[&#39;change&#39;][&#39;after&#39;]
    rc_dict[&#39;address&#39;] = rc[&#39;address&#39;]
    rc_dict[&#39;type&#39;] = rc[&#39;type&#39;]
    classed[rc[&#34;type&#34;]].update({rc[&#39;address&#39;]: rc_dict})

workbook = xlsxwriter.Workbook(ofname(tfpath))
cell_format = workbook.add_format()
cell_format.set_text_wrap()
cell_format.set_align(&#34;vcenter&#34;)
for type,data in classed.items():
    sheet = type[:31]
    worksheet = workbook.add_worksheet(sheet)
    worksheet.set_column(0,1000,42,cell_format)
    flattener(data,1,0,worksheet,0)
workbook.close()
</code></pre>
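<p>To see what <code>flattener</code> produces without opening the workbook, here is a small stand-in for the worksheet (a hypothetical <code>StubSheet</code> class, purely for illustration) run over a plan-shaped dict with made-up values:</p>

<pre><code class="language-python">def flattener(jdata, row, column, worksheet, itemc):
    # Same recursion as above: dict keys go in one column, scalar values in the next
    if isinstance(jdata, dict):
        for k, v in jdata.items():
            if isinstance(v, (dict, list)):
                worksheet.write(row, column, k)
                if isinstance(v, list) and len(v) == 1:
                    row = flattener(v[0], row, column + 1, worksheet, len(v))
                else:
                    row = flattener(v, row, column + 1, worksheet, len(v))
            else:
                worksheet.write(row, column, k)
                worksheet.write(row, column + 1, v)
                row = row + 1
    else:
        for v in jdata:
            if isinstance(v, (dict, list)):
                row = flattener(v, row, column, worksheet, len(v))
            else:
                worksheet.write(row, column, v)
                row = row + 1
    return row

class StubSheet:
    # Records (row, column, value) tuples instead of writing to a real worksheet
    def __init__(self):
        self.cells = []
    def write(self, row, column, value):
        self.cells.append((row, column, value))

sheet = StubSheet()
data = {'azurerm_resource_group.rg': {'name': 'rg-demo', 'tags': [{'env': 'prod'}]}}
flattener(data, 0, 0, sheet, 0)
print(sheet.cells)
# [(0, 0, 'azurerm_resource_group.rg'), (0, 1, 'name'), (0, 2, 'rg-demo'), (1, 1, 'tags'), (1, 2, 'env'), (1, 3, 'prod')]
</code></pre>

<p>Each nesting level shifts one column to the right, and scalar key/value pairs advance the row, which is what makes the Excel sheets readable.</p>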

<h3 id="final-script">Final Script</h3>

<p>Putting everything together, plus a couple of minor adjustments (the addition of <code>tfcheck</code>, a small bash script that I wrote to validate Terraform scripts, and an <code>rm plan.tfplan</code> step for cleaning up), the script ends up as follows:</p>

<pre><code class="language-python">from python_terraform import *
import json
import os
import xlsxwriter
import argparse

def parse_var(s):
    items = s.split(&#39;=&#39;)
    key = items[0].strip()
    # Default to an empty value so a bare KEY (no &#39;=VALUE&#39;) can&#39;t raise UnboundLocalError
    value = &#39;&#39;
    if len(items) &gt; 1:
        value = &#39;=&#39;.join(items[1:])
    return (key, value)


def parse_vars(items):
    d = {}

    if items:
        for item in items:
            key, value = parse_var(item)
            d[key] = value
    return d

def ofname(tfpath=&#34;.&#34;,extension=&#34;.xlsx&#34;):
    os.chdir(tfpath)
    str1=os.getcwd()
    str2=str1.split(&#39;/&#39;)
    n=len(str2)
    name = str2[n-1] + extension
    return name

def flattener(jdata,row,column,worksheet,itemc):
    if isinstance(jdata,dict):
        for k,v in jdata.items():
            if isinstance(v,dict) or isinstance(v,list):
                worksheet.write(row,column,k)
                if isinstance(v,list) and len(v) == 1 :
                    row = flattener(v[0],row,column+1,worksheet,len(v))
                else:
                    row = flattener(v,row,column+1,worksheet,len(v))
            else:
                worksheet.write(row,column,k)
                worksheet.write(row,column+1,v)
                row = row + 1
    else:
        for v in jdata:
            if isinstance(v,dict) or isinstance(v,list):
                row = flattener(v,row,column,worksheet,len(v))
            else:
                worksheet.write(row,column,v)
                row = row + 1
    return row

vars = {}
parser = argparse.ArgumentParser(description=&#34;...&#34;)
parser.add_argument(&#34;--set&#34;,
                        metavar=&#34;KEY=VALUE&#34;,
                        nargs=&#39;+&#39;)
parser.add_argument(&#34;--tfpath&#34;,
                    type=str,
                    required=True)
args = parser.parse_args()
vars = parse_vars(args.set)

tfpath = args.tfpath

tf = Terraform(working_dir=tfpath)
tf.init()
tf.plan(out=&#34;plan.tfplan&#34;,var=vars)
json_data = tf.show(&#34;plan.tfplan&#34;,json=IsFlagged)

plan = json.loads(json_data[1])

classed = {}

for rc in plan[&#39;resource_changes&#39;]:
    classed[rc[&#34;type&#34;]] = {}

for rc in plan[&#39;resource_changes&#39;]:
    rc_dict = rc[&#39;change&#39;][&#39;after&#39;]
    rc_dict[&#39;address&#39;] = rc[&#39;address&#39;]
    rc_dict[&#39;type&#39;] = rc[&#39;type&#39;]
    classed[rc[&#34;type&#34;]].update({rc[&#39;address&#39;]: rc_dict})

workbook = xlsxwriter.Workbook(ofname(tfpath))
cell_format = workbook.add_format()
cell_format.set_text_wrap()
cell_format.set_align(&#34;vcenter&#34;)
for type,data in classed.items():
    sheet = type[:31]
    worksheet = workbook.add_worksheet(sheet)
    worksheet.set_column(0,1000,42,cell_format)
    flattener(data,1,0,worksheet,0)
workbook.close()

os.system(&#34;tfcheck&#34;)
os.system(&#34;rm plan.tfplan&#34;)
</code></pre>

<p>I&#39;m no Python expert by any means, and I&#39;m sure that this script can be improved and optimized.</p>

<h2 id="usage">Usage</h2>

<p>Now, how do we use this script? Pretty easily. Once we&#39;ve created our Terraform override file, we simply run the script, passing the arguments we need:</p>

<pre><code class="language-bash">python3 main.py --tfpath terraform/ --set location=&#34;Central US&#34; testing=true
</code></pre>

<h2 id="final-thoughts">Final thoughts</h2>

<p>It&#39;s been over a year since my last post... And what a year it has been! My baby daughter was born recently, I got a new job, my grandma died...</p>

<p>Anyway, I got several ideas I&#39;d like to share with you, so I&#39;ll try to post more frequently.</p>

<p>See you soon! (Hopefully)</p>
]]></content:encoded>
      <guid>https://blog.chelo.dev/exporting-terraform-plan-to-excel-with-python</guid>
      <pubDate>Fri, 05 May 2023 21:30:00 +0000</pubDate>
    </item>
    <item>
      <title>Raspberry Pi as Home Router | Part 3 | DHCP and DNS with PiHole</title>
      <link>https://blog.chelo.dev/raspberry-pi-as-home-router-part-3-dhcp-and-dns-with-pihole</link>
      <description>&lt;![CDATA[#raspberry #rpi #router #network&#xA;&#xA;To complete our router, we need a DHCP server and a DNS server. The DHCP server will assign IPs to our internal network, while the DNS server will resolve our queries to their corresponding IPs.]]&gt;</description>
      <content:encoded><![CDATA[<p><a href="https://blog.chelo.dev/tag:raspberry" class="hashtag"><span>#</span><span class="p-category">raspberry</span></a> <a href="https://blog.chelo.dev/tag:rpi" class="hashtag"><span>#</span><span class="p-category">rpi</span></a> <a href="https://blog.chelo.dev/tag:router" class="hashtag"><span>#</span><span class="p-category">router</span></a> <a href="https://blog.chelo.dev/tag:network" class="hashtag"><span>#</span><span class="p-category">network</span></a></p>

<p>To complete our router, we need a DHCP server and DNS server. The DHCP server will assign IPs to our internal network, while the DNS server will resolve our queries to their corresponding IPs.</p>

<p>Since I like to have a private and ad-free experience when I surf, I&#39;m running <a href="https://pi-hole.net/">Pi-hole</a> as a container. So the first thing I need to do is set up Docker using the official <a href="https://docs.docker.com/engine/install/ubuntu/">instructions</a>:</p>

<h2 id="docker-installation">Docker installation</h2>

<p>Update the <code>apt</code> package index and install packages to allow apt to use a repository over HTTPS:</p>

<pre><code class="language-bash">sudo apt update
sudo apt install ca-certificates curl gnupg lsb-release
</code></pre>

<p>Add Docker’s official GPG key:</p>

<pre><code class="language-bash">curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
</code></pre>

<p>Use the following command to set up the stable repository.</p>

<pre><code class="language-bash">echo &#34;deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable&#34; | sudo tee /etc/apt/sources.list.d/docker.list &gt; /dev/null
</code></pre>

<p>Update the apt package index, and install the latest version of Docker Engine and containerd, or go to the next step to install a specific version:</p>

<pre><code class="language-bash">sudo apt update
sudo apt install docker-ce docker-ce-cli containerd.io
</code></pre>

<p>Add your user to the <code>docker</code> group:</p>

<pre><code class="language-bash">sudo usermod -aG docker $USER
</code></pre>

<p>Log out and log in again to reload the permissions, so that we can run <code>docker</code> commands as the logged-in user instead of with <code>sudo</code>.</p>

<p>Now, we&#39;ll install Docker Compose by running this command to download the current stable release of Docker Compose:</p>

<pre><code class="language-bash"> sudo curl -L &#34;https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)&#34; -o /usr/local/bin/docker-compose
</code></pre>

<p>And apply executable permissions to the binary:</p>

<pre><code class="language-bash">sudo chmod +x /usr/local/bin/docker-compose
</code></pre>

<h2 id="pi-hole-container">Pi-hole container</h2>

<p>We want our Pi-hole container to serve both DHCP and DNS. The following command will create a container named <code>pihole</code> using the latest Pi-hole image. It will use our internal interface <code>lan0</code> (with its static IP address, <code>192.168.1.254</code>) and will enable DHCP, using an IP range between <code>192.168.1.1</code> and <code>192.168.1.100</code>:</p>

<pre><code class="language-bash">docker run -d \
    --net=host \
    -e TZ=&#39;America/Chicago&#39; \
    -e WEBPASSWORD=&#39;{mysupersecretpassword}&#39; \
    -e ADMIN_EMAIL=&#39;{email@domain.com}&#39; \
    -e ServerIP=&#39;192.168.1.254&#39; \
    -e INTERFACE=&#39;lan0&#39; \
    -e DHCP_ACTIVE=&#39;true&#39; \
    -e DHCP_START=&#39;192.168.1.1&#39; \
    -e DHCP_END=&#39;192.168.1.100&#39; \
    -e DHCP_ROUTER=&#39;192.168.1.254&#39; \
    -e PIHOLE_DOMAIN=&#39;{homedomain.local}&#39; \
    -v ./pihole/etc-pihole/:/etc/pihole/ \
    -v ./pihole/etc-dnsmasq.d/:/etc/dnsmasq.d/ \
    --cap-add=NET_ADMIN \
    --restart=unless-stopped \
    --name pihole \
    pihole/pihole:latest
</code></pre>
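<p>Since we installed Docker Compose earlier, the same container can also be described declaratively. Below is a sketch of an equivalent <code>docker-compose.yml</code>, written as a shell heredoc; the values in braces are the same placeholders as above and must be replaced with your own:</p>

```shell
# Sketch: write a docker-compose.yml equivalent to the `docker run` above.
# Placeholder values in braces must be replaced before use.
cat > docker-compose.yml <<'EOF'
version: "3"
services:
  pihole:
    image: pihole/pihole:latest
    container_name: pihole
    network_mode: host
    cap_add:
      - NET_ADMIN
    restart: unless-stopped
    environment:
      TZ: 'America/Chicago'
      WEBPASSWORD: '{mysupersecretpassword}'
      ADMIN_EMAIL: '{email@domain.com}'
      ServerIP: '192.168.1.254'
      INTERFACE: 'lan0'
      DHCP_ACTIVE: 'true'
      DHCP_START: '192.168.1.1'
      DHCP_END: '192.168.1.100'
      DHCP_ROUTER: '192.168.1.254'
      PIHOLE_DOMAIN: '{homedomain.local}'
    volumes:
      - ./pihole/etc-pihole/:/etc/pihole/
      - ./pihole/etc-dnsmasq.d/:/etc/dnsmasq.d/
EOF
```

<p>With the file in place, <code>docker-compose up -d</code> starts the same container, with the added benefit that the whole setup lives in a file we can keep under version control.</p>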

<p>After setting up the container, since I&#39;m using Ubuntu, I need to disable the caching DNS stub resolver that Ubuntu ships with, since it would prevent Pi-hole from listening on port 53 (the port used for DNS requests). The stub resolver can be disabled with:</p>

<pre><code class="language-bash">sudo sed -r -i.orig &#39;s/#?DNSStubListener=yes/DNSStubListener=no/g&#39; /etc/systemd/resolved.conf
</code></pre>
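<p>The <code>#?</code> in the pattern means the substitution works whether the setting is still commented out or not. As a quick sanity check, the same substitution can be tried against a throwaway sample file instead of the real <code>resolved.conf</code>:</p>

```shell
# Try the substitution on a sample file, not the real /etc/systemd/resolved.conf:
printf '#DNSStubListener=yes\n' > /tmp/resolved-sample.conf
sed -r -i 's/#?DNSStubListener=yes/DNSStubListener=no/g' /tmp/resolved-sample.conf
cat /tmp/resolved-sample.conf   # DNSStubListener=no
```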

<p>After this, I need to change the nameserver settings, which currently point to the stub resolver we have just disabled. I need to point the <code>/etc/resolv.conf</code> symlink to <code>/run/systemd/resolve/resolv.conf</code> by running the following command:</p>

<pre><code class="language-bash">sudo sh -c &#39;rm /etc/resolv.conf &amp;&amp; ln -s /run/systemd/resolve/resolv.conf /etc/resolv.conf&#39;
</code></pre>

<p>Finally, I need to restart <code>systemd-resolved</code> so that our changes are applied:</p>

<pre><code class="language-bash">sudo systemctl restart systemd-resolved
</code></pre>

<p>Having done this, our Pi-hole container should start working as both a DHCP and a DNS server. All we have to do now is disable our previous DHCP server, which in my case means the one built into my ISP modem.</p>

<p>With this, we have a fully working router with a privacy and ad-blocking solution built in.</p>

<p>Stay tuned!</p>

<h3 id="other-posts-in-this-series">Other posts in this series:</h3>
<ul><li><a href="https://blog.chelo.dev/raspberry-pi-as-home-router-introduction">Introduction</a></li>
<li><a href="https://blog.chelo.dev/raspberry-pi-as-home-router-part-1-network-description">Part 1 – Network description</a></li>
<li><a href="https://blog.chelo.dev/raspberry-pi-as-home-router-part-2-ipv4-forwarding">Part 2 – IPv4 forwarding</a></li></ul>
]]></content:encoded>
      <guid>https://blog.chelo.dev/raspberry-pi-as-home-router-part-3-dhcp-and-dns-with-pihole</guid>
      <pubDate>Thu, 03 Mar 2022 01:00:00 +0000</pubDate>
    </item>
    <item>
      <title>Raspberry Pi as Home Router | Part 2 | IPv4 forwarding</title>
      <link>https://blog.chelo.dev/raspberry-pi-as-home-router-part-2-ipv4-forwarding</link>
<description>&lt;![CDATA[#raspberry #rpi #router #network&#xA;&#xA;One of the fundamental functions of a router is to forward traffic between our internal network and the internet.!--more--&#xA;&#xA;In order to use the Raspberry Pi, or any Linux machine, as a router, the first thing we need to do is to enable packet forwarding.&#xA;&#xA;Since I&#39;m using Ubuntu 20.04 on my PiRouter, I can enable IPv4 forwarding instantly by executing, as sudo, the following command:&#xA;&#xA;sysctl -w net.ipv4.ip_forward=1&#xA;| Remember to run these commands as root, or with sudo.&#xA;&#xA;To make the change permanent, I need to modify /etc/sysctl.conf, adding the following line at the end of the file:&#xA;&#xA;net.ipv4.ip_forward=1&#xA;&#xA;This will ensure that IPv4 forwarding will be enabled on boot.&#xA;&#xA;Now, I need to set up iptables rules in my firewall so that the PiRouter accepts and forwards the traffic it receives from my internal network/interface (lan0) to my external interface/internet (wan0).&#xA;&#xA;This can be accomplished by executing the following commands:&#xA;&#xA;iptables -A FORWARD -i lan0 -j ACCEPT&#xA;iptables -t nat -A POSTROUTING -o wan0 -j MASQUERADE&#xA;&#xA;These rules will only be active until we reboot, so it&#39;s important to save them. I use iptables-persistent, which can be installed using the following command:&#xA;&#xA;apt install iptables-persistent&#xA;&#xA;After the installation, the setup will ask if you wish to save the current rules. By selecting Yes, we save our forwarding rules to /etc/iptables/rules.v4, and they&#39;ll be loaded every time our router boots.&#xA;&#xA;Finally, we can ensure that the IPTables Persistent service is enabled and running by executing the following commands:&#xA;&#xA;systemctl enable netfilter-persistent.service&#xA;systemctl start netfilter-persistent.service&#xA;&#xA;Having done all of these steps, we now have a Raspberry Pi that will forward traffic between interfaces, however it&#39;s still not ready to be used as a router, since we&#39;re still missing a DHCP server and a DNS server. I&#39;ll be covering these key pieces in the next post.&#xA;&#xA;Other posts in this series:&#xA;Introduction&#xA;Part 1 - Network description&#xA;Part 3 - DHCP and DNS with PiHole]]&gt;</description>
      <content:encoded><![CDATA[<p><a href="https://blog.chelo.dev/tag:raspberry" class="hashtag"><span>#</span><span class="p-category">raspberry</span></a> <a href="https://blog.chelo.dev/tag:rpi" class="hashtag"><span>#</span><span class="p-category">rpi</span></a> <a href="https://blog.chelo.dev/tag:router" class="hashtag"><span>#</span><span class="p-category">router</span></a> <a href="https://blog.chelo.dev/tag:network" class="hashtag"><span>#</span><span class="p-category">network</span></a></p>

<p>One of the fundamental functions of a router is to forward traffic between our internal network and the internet.</p>

<p>In order to use the Raspberry Pi, or any Linux machine, as a router, the first thing we need to do is to enable packet forwarding.</p>

<p>Since I&#39;m using Ubuntu 20.04 on my PiRouter, I can enable IPv4 forwarding instantly by executing, as sudo, the following command:</p>

<pre><code class="language-shell">sysctl -w net.ipv4.ip_forward=1
</code></pre>
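<p>To confirm the change took effect, the current value can be read back from <code>/proc</code> without root; <code>1</code> means forwarding is enabled:</p>

```shell
# Read the current forwarding flag (1 = enabled, 0 = disabled):
cat /proc/sys/net/ipv4/ip_forward
```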

<p><em>Note: remember to run these commands as root, or with sudo.</em></p>

<p>To make the change permanent, I need to modify <code>/etc/sysctl.conf</code>, adding the following line at the end of the file:</p>

<pre><code class="language-shell">net.ipv4.ip_forward=1
</code></pre>
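<p>If you prefer doing this from the shell, the append can be made idempotent by checking for the line first. This is a sketch against a temporary copy, so it can be tried without touching the real <code>/etc/sysctl.conf</code>:</p>

```shell
# Append the setting only if it isn't already present (idempotent).
# Demonstrated on a temp copy; for real use, point CONF at /etc/sysctl.conf
# and run with sudo.
CONF=/tmp/sysctl-sample.conf
: > "$CONF"
grep -qxF 'net.ipv4.ip_forward=1' "$CONF" || echo 'net.ipv4.ip_forward=1' >> "$CONF"
```

<p>Re-running the <code>grep</code>/<code>echo</code> line leaves the file unchanged, which makes it safe to use in a provisioning script.</p>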

<p>This will ensure that IPv4 forwarding will be enabled on boot.</p>

<p>Now, I need to set up <code>iptables</code> rules in my firewall so that the PiRouter accepts and forwards the traffic it receives from my internal network/interface (<code>lan0</code>) to my external interface/internet (<code>wan0</code>).</p>

<p>This can be accomplished by executing the following commands:</p>

<pre><code class="language-shell">iptables -A FORWARD -i lan0 -j ACCEPT
</code></pre>

<pre><code class="language-shell">iptables -t nat -A POSTROUTING -o wan0 -j MASQUERADE
</code></pre>

<p>These rules will only be active until we reboot, so it&#39;s important to save them. I use <code>iptables-persistent</code>, which can be installed using the following command:</p>

<pre><code class="language-shell">apt install iptables-persistent
</code></pre>

<p>After the installation, the installer will ask if you wish to save the current rules. By selecting <em>Yes</em>, we save our forwarding rules to <code>/etc/iptables/rules.v4</code>, and they&#39;ll be loaded every time our router boots.</p>

<p>Finally, we can ensure that the IPTables Persistent service is enabled and running by executing the following commands:</p>

<pre><code class="language-shell">systemctl enable netfilter-persistent.service
</code></pre>

<pre><code class="language-shell">systemctl start netfilter-persistent.service
</code></pre>

<p>Having done all of these steps, we now have a Raspberry Pi that will forward traffic between interfaces. However, it&#39;s still not ready to be used as a router, since we&#39;re still missing a DHCP server and a DNS server. I&#39;ll be covering these key pieces in the next post.</p>

<h3 id="other-posts-in-this-series">Other posts in this series:</h3>
<ul><li><a href="https://blog.chelo.dev/raspberry-pi-as-home-router-introduction">Introduction</a></li>
<li><a href="https://blog.chelo.dev/raspberry-pi-as-home-router-part-1-network-description">Part 1 – Network description</a></li>
<li><a href="https://blog.chelo.dev/raspberry-pi-as-home-router-part-3-dhcp-and-dns-with-pihole">Part 3 – DHCP and DNS with PiHole</a></li></ul>
]]></content:encoded>
      <guid>https://blog.chelo.dev/raspberry-pi-as-home-router-part-2-ipv4-forwarding</guid>
      <pubDate>Tue, 01 Mar 2022 00:30:00 +0000</pubDate>
    </item>
    <item>
      <title>Raspberry Pi as Home Router | Part 1 | Network description</title>
      <link>https://blog.chelo.dev/raspberry-pi-as-home-router-part-1-network-description</link>
      <description>&lt;![CDATA[#raspberry #rpi #router #network&#xA;&#xA;In this first part, I&#39;ll be going into a general description of my home network, as well as the services I&#39;m running in my PiRouter. !--more--&#xA;&#xA;Network description&#xA;&#xA;NetworkDiagram&#xA;Figure 1 - General network diagram&#xA;&#xA;As you can see in the previous image, I&#39;ve got my ISP modem, whose function is to connect with my ISP. From there, the PiRouter is setup in the modem&#39;s DMZ, so that it receives all traffic.&#xA;&#xA;The PiRouter has 2 ethernet interfaces:&#xA;The Raspberry Pi integrated Gigabit Ethernet, which connects to the internal network (lan0).&#xA;A USB 3.0 Gigabit Ethernet adapter, which connects directly to the ISP modem (wan0_).&#xA;&#xA;Internally, I&#39;ve got a server running Unraid on an old Supermicro server, which sports an Intel Xeon E5620 with 24GB of RAM. This server runs a couple of VMs and several containers.&#xA;&#xA;I also have a Raspberry Pi 3B running Raspberry Pi OS with some containers used in home automation.&#xA;&#xA;For WiFi, I use 2 TP-Link Deco M4 antennas. I need the power since I  live in a two-story flat with a total surface of 175m² (1,880 sqft), with thick reinforced concrete walls.&#xA;&#xA;My home automation runs on Home Assistant, using both WiFi and Zigbee. I&#39;ll probably go into this topic on another series of posts.&#xA;&#xA;Other posts in this series:&#xA;Introduction&#xA;Part 2 - IPv4 forwarding&#xA;Part 3 - DHCP and DNS with PiHole]]&gt;</description>
      <content:encoded><![CDATA[<p><a href="https://blog.chelo.dev/tag:raspberry" class="hashtag"><span>#</span><span class="p-category">raspberry</span></a> <a href="https://blog.chelo.dev/tag:rpi" class="hashtag"><span>#</span><span class="p-category">rpi</span></a> <a href="https://blog.chelo.dev/tag:router" class="hashtag"><span>#</span><span class="p-category">router</span></a> <a href="https://blog.chelo.dev/tag:network" class="hashtag"><span>#</span><span class="p-category">network</span></a></p>

<p>In this first part, I&#39;ll be going into a general description of my home network, as well as the services I&#39;m running in my PiRouter. </p>

<h2 id="network-description">Network description</h2>

<p><img src="/assets/img/network_diagram.png" alt="NetworkDiagram">
<em>Figure 1 – General network diagram</em></p>

<p>As you can see in the previous image, I&#39;ve got my ISP modem, whose function is to connect with my ISP. From there, the PiRouter is set up in the modem&#39;s DMZ, so that it receives all traffic.</p>

<p>The PiRouter has 2 ethernet interfaces:
1. The Raspberry Pi integrated Gigabit Ethernet, which connects to the internal network (<em>lan0</em>).
2. A <a href="https://www.amazon.com/gp/product/B00MYTSN18">USB 3.0 Gigabit Ethernet adapter</a>, which connects directly to the ISP modem (<em>wan0</em>).</p>

<p>Internally, I&#39;ve got a server running <a href="https://unraid.net/">Unraid</a> on an old Supermicro server, which sports an Intel Xeon E5620 with 24GB of RAM. This server runs a couple of VMs and several containers.</p>

<p>I also have a Raspberry Pi 3B running Raspberry Pi OS with some containers used in home automation.</p>

<p>For WiFi, I use 2 TP-Link Deco M4 antennas. I need the extra coverage since I live in a two-story flat with a total surface of 175m² (1,880 sqft), with thick reinforced concrete walls.</p>

<p>My home automation runs on Home Assistant, using both WiFi and Zigbee. I&#39;ll probably go into this topic on another series of posts.</p>

<h3 id="other-posts-in-this-series">Other posts in this series:</h3>
<ul><li><a href="https://blog.chelo.dev/raspberry-pi-as-home-router-introduction">Introduction</a></li>
<li><a href="https://blog.chelo.dev/raspberry-pi-as-home-router-part-2-ipv4-forwarding">Part 2 – IPv4 forwarding</a></li>
<li><a href="https://blog.chelo.dev/raspberry-pi-as-home-router-part-3-dhcp-and-dns-with-pihole">Part 3 – DHCP and DNS with PiHole</a></li></ul>
]]></content:encoded>
      <guid>https://blog.chelo.dev/raspberry-pi-as-home-router-part-1-network-description</guid>
      <pubDate>Mon, 21 Feb 2022 04:30:00 +0000</pubDate>
    </item>
    <item>
      <title>Raspberry Pi as Home Router | Introduction</title>
      <link>https://blog.chelo.dev/raspberry-pi-as-home-router-introduction</link>
      <description>&lt;![CDATA[#raspberry #rpi #router #network&#xA;&#xA;A while back, I started using a Raspberry Pi 4B with 2GB of RAM as my home router.&#xA;&#xA;The reason for doing this is because my ISP modem is pretty basic and limited. I also didn&#39;t want to have to reconfigure my network each time I changed modem or ISP. !--more--&#xA;&#xA;Some benefits I got from using the RPI4 as a router were that I could setup an ad blocker, Pi-hole in my case, as well as adding a small UPS, for power outages, and LTE modem, for ISP outages (I haven&#39;t configured this last one).&#xA;&#xA;After using my PiRouter (that&#39;s what I like calling it) for a while, I decided to clean everything up and standarize it, so that I could replicate it if/when I had to replace my Raspberry or the microSD card, as painlessly as possible.&#xA;&#xA;In these series of articles I&#39;ll be detailing how I configured it using Git, Docker and Ansible, having learned the basics of Ansible from the great guides published by Jeff Geerling.&#xA;&#xA;Other posts in this series:&#xA;Part 1 - Network description&#xA;Part 2 - IPv4 forwarding&#xA;Part 3 - DHCP and DNS with PiHole]]&gt;</description>
      <content:encoded><![CDATA[<p><a href="https://blog.chelo.dev/tag:raspberry" class="hashtag"><span>#</span><span class="p-category">raspberry</span></a> <a href="https://blog.chelo.dev/tag:rpi" class="hashtag"><span>#</span><span class="p-category">rpi</span></a> <a href="https://blog.chelo.dev/tag:router" class="hashtag"><span>#</span><span class="p-category">router</span></a> <a href="https://blog.chelo.dev/tag:network" class="hashtag"><span>#</span><span class="p-category">network</span></a></p>

<p>A while back, I started using a Raspberry Pi 4B with 2GB of RAM as my home router.</p>

<p>The reason for doing this is that my ISP modem is pretty basic and limited. I also didn&#39;t want to have to reconfigure my network each time I changed modem or ISP.</p>

<p>Some benefits I got from using the RPi 4 as a router were that I could set up an ad blocker, <a href="https://pi-hole.net/">Pi-hole</a> in my case, as well as add a small <a href="https://aliexpress.com/item/32955634965.html">UPS</a> for power outages and an <a href="https://aliexpress.com/item/32961627057.html">LTE modem</a> for ISP outages (I haven&#39;t configured this last one).</p>

<p>After using my PiRouter (that&#39;s what I like calling it) for a while, I decided to clean everything up and standardize it, so that I could replicate it as painlessly as possible if/when I had to replace my Raspberry Pi or the microSD card.</p>

<p>In this series of articles, I&#39;ll be detailing how I configured it using Git, Docker and Ansible, having learned the basics of Ansible from the <a href="https://www.youtube.com/watch?v=goclfp6a2IQ&amp;list=PL2_OBreMn7FqZkvMYt6ATmgC0KAGGJNAN">great guides</a> published by <a href="https://www.jeffgeerling.com/">Jeff Geerling</a>.</p>

<h3 id="other-posts-in-this-series">Other posts in this series:</h3>
<ul><li><a href="https://blog.chelo.dev/raspberry-pi-as-home-router-part-1-network-description">Part 1 – Network description</a></li>
<li><a href="https://blog.chelo.dev/raspberry-pi-as-home-router-part-2-ipv4-forwarding">Part 2 – IPv4 forwarding</a></li>
<li><a href="https://blog.chelo.dev/raspberry-pi-as-home-router-part-3-dhcp-and-dns-with-pihole">Part 3 – DHCP and DNS with PiHole</a></li></ul>
]]></content:encoded>
      <guid>https://blog.chelo.dev/raspberry-pi-as-home-router-introduction</guid>
      <pubDate>Sat, 19 Feb 2022 00:30:00 +0000</pubDate>
    </item>
    <item>
      <title>Initial commit</title>
      <link>https://blog.chelo.dev/initial-commit</link>
      <description>&lt;![CDATA[So this is my first post in this new blog.&#xA;&#xA;The idea for this blog is to share my ideas and projects on sustainability, cloud, electronics and automation.&#xA;&#xA;I&#39;ll be sharing guides and projects, which I&#39;ll (probably) be uploading to GitHub.]]&gt;</description>
      <content:encoded><![CDATA[<p>So this is my first post in this new blog.</p>

<p>The idea for this blog is to share my ideas and projects on sustainability, cloud, electronics and automation.</p>

<p>I&#39;ll be sharing guides and projects, which I&#39;ll (probably) be uploading to GitHub.</p>
]]></content:encoded>
      <guid>https://blog.chelo.dev/initial-commit</guid>
      <pubDate>Thu, 10 Feb 2022 01:30:00 +0000</pubDate>
    </item>
  </channel>
</rss>