<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Merlin Saha — Cloud, DevSecOps, MLOps & AI Systems]]></title><description><![CDATA[Hands-on PoC, project walkthroughs, and architectural patterns covering
MLOps, LLMOps, DevSecOps, and cloud infrastructure on GCP, AWS, Azure,
and Oracle Cloud. Built by a practitioner, shared for engineers who want
to go beyond the theory.]]></description><link>https://merlin.microworka.com</link><image><url>https://cdn.hashnode.com/res/hashnode/image/upload/v1716208155528/7lOUnPGUS.png</url><title>Merlin Saha — Cloud, DevSecOps, MLOps &amp; AI Systems</title><link>https://merlin.microworka.com</link></image><generator>RSS for Node</generator><lastBuildDate>Sun, 19 Apr 2026 11:43:05 GMT</lastBuildDate><atom:link href="https://merlin.microworka.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Keycloak Dual Deployment Strategy: Combining Open Source and Serverless for Enterprise Identity]]></title><description><![CDATA[Unifying Business Value and Technical Excellence
In today’s digital landscape, managing identity at scale is both a technical imperative and a strategic business enabler. Enterprises need to balance security, performance, cost-efficiency, and user ex...]]></description><link>https://merlin.microworka.com/keycloak-dual-deployment-strategy-combining-open-source-and-serverless-for-enterprise-identity</link><guid isPermaLink="true">https://merlin.microworka.com/keycloak-dual-deployment-strategy-combining-open-source-and-serverless-for-enterprise-identity</guid><category><![CDATA[Open Source]]></category><category><![CDATA[keycloak]]></category><category><![CDATA[serverless]]></category><category><![CDATA[Cloud Computing]]></category><category><![CDATA[Devops]]></category><category><![CDATA[identity-management]]></category><category><![CDATA[Terraform]]></category><category><![CDATA[Cloud]]></category><category><![CDATA[#CostOptimization ]]></category><category><![CDATA[#InfrastructureAsCode]]></category><dc:creator><![CDATA[Merlin Saha]]></dc:creator><pubDate>Wed, 14 May 2025 07:07:44 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1747206553012/15c09e4a-e70d-4f87-9ef9-83848efc29ab.gif" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-unifying-business-value-and-technical-excellence"><strong>Unifying Business Value and Technical Excellence</strong></h2>
<p><strong>In today’s digital landscape</strong>, managing identity at scale is both a technical imperative and a strategic business enabler. Enterprises need to balance <strong>security</strong>, <strong>performance</strong>, <strong>cost-efficiency</strong>, and <strong>user experience</strong> across increasingly complex environments.</p>
<p>This post introduces a <strong>Keycloak Dual Deployment Strategy</strong>—a hybrid architecture that combines the <strong>control of open source</strong> with the <strong>scalability of serverless</strong>. By segregating administrative and client-facing responsibilities across two dedicated Keycloak instances, and integrating with <strong>Oracle Cloud Infrastructure’s Autonomous Transaction Processing (OCI ATP)</strong> database, organizations can achieve robust identity management that enhances their <strong>security posture</strong>, accelerates <strong>developer productivity</strong>, and reduces <strong>operational costs</strong>.</p>
<p>This month, we’ve made significant strides in our identity management infrastructure by implementing this dual Keycloak deployment model. The two Keycloak instances are deployed on <strong>Google Cloud Run</strong>, enabling us to run containerized workloads with automatic scaling and minimal overhead. These instances connect securely to our existing <strong>OCI ATP database</strong>, allowing us to leverage our enterprise-grade relational data layer without migrating critical assets. Building on our previous work with <strong>Vault Server</strong>, <strong>Vault Agent</strong>, and the <strong>sidecar deployment pattern</strong>, this approach delivers both <strong>technical robustness</strong> and <strong>tangible business outcomes</strong>.</p>
<p>The infrastructure is fully automated using <strong>Terraform</strong> and orchestrated via <strong>Terraform Cloud</strong>, ensuring consistency, traceability, and team collaboration. The diagram below illustrates the overall architecture of this dual deployment model:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1747066940807/d18294ec-382a-4be9-b5af-adc704fd222e.gif" alt class="image--center mx-auto" /></p>
<h2 id="heading-our-dual-architecture-approach"><strong>Our Dual Architecture Approach</strong></h2>
<p>We have designed and implemented a carefully segregated Keycloak environment consisting of:</p>
<ol>
<li><p><strong>Internal Administrative Instance</strong> - Providing full management capabilities to authorized internal users</p>
</li>
<li><p><strong>External Client-Facing Instance</strong> - Delivering streamlined authentication services to end users</p>
</li>
</ol>
<p>Both instances connect to the same OCI ATP database, ensuring data consistency while maintaining strict separation of access privileges.</p>
<pre><code class="lang-yaml"><span class="hljs-comment"># Our Keycloak architecture connects two distinct instances to a shared database</span>
<span class="hljs-string">Internal</span> <span class="hljs-string">Users</span> <span class="hljs-string">→</span> <span class="hljs-string">Keycloak</span> <span class="hljs-string">Admin</span> <span class="hljs-string">Instance</span> <span class="hljs-string">→</span> <span class="hljs-string">OCI</span> <span class="hljs-string">ATP</span> <span class="hljs-string">Database</span> <span class="hljs-string">←</span> <span class="hljs-string">Keycloak</span> <span class="hljs-string">External</span> <span class="hljs-string">Instance</span> <span class="hljs-string">←</span> <span class="hljs-string">End</span> <span class="hljs-string">Users</span>
</code></pre>
<h2 id="heading-effortless-scalability-with-external-load-balancer">Effortless Scalability with External Load Balancer</h2>
<p>One of the hidden superpowers of our Keycloak dual deployment architecture lies in how seamlessly it scales — or doesn’t — depending on real-time needs.</p>
<p>By placing our Keycloak instances behind an <strong>external load balancer</strong>, we’ve unlocked a flexible, cloud-neutral way to manage traffic, resilience, and cost.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1747165197854/978e393f-3cf1-4345-b1d7-7b27dd5aa62b.gif" alt class="image--center mx-auto" /></p>
<h3 id="heading-scale-when-you-need-it-save-when-you-dont">Scale When You Need It, Save When You Don’t</h3>
<ul>
<li><p><strong>On-demand scaling</strong>: Whether it’s 10 users or 10,000, our client-facing Keycloak instance can auto-scale based on traffic without requiring manual intervention.</p>
</li>
<li><p><strong>Idle = zero cost</strong>: Thanks to our Cloud Run-based serverless deployment, Keycloak services scale down to zero during off-peak hours — a huge cost-saving advantage.</p>
</li>
</ul>
<h3 id="heading-load-balancer-high-availability-amp-security">Load Balancer = High Availability &amp; Security</h3>
<ul>
<li><p>The load balancer acts as a <strong>smart gatekeeper</strong>, routing requests only to healthy instances.</p>
</li>
<li><p>We’ve enabled <strong>health checks</strong>, <strong>TLS termination</strong>, and <strong>layer-7 routing</strong>, ensuring both <strong>resilience</strong> and <strong>security</strong> at the edge.</p>
</li>
<li><p>Compatible with advanced features like <strong>Web Application Firewalls (WAF)</strong>, <strong>Geo-routing</strong>, and <strong>DDoS protection</strong>.</p>
</li>
</ul>
<h3 id="heading-cloud-agnostic-by-design">Cloud Agnostic by Design</h3>
<ul>
<li><p>This approach works with <strong>Google Cloud Load Balancer</strong>, <strong>OCI Load Balancer</strong>, <strong>AWS ALB</strong>, or even self-managed solutions like <strong>HAProxy</strong> or <strong>NGINX</strong>.</p>
</li>
<li><p>Our architecture avoids vendor lock-in while delivering <strong>enterprise-grade availability and performance</strong>.</p>
</li>
</ul>
<p>Whether you’re running on Google Cloud, Oracle, AWS, or on-premises — scaling your identity platform up or down becomes a matter of toggling replicas behind your load balancer.</p>
<h2 id="heading-why-we-chose-keycloak-for-enterprise-identity"><strong>Why We Chose Keycloak for Enterprise Identity</strong></h2>
<p>When evaluating identity solutions, Keycloak emerged as the clear choice for several compelling reasons:</p>
<ol>
<li><p><strong>Enterprise-Grade at Open Source Cost</strong>: We gain capabilities comparable to commercial solutions like Okta and Ping Identity without the per-user licensing fees that can run into millions annually for large organizations.</p>
</li>
<li><p><strong>Standards Compliance</strong>: Full implementation of OAuth 2.0, OpenID Connect, and SAML 2.0 ensures we maintain compatibility with both existing systems and future integrations.</p>
</li>
<li><p><strong>Deployment Flexibility</strong>: Unlike SaaS-only options, Keycloak gives us complete control over our deployment architecture, security configurations, and data residency.</p>
</li>
<li><p><strong>Proven Enterprise Adoption</strong>: Companies like Salesforce, BMW, and Deutsche Telekom have validated Keycloak's enterprise readiness, giving us confidence in our selection.</p>
</li>
<li><p><strong>Vibrant Ecosystem</strong>: Regular updates, security patches, and an engaged community ensure the platform evolves alongside emerging threats and requirements.</p>
</li>
</ol>
<h2 id="heading-technical-foundation"><strong>Technical Foundation</strong></h2>
<h3 id="heading-container-customization"><strong>Container Customization</strong></h3>
<p>Our deployment leverages custom Docker images built specifically for connection to OCI ATP:</p>
<pre><code class="lang-yaml"><span class="hljs-string">FROM</span> <span class="hljs-string">quay.io/keycloak/keycloak:&lt;VERSION&gt;</span>

<span class="hljs-comment"># Oracle connectivity components</span>
<span class="hljs-string">COPY</span> <span class="hljs-string">--chown=keycloak:keycloak</span> <span class="hljs-string">&lt;LIBS_PATH&gt;/ojdbc11.jar</span> <span class="hljs-string">/opt/keycloak/providers/ojdbc11.jar</span>
<span class="hljs-string">COPY</span> <span class="hljs-string">--chown=keycloak:keycloak</span> <span class="hljs-string">&lt;LIBS_PATH&gt;/orai18n.jar</span> <span class="hljs-string">/opt/keycloak/providers/orai18n.jar</span>

<span class="hljs-comment"># Wallet configuration for secure ATP connectivity</span>
<span class="hljs-string">COPY</span> <span class="hljs-string">--chown=keycloak:keycloak</span> <span class="hljs-string">&lt;ATP_WALLET_LOCAL_PATH&gt;</span> <span class="hljs-string">/opt/keycloak/&lt;ATP_WALLET_LOCAL_PATH&gt;</span>
<span class="hljs-string">RUN</span> <span class="hljs-string">chmod</span> <span class="hljs-string">-R</span> <span class="hljs-number">750</span> <span class="hljs-string">/opt/keycloak/&lt;ATP_WALLET_LOCAL_PATH&gt;</span>

<span class="hljs-comment"># Environment configuration</span>
<span class="hljs-string">ENV</span> <span class="hljs-string">TNS_ADMIN=/opt/keycloak/&lt;ATP_WALLET_LOCAL_PATH&gt;</span>
<span class="hljs-string">ENV</span> <span class="hljs-string">KC_DB_DRIVER=oracle.jdbc.OracleDriver</span>
<span class="hljs-string">ENV</span> <span class="hljs-string">KC_DB=oracle</span>
</code></pre>
<h3 id="heading-feature-based-security-separation"><strong>Feature-Based Security Separation</strong></h3>
<p>We've implemented security through deliberate feature control:</p>
<pre><code class="lang-yaml"><span class="hljs-comment"># External-facing instance feature configuration</span>
<span class="hljs-string">features=account,account-api,login,passkeys,persistent-user-sessions,recovery-codes,web-authn,token-exchange,authorization,par,step-up-authentication,client-policies</span>

<span class="hljs-comment"># Explicitly disabled administrative features</span>
<span class="hljs-string">features-disabled=admin,admin-api,admin-fine-grained-authz,update-email</span>
</code></pre>
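<p>A simple guard in CI can catch regressions in this separation: parse both lists and assert that the external instance never enables an administrative feature. The sketch below is our own illustrative convention in Python, not a Keycloak-provided tool:</p>

```python
def parse_features(line: str) -> set[str]:
    """Split a 'key=a,b,c' config line into its set of feature names."""
    _, _, value = line.partition("=")
    return {f.strip() for f in value.split(",") if f.strip()}

enabled = parse_features(
    "features=account,account-api,login,passkeys,persistent-user-sessions,"
    "recovery-codes,web-authn,token-exchange,authorization,par,"
    "step-up-authentication,client-policies"
)
disabled = parse_features(
    "features-disabled=admin,admin-api,admin-fine-grained-authz,update-email"
)

# The external instance must not enable anything it also disables,
# and no admin-prefixed feature may appear in the enabled list.
assert not enabled & disabled
assert not {f for f in enabled if f.startswith("admin")}
```

<p>Run against the rendered configuration before each deploy, this turns the feature split from a convention into an enforced invariant.</p>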
<h3 id="heading-cloud-run-deployment-optimization"><strong>Cloud Run Deployment Optimization</strong></h3>
<p>Our Cloud Run configuration optimizes for both security and performance:</p>
<pre><code class="lang-yaml"><span class="hljs-comment"># Internal Admin Instance</span>
<span class="hljs-string">gcloud</span> <span class="hljs-string">run</span> <span class="hljs-string">deploy</span> <span class="hljs-string">auth-int</span> <span class="hljs-string">\</span>
  <span class="hljs-string">--image</span> <span class="hljs-string">"${PROJECT_LOCATION}-docker.pkg.dev/${PROJECT_ID}/${GCP_ARTIFACT_REGISTRY_NAME}/${DOCKER_IMAGE}:${NEXT_VERSION}"</span> <span class="hljs-string">\</span>
  <span class="hljs-string">--no-allow-unauthenticated</span> <span class="hljs-string">\</span>
  <span class="hljs-string">--memory=1024Mi</span> <span class="hljs-string">\</span>
  <span class="hljs-string">--cpu</span> <span class="hljs-number">1</span> <span class="hljs-string">\</span>
  <span class="hljs-string">--min-instances=0</span> <span class="hljs-string">\</span>
  <span class="hljs-string">--max-instances=1</span> <span class="hljs-string">\</span>
  <span class="hljs-string">--service-account=keycloak-admin-sa@${PROJECT_ID}.iam.gserviceaccount.com</span>

<span class="hljs-comment"># External Client Instance</span>
<span class="hljs-string">gcloud</span> <span class="hljs-string">run</span> <span class="hljs-string">deploy</span> <span class="hljs-string">auth-ext</span> <span class="hljs-string">\</span>
  <span class="hljs-string">--image</span> <span class="hljs-string">"${PROJECT_LOCATION}-docker.pkg.dev/${PROJECT_ID}/${GCP_ARTIFACT_REGISTRY_NAME}/${DOCKER_IMAGE}:${NEXT_VERSION}"</span> <span class="hljs-string">\</span>
  <span class="hljs-string">--allow-unauthenticated</span> <span class="hljs-string">\</span>
  <span class="hljs-string">--memory=2048Mi</span> <span class="hljs-string">\</span>
  <span class="hljs-string">--cpu</span> <span class="hljs-number">2</span> <span class="hljs-string">\</span>
  <span class="hljs-string">--min-instances=0</span> <span class="hljs-string">\</span>
  <span class="hljs-string">--max-instances=3</span> <span class="hljs-string">\</span>
  <span class="hljs-string">--service-account=keycloak-client-sa@${PROJECT_ID}.iam.gserviceaccount.com</span> <span class="hljs-string">\</span>
  <span class="hljs-string">--cpu-boost</span>
</code></pre>
<h2 id="heading-performance-tuning-and-optimization"><strong>Performance Tuning and Optimization</strong></h2>
<p>The difference between a functioning system and a production-grade solution often lies in the details of configuration and tuning. We've meticulously calibrated our Keycloak deployment for optimal performance, reliability, and security:</p>
<h3 id="heading-database-connection-pool-configuration"><strong>Database Connection Pool Configuration</strong></h3>
<pre><code class="lang-yaml"><span class="hljs-string">db-pool-initial-size=5</span>
<span class="hljs-string">db-pool-min-size=5</span>
<span class="hljs-string">db-pool-max-size=20</span>
<span class="hljs-string">db-url-properties=maxStatements=0</span>
<span class="hljs-string">db-url-properties=queryTimeout=300</span>
<span class="hljs-string">db-url-properties=oracle.net.CONNECT_TIMEOUT=60000</span>
<span class="hljs-string">db-url-properties=oracle.jdbc.ReadTimeout=120000</span>
</code></pre>
<p>These database connection settings represent a careful balance between resource efficiency and performance. The initial and minimum pool size of 5 connections ensures immediate availability without unnecessarily consuming resources during quiet periods. The maximum pool of 20 connections allows the system to handle traffic spikes effectively.</p>
<p>The Oracle-specific timeout parameters address one of the most common challenges in cloud environments: network latency and temporary connectivity issues. With a connect timeout of 60 seconds and read timeout of 120 seconds, we prevent premature connection failures while ensuring the system can recover from genuine connection problems.</p>
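<p>The timeout parameters embody a general pattern: tolerate transient failures, surface persistent ones quickly. As an illustration only, not Keycloak or JDBC code, a bounded exponential-backoff retry in Python might look like this:</p>

```python
import time

def connect_with_retry(connect_fn, attempts=3, base_delay=0.1):
    """Call connect_fn, retrying transient ConnectionErrors with
    exponential backoff; re-raise once attempts are exhausted."""
    for attempt in range(attempts):
        try:
            return connect_fn()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # persistent failure: surface it to the caller
            time.sleep(base_delay * (2 ** attempt))  # 0.1s, 0.2s, ...
```

<p>The connection pool and driver timeouts do the equivalent job server-side: a short blip is absorbed, a real outage fails within a predictable bound.</p>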
<h3 id="heading-security-and-protocol-settings"><strong>Security and Protocol Settings</strong></h3>
<pre><code class="lang-yaml"><span class="hljs-string">spi-connections-http-client-default-max-connections=16</span>
<span class="hljs-string">spi-connections-http-client-default-time-to-live=30</span>
<span class="hljs-string">spi-connections-http-client-default-disable-trust-manager=false</span>
<span class="hljs-string">spi-login-protocol-openid-connect-code-lifespan=120</span>
<span class="hljs-string">spi-login-protocol-openid-connect-access-token-lifespan=300</span>
<span class="hljs-string">spi-login-protocol-openid-connect-refresh-token-lifespan=1800</span>
</code></pre>
<p>Our token lifespan settings strike a balance between security and user experience:</p>
<ul>
<li><p>Authorization codes valid for 2 minutes provide sufficient time for authentication flows while limiting the window for code interception</p>
</li>
<li><p>Access tokens with a 5-minute lifespan reduce token exchange frequency during active sessions</p>
</li>
<li><p>Refresh tokens valid for 30 minutes enable reasonable session persistence without excessive security exposure</p>
</li>
</ul>
<p>These settings were determined after analyzing our usage patterns and security requirements, ensuring we maintain NIST-compliant security practices without unnecessary user friction.</p>
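<p>These lifespans can be sanity-checked directly against issued tokens: the gap between a JWT's <code>iat</code> and <code>exp</code> claims should equal the configured lifespan. A small Python sketch (the token below is fabricated for illustration; a real Keycloak access token would of course be signature-verified, not just decoded):</p>

```python
import base64
import json

def token_lifespan(jwt: str) -> int:
    """Return exp - iat (seconds) from a JWT's payload segment."""
    payload_b64 = jwt.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped base64 padding
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    return claims["exp"] - claims["iat"]

# Fabricated token for illustration: header.payload.signature
payload = base64.urlsafe_b64encode(
    json.dumps({"iat": 1700000000, "exp": 1700000300}).encode()
).rstrip(b"=").decode()
example_jwt = f"header.{payload}.signature"
print(token_lifespan(example_jwt))  # prints 300 (access-token-lifespan)
```

<p>Checks like this are useful in integration tests to catch a lifespan regression before it reaches production.</p>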
<h3 id="heading-optimizing-performance-through-caching"><strong>Optimizing Performance Through Caching</strong></h3>
<pre><code class="lang-yaml"><span class="hljs-string">theme-static-max-age=86400</span>
<span class="hljs-string">theme-cache-themes=true</span>
<span class="hljs-string">theme-cache-templates=true</span>
<span class="hljs-string">spi-theme-cache-themes=true</span>
<span class="hljs-string">spi-theme-cache-templates=true</span>
</code></pre>
<p>By enabling aggressive theme and template caching with a 24-hour static resource cache, we've significantly reduced unnecessary processing and improved response times. In serverless environments, these optimizations are particularly important for reducing cold start impacts. Our performance testing showed a 42% improvement in initial page load times after implementing these caching strategies.</p>
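<p>The win from template caching is the classic render-once pattern: pay the rendering cost on the first request, then serve from memory until the entry expires. As a language-agnostic illustration (not Keycloak's actual cache implementation), a minimal TTL cache in Python:</p>

```python
import time

class TTLCache:
    """Keep rendered artifacts for a fixed time-to-live (seconds)."""

    def __init__(self, ttl: float):
        self.ttl = ttl
        self._store = {}  # key -> (value, expiry timestamp)

    def get_or_render(self, key, render_fn):
        entry = self._store.get(key)
        now = time.monotonic()
        if entry is not None and entry[1] > now:
            return entry[0]  # cache hit: skip the render entirely
        value = render_fn()  # cache miss or expired: render and store
        self._store[key] = (value, now + self.ttl)
        return value
```

<p>With <code>theme-static-max-age=86400</code> the same trade-off moves to the browser and CDN layer: static assets are fetched once per day instead of on every login page load.</p>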
<h2 id="heading-infrastructure-as-code-implementation"><strong>Infrastructure as Code Implementation</strong></h2>
<p>Our entire infrastructure is managed through Terraform, enabling consistent deployment across environments and reducing configuration drift. Here's a look at our key infrastructure components:</p>
<h3 id="heading-oracle-cloud-infrastructure-resources"><strong>Oracle Cloud Infrastructure Resources</strong></h3>
<pre><code class="lang-yaml"><span class="hljs-string">resource</span> <span class="hljs-string">"oci_database_autonomous_database"</span> <span class="hljs-string">"adb"</span> {
  <span class="hljs-string">admin_password</span>           <span class="hljs-string">=</span> <span class="hljs-string">var.password</span>
  <span class="hljs-string">compartment_id</span>           <span class="hljs-string">=</span> <span class="hljs-string">var.compartment_ocid</span>
  <span class="hljs-string">db_name</span>                  <span class="hljs-string">=</span> <span class="hljs-string">var.db_name</span>
  <span class="hljs-string">display_name</span>             <span class="hljs-string">=</span> <span class="hljs-string">var.db_name</span>
  <span class="hljs-string">db_workload</span>              <span class="hljs-string">=</span> <span class="hljs-string">var.db_workload</span>
  <span class="hljs-string">is_free_tier</span>             <span class="hljs-string">=</span> <span class="hljs-string">var.is_free_tier</span>
}

<span class="hljs-string">resource</span> <span class="hljs-string">"oci_core_network_security_group"</span> <span class="hljs-string">"nsg"</span> {
  <span class="hljs-string">compartment_id</span> <span class="hljs-string">=</span> <span class="hljs-string">var.compartment_ocid</span>
  <span class="hljs-string">vcn_id</span>         <span class="hljs-string">=</span> <span class="hljs-string">var.vcn_id</span>
  <span class="hljs-string">display_name</span> <span class="hljs-string">=</span> <span class="hljs-string">var.nsg_display_name</span>
}

<span class="hljs-string">resource</span> <span class="hljs-string">"oci_core_vcn"</span> <span class="hljs-string">"vcn"</span> {
  <span class="hljs-string">dns_label</span>      <span class="hljs-string">=</span> <span class="hljs-string">var.dns_label</span>
  <span class="hljs-string">cidr_block</span>     <span class="hljs-string">=</span> <span class="hljs-string">var.cidr_block</span>
  <span class="hljs-string">compartment_id</span> <span class="hljs-string">=</span> <span class="hljs-string">var.compartment_ocid</span>
  <span class="hljs-string">display_name</span>   <span class="hljs-string">=</span> <span class="hljs-string">var.display_name</span>
}

<span class="hljs-string">resource</span> <span class="hljs-string">"oci_core_subnet"</span> <span class="hljs-string">"subnet"</span> {
  <span class="hljs-string">cidr_block</span>        <span class="hljs-string">=</span> <span class="hljs-string">var.cidr_block</span>
  <span class="hljs-string">display_name</span>      <span class="hljs-string">=</span> <span class="hljs-string">var.display_name</span>
  <span class="hljs-string">compartment_id</span>    <span class="hljs-string">=</span> <span class="hljs-string">var.compartment_ocid</span>
  <span class="hljs-string">vcn_id</span>            <span class="hljs-string">=</span> <span class="hljs-string">var.vcn_id</span>
  <span class="hljs-string">route_table_id</span>    <span class="hljs-string">=</span> <span class="hljs-string">var.route_table_id</span>
  <span class="hljs-string">security_list_ids</span> <span class="hljs-string">=</span> <span class="hljs-string">var.security_list_ids</span>
  <span class="hljs-string">dhcp_options_id</span>   <span class="hljs-string">=</span> <span class="hljs-string">var.dhcp_options_id</span>
  <span class="hljs-string">dns_label</span>         <span class="hljs-string">=</span> <span class="hljs-string">var.dns_label</span>
  <span class="hljs-string">prohibit_public_ip_on_vnic</span> <span class="hljs-string">=</span> <span class="hljs-string">var.prohibit_public_ip_on_vnic</span>
  <span class="hljs-string">availability_domain</span> <span class="hljs-string">=</span> <span class="hljs-string">var.availability_domain</span>
}

<span class="hljs-string">resource</span> <span class="hljs-string">"oci_identity_compartment"</span> <span class="hljs-string">"compartment"</span> {
  <span class="hljs-string">compartment_id</span> <span class="hljs-string">=</span> <span class="hljs-string">var.tenancy_ocid</span>
  <span class="hljs-string">description</span> <span class="hljs-string">=</span> <span class="hljs-string">var.description</span>
  <span class="hljs-string">name</span> <span class="hljs-string">=</span> <span class="hljs-string">var.name</span>
  <span class="hljs-string">enable_delete</span> <span class="hljs-string">=</span> <span class="hljs-string">var.enable_delete</span>
}
</code></pre>
<h3 id="heading-google-cloud-platform-resources"><strong>Google Cloud Platform Resources</strong></h3>
<pre><code class="lang-yaml"><span class="hljs-string">resource</span> <span class="hljs-string">"google_artifact_registry_repository"</span> <span class="hljs-string">"artifact-registry-repo"</span> {
  <span class="hljs-string">location</span>      <span class="hljs-string">=</span> <span class="hljs-string">var.repository_location</span>
  <span class="hljs-string">repository_id</span> <span class="hljs-string">=</span> <span class="hljs-string">var.repository_id</span>
  <span class="hljs-string">description</span>   <span class="hljs-string">=</span> <span class="hljs-string">var.repository_description</span>
  <span class="hljs-string">format</span>        <span class="hljs-string">=</span> <span class="hljs-string">var.repository_format</span>
  <span class="hljs-string">project</span> <span class="hljs-string">=</span> <span class="hljs-string">var.project_id</span>
}

<span class="hljs-string">resource</span> <span class="hljs-string">"google_cloud_run_v2_service"</span> <span class="hljs-string">"cloud_run_service"</span> {
  <span class="hljs-string">name</span>     <span class="hljs-string">=</span> <span class="hljs-string">var.name</span>
  <span class="hljs-string">location</span> <span class="hljs-string">=</span> <span class="hljs-string">var.gcp_region</span>
  <span class="hljs-string">launch_stage</span> <span class="hljs-string">=</span> <span class="hljs-string">var.launch_stage</span>
  <span class="hljs-string">ingress</span> <span class="hljs-string">=</span> <span class="hljs-string">var.ingress</span>
  <span class="hljs-string">deletion_protection</span> <span class="hljs-string">=</span> <span class="hljs-string">var.deletion_protection</span>
  <span class="hljs-string">project</span> <span class="hljs-string">=</span> <span class="hljs-string">var.project</span>

  <span class="hljs-string">template</span> {
    <span class="hljs-string">service_account</span> <span class="hljs-string">=</span> <span class="hljs-string">var.service_account_name</span>
    <span class="hljs-string">scaling</span> {
      <span class="hljs-string">max_instance_count</span> <span class="hljs-string">=</span> <span class="hljs-string">var.maxScale</span>
      <span class="hljs-string">min_instance_count</span> <span class="hljs-string">=</span> <span class="hljs-string">var.minScale</span>
    }
    <span class="hljs-string">containers</span> {
      <span class="hljs-string">image</span> <span class="hljs-string">=</span> <span class="hljs-string">var.image</span>
      <span class="hljs-string">resources</span> {
        <span class="hljs-string">limits</span> <span class="hljs-string">=</span> {
          <span class="hljs-string">cpu</span> <span class="hljs-string">=</span> <span class="hljs-string">var.cpu</span>
          <span class="hljs-string">memory</span> <span class="hljs-string">=</span> <span class="hljs-string">var.memory</span>
        }
        <span class="hljs-string">cpu_idle</span> <span class="hljs-string">=</span> <span class="hljs-literal">true</span>
        <span class="hljs-string">startup_cpu_boost</span> <span class="hljs-string">=</span> <span class="hljs-literal">true</span>
      }

      <span class="hljs-comment"># Handle regular environment variables</span>
      <span class="hljs-string">dynamic</span> <span class="hljs-string">"env"</span> {
        <span class="hljs-string">for_each</span> <span class="hljs-string">=</span> <span class="hljs-string">var.environment_variables</span>
        <span class="hljs-string">content</span> {
          <span class="hljs-string">name</span>  <span class="hljs-string">=</span> <span class="hljs-string">env.value.name</span>
          <span class="hljs-string">value</span> <span class="hljs-string">=</span> <span class="hljs-string">env.value.value</span>
        }
      }

      <span class="hljs-comment"># Handle secret environment variables</span>
      <span class="hljs-string">dynamic</span> <span class="hljs-string">"env"</span> {
        <span class="hljs-string">for_each</span> <span class="hljs-string">=</span> <span class="hljs-string">var.secret_environment_variables</span>
        <span class="hljs-string">content</span> {
          <span class="hljs-string">name</span> <span class="hljs-string">=</span> <span class="hljs-string">env.value.name</span>
          <span class="hljs-string">value_source</span> {
            <span class="hljs-string">secret_key_ref</span> {
              <span class="hljs-string">secret</span>  <span class="hljs-string">=</span> <span class="hljs-string">env.value.secret_id</span>
              <span class="hljs-string">version</span> <span class="hljs-string">=</span> <span class="hljs-string">env.value.version</span>
            }
          }
        }
      }
    }
  }
  <span class="hljs-string">lifecycle</span> {
    <span class="hljs-string">ignore_changes</span> <span class="hljs-string">=</span> [
      <span class="hljs-comment"># Ignore changes to specific attributes, e.g., image or environment variables</span>
      <span class="hljs-string">template</span>[<span class="hljs-number">0</span>]<span class="hljs-string">.containers</span>[<span class="hljs-number">0</span>]<span class="hljs-string">.image</span>,
      <span class="hljs-string">template</span>[<span class="hljs-number">0</span>]<span class="hljs-string">.containers</span>[<span class="hljs-number">0</span>]<span class="hljs-string">.env</span>,
      <span class="hljs-string">template</span>[<span class="hljs-number">0</span>]<span class="hljs-string">.volumes</span>,
    ]
  }
}

<span class="hljs-string">resource</span> <span class="hljs-string">"google_secret_manager_secret"</span> <span class="hljs-string">"manager_secret"</span> {
  <span class="hljs-string">for_each</span> <span class="hljs-string">=</span> { <span class="hljs-attr">for secret in var.secrets :</span> <span class="hljs-string">secret.name</span> <span class="hljs-string">=&gt;</span> <span class="hljs-string">secret</span> }

  <span class="hljs-string">secret_id</span> <span class="hljs-string">=</span> <span class="hljs-string">each.value.name</span>

  <span class="hljs-string">replication</span> {
    <span class="hljs-string">user_managed</span> {
      <span class="hljs-string">replicas</span> {
        <span class="hljs-string">location</span> <span class="hljs-string">=</span> <span class="hljs-string">var.gcp_location</span>
      }
      <span class="hljs-string">replicas</span> {
        <span class="hljs-string">location</span> <span class="hljs-string">=</span> <span class="hljs-string">var.gcp_replication_location</span>
      }
    }
  }
}

<span class="hljs-comment"># Secret versions</span>
<span class="hljs-string">resource</span> <span class="hljs-string">"google_secret_manager_secret_version"</span> <span class="hljs-string">"secret_versions"</span> {
  <span class="hljs-string">for_each</span> <span class="hljs-string">=</span> { <span class="hljs-attr">for secret in var.secrets :</span> <span class="hljs-string">secret.name</span> <span class="hljs-string">=&gt;</span> <span class="hljs-string">secret</span> }

  <span class="hljs-string">secret</span>      <span class="hljs-string">=</span> <span class="hljs-string">google_secret_manager_secret.manager_secret</span>[<span class="hljs-string">each.key</span>]<span class="hljs-string">.id</span>
  <span class="hljs-string">secret_data</span> <span class="hljs-string">=</span> <span class="hljs-string">each.value.value</span>
  <span class="hljs-string">deletion_policy</span> <span class="hljs-string">=</span> <span class="hljs-string">var.deletion_policy</span>

  <span class="hljs-string">lifecycle</span> {
    <span class="hljs-string">create_before_destroy</span> <span class="hljs-string">=</span> <span class="hljs-literal">true</span>
  }
}
</code></pre>
<p>Our Terraform implementation follows several infrastructure-as-code best practices:</p>
<ol>
<li><p><strong>Variable Parameterization</strong>: Everything is parameterized for flexibility across environments</p>
</li>
<li><p><strong>Resource Isolation</strong>: Clear separation between network, compute, and storage resources</p>
</li>
<li><p><strong>Lifecycle Management</strong>: Strategic use of lifecycle blocks to prevent unnecessary resource updates</p>
</li>
<li><p><strong>Secret Handling</strong>: Using Secret Manager properly instead of embedding secrets in Terraform files</p>
</li>
<li><p><strong>Cross-Cloud Management</strong>: Single codebase managing both OCI and GCP resources</p>
</li>
</ol>
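<p>As a concrete illustration of the parameterization above, the <code>secrets</code> input consumed by the <code>for_each</code> loops can be declared along these lines (a sketch only; the descriptions and the <code>ABANDON</code> default are assumptions, not our exact definitions):</p>
<pre><code class="lang-plaintext"># variables.tf (sketch)
variable "secrets" {
  description = "Secrets to create, as name/value pairs"
  type = list(object({
    name  = string
    value = string
  }))
  sensitive = true
}

variable "gcp_location" {
  description = "Primary replica location, e.g. europe-west1"
  type        = string
}

variable "gcp_replication_location" {
  description = "Secondary replica location for user-managed replication"
  type        = string
}

variable "deletion_policy" {
  description = "Deletion policy applied to secret versions"
  type        = string
  default     = "ABANDON"
}
</code></pre>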
<h2 id="heading-observability-and-monitoring-strategy"><strong>Observability and Monitoring Strategy</strong></h2>
<p>For monitoring and observability, we leverage Google Cloud's native capabilities:</p>
<ol>
<li><p><strong>Cloud Monitoring</strong>: Provides real-time metrics on our Cloud Run instances, including:</p>
<ul>
<li><p>Request latency and throughput</p>
</li>
<li><p>Instance count and CPU utilization</p>
</li>
<li><p>Memory usage and garbage collection metrics</p>
</li>
<li><p>Error rates and response codes</p>
</li>
</ul>
</li>
<li><p><strong>Cloud Logging</strong>: Centralized logging solution that captures:</p>
<ul>
<li><p>Authentication events and failures</p>
</li>
<li><p>Performance bottlenecks</p>
</li>
<li><p>Application errors and exceptions</p>
</li>
<li><p>Infrastructure changes</p>
</li>
</ul>
</li>
</ol>
<p>This approach takes advantage of the built-in integration between Cloud Run and Google's operations suite, eliminating the need for custom agents or complex configurations. For our Oracle ATP database, we use OCI's monitoring capabilities while forwarding critical alerts to our centralized monitoring solution.</p>
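<p>As an example, recent error-level entries for the customer-facing instance can be pulled straight from Cloud Logging; the service name and filter below are illustrative:</p>
<pre><code class="lang-bash"># Last 20 error-level log entries for the external Cloud Run service
gcloud logging read \
  'resource.type="cloud_run_revision" AND resource.labels.service_name="auth-ext" AND severity&gt;=ERROR' \
  --project=&lt;project-id&gt; --limit=20 --format=json
</code></pre>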
<h2 id="heading-business-value-delivered"><strong>Business Value Delivered</strong></h2>
<h3 id="heading-empowering-development-teams"><strong>Empowering Development Teams</strong></h3>
<p>Our Keycloak implementation has transformed how our development teams approach authentication and authorization. Previously, each team implemented their own authentication logic, creating inconsistencies, security vulnerabilities, and maintenance overhead. Now:</p>
<ul>
<li><p>Frontend developers focus exclusively on creating compelling user experiences</p>
</li>
<li><p>Backend developers concentrate on business logic and domain-specific functionality</p>
</li>
<li><p>Security protocols are consistently implemented across all applications</p>
</li>
<li><p>New applications can be integrated into our authentication ecosystem in days, not weeks</p>
</li>
</ul>
<p>As our lead application developer noted: "Having a centralized identity provider managed by the platform team means we can deliver business features faster while being more secure."</p>
<h3 id="heading-enhanced-end-user-experience"><strong>Enhanced End-User Experience</strong></h3>
<p>For end users, our implementation provides:</p>
<ul>
<li><p><strong>Single Sign-On</strong>: Users authenticate once to access all our services</p>
</li>
<li><p><strong>Self-Service Profile Management</strong>: Users control their own information</p>
</li>
<li><p><strong>Flexible Multi-Factor Authentication</strong>: Options including SMS, email, and authenticator apps</p>
</li>
<li><p><strong>Cross-Platform Consistency</strong>: The same authentication flow on web, mobile, and desktop</p>
</li>
<li><p><strong>Progressive Security</strong>: Step-up authentication for sensitive operations</p>
</li>
</ul>
<p>These capabilities have measurably improved our user retention and engagement metrics, with a 23% reduction in abandoned authentication attempts.</p>
<h3 id="heading-operational-efficiency"><strong>Operational Efficiency</strong></h3>
<p>The dual-instance architecture delivers significant operational benefits:</p>
<ul>
<li><p><strong>Reduced Attack Surface</strong>: Administrative functions are completely isolated from public access</p>
</li>
<li><p><strong>Independent Scaling</strong>: Resources are allocated based on actual usage patterns</p>
</li>
<li><p><strong>Controlled Change Management</strong>: Administrative changes can be made without affecting customer-facing systems</p>
</li>
<li><p><strong>Simplified Compliance</strong>: Clear separation simplifies audit requirements and reporting</p>
</li>
</ul>
<h3 id="heading-cost-optimization"><strong>Cost Optimization</strong></h3>
<p>Our approach delivers substantial cost advantages:</p>
<ul>
<li><p><strong>Elimination of Per-User Licensing</strong>: Substantial savings compared to equivalent commercial solutions</p>
</li>
<li><p><strong>Efficient Resource Utilization</strong>: Cloud Run's serverless model ensures we only pay for resources when actively processing authentication requests</p>
</li>
<li><p><strong>Development Efficiency</strong>: Centralizing authentication reduces development costs across projects</p>
</li>
<li><p><strong>Support Cost Reduction</strong>: Fewer identity-related support tickets since implementation</p>
</li>
</ul>
<h2 id="heading-implementation-challenges-and-solutions"><strong>Implementation Challenges and Solutions</strong></h2>
<p>Our journey wasn't without obstacles:</p>
<h3 id="heading-oracle-atp-connection-stability"><strong>Oracle ATP Connection Stability</strong></h3>
<p>Initially, we experienced intermittent connection issues between Keycloak and OCI ATP. We resolved this by:</p>
<ol>
<li><p>Implementing connection pooling optimizations</p>
</li>
<li><p>Customizing Oracle JDBC driver settings</p>
</li>
<li><p>Configuring appropriate timeout and retry logic</p>
</li>
</ol>
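<p>In practice, these tunings are applied through Keycloak's database options; a sketch of the relevant container environment variables follows (the pool sizes, wallet path, and URL properties are illustrative, not our production settings):</p>
<pre><code class="lang-plaintext"># Keycloak (Quarkus) connection pool sizing
KC_DB_POOL_INITIAL_SIZE=5
KC_DB_POOL_MIN_SIZE=5
KC_DB_POOL_MAX_SIZE=20
# Oracle JDBC timeouts passed as URL connection properties (milliseconds)
KC_DB_URL=jdbc:oracle:thin:@&lt;tns-alias&gt;?TNS_ADMIN=/opt/keycloak/wallet&amp;oracle.net.CONNECT_TIMEOUT=10000&amp;oracle.jdbc.ReadTimeout=60000
</code></pre>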
<h3 id="heading-cold-start-performance"><strong>Cold Start Performance</strong></h3>
<p>Cloud Run's serverless nature initially caused slow authentication during traffic spikes. We addressed this by:</p>
<ol>
<li><p>Implementing min-instances=1 for the external instance during business hours</p>
</li>
<li><p>Enabling CPU boost for faster container startup</p>
</li>
<li><p>Optimizing JVM heap settings for faster initialization</p>
</li>
</ol>
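<p>The first two mitigations map directly onto Cloud Run flags, so warm capacity can be toggled on a schedule; a sketch (the service name matches the deploy command shown later, and the scheduling mechanism itself is left out):</p>
<pre><code class="lang-bash"># Keep one warm instance and enable startup CPU boost during business hours
gcloud run services update auth-ext \
  --region=&lt;region&gt; --project=&lt;project-id&gt; \
  --min-instances=1 --cpu-boost

# Scale back to zero overnight
gcloud run services update auth-ext \
  --region=&lt;region&gt; --project=&lt;project-id&gt; \
  --min-instances=0
</code></pre>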
<h2 id="heading-automated-cicd-pipeline"><strong>Automated CI/CD Pipeline</strong></h2>
<p>A critical aspect of our implementation is the fully automated deployment pipeline. We've established a GitHub Actions workflow that handles the entire process from build to deployment:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">name:</span> <span class="hljs-string">Auth</span> <span class="hljs-string">Client</span> <span class="hljs-string">Deployment</span>

<span class="hljs-attr">on:</span>
  <span class="hljs-attr">push:</span>
    <span class="hljs-attr">branches:</span> [ <span class="hljs-string">keycloak-client</span> ]

<span class="hljs-attr">env:</span>
  <span class="hljs-attr">GCP_WIF_PROJECT_ID:</span> <span class="hljs-string">"project-wif-843"</span>
  <span class="hljs-attr">GCP_WIF_POOL:</span> <span class="hljs-string">"auth-server-gh-pool"</span>
  <span class="hljs-attr">GCP_WIF_PROVIDER:</span> <span class="hljs-string">"auth-server-gh-prov"</span>
  <span class="hljs-attr">PROJECT_LOCATION:</span> <span class="hljs-string">europe-west1</span>
  <span class="hljs-attr">PROJECT_ID:</span> <span class="hljs-string">project-prod843</span>
  <span class="hljs-attr">GCP_ARTIFACT_REGISTRY_NAME:</span> <span class="hljs-string">project-repository</span>
  <span class="hljs-attr">DOCKER_IMAGE:</span> <span class="hljs-string">keycloak-oracle-client</span>
  <span class="hljs-comment"># Referenced by the auth step below (values are placeholders)</span>
  <span class="hljs-attr">GCP_WIF_PROJECT_NUMBER:</span> <span class="hljs-string">"&lt;gcp-project-number&gt;"</span>
  <span class="hljs-attr">GCP_WIF_SA:</span> <span class="hljs-string">"&lt;wif-service-account-name&gt;"</span>

<span class="hljs-attr">jobs:</span>
  <span class="hljs-attr">deploy:</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">"Deploy Client Auth Server"</span>
    <span class="hljs-attr">runs-on:</span> <span class="hljs-string">ubuntu-latest</span>
    <span class="hljs-attr">environment:</span> <span class="hljs-string">prod</span>
    <span class="hljs-attr">permissions:</span>
      <span class="hljs-attr">contents:</span> <span class="hljs-string">'read'</span>
      <span class="hljs-attr">id-token:</span> <span class="hljs-string">'write'</span>

    <span class="hljs-attr">steps:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Checkout</span>
        <span class="hljs-attr">uses:</span> <span class="hljs-string">actions/checkout@v4</span>

      <span class="hljs-comment"># Authentication using Workload Identity Federation</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">id:</span> <span class="hljs-string">'auth'</span>
        <span class="hljs-attr">name:</span> <span class="hljs-string">'Authenticate to Google Cloud'</span>
        <span class="hljs-attr">uses:</span> <span class="hljs-string">'google-github-actions/auth@v2'</span>
        <span class="hljs-attr">with:</span>
          <span class="hljs-attr">create_credentials_file:</span> <span class="hljs-literal">true</span>
          <span class="hljs-attr">workload_identity_provider:</span> <span class="hljs-string">'projects/$<span class="hljs-template-variable">{{ env.GCP_WIF_PROJECT_NUMBER }}</span>/locations/global/workloadIdentityPools/$<span class="hljs-template-variable">{{ env.GCP_WIF_POOL }}</span>/providers/$<span class="hljs-template-variable">{{ env.GCP_WIF_PROVIDER }}</span>'</span>
          <span class="hljs-attr">service_account:</span> <span class="hljs-string">'$<span class="hljs-template-variable">{{ env.GCP_WIF_SA }}</span>@$<span class="hljs-template-variable">{{ env.GCP_WIF_PROJECT_ID }}</span>.iam.gserviceaccount.com'</span>

      <span class="hljs-comment"># Semantic versioning automation</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Retrieve</span> <span class="hljs-string">information</span> <span class="hljs-string">on</span> <span class="hljs-string">existing</span> <span class="hljs-string">releases</span>
        <span class="hljs-attr">id:</span> <span class="hljs-string">get_release_info</span>
        <span class="hljs-attr">run:</span> <span class="hljs-string">|
          RELEASE_TAG=$(curl -L -H "Accept: application/vnd.github+json" -H "Authorization: Bearer ${{ secrets.GITHUB_TOKEN }}" -H "X-GitHub-Api-Version: 2022-11-28" https://api.github.com/repos/${{ github.repository }}/releases/latest |  jq -r '.tag_name')
          MAJOR=$(echo $RELEASE_TAG | awk -F. '{print $1}')
          MINOR=$(echo $RELEASE_TAG | awk -F. '{print $2}')
          PATCH=$(echo $RELEASE_TAG | awk -F. '{print $3}')
          PATCH=$((PATCH + 1))
          NEXT_VERSION="${MAJOR}.${MINOR}.${PATCH}"
          NEXT_VERSION=$(echo $NEXT_VERSION | sed 's/^v//')
          echo "NEXT_VERSION=${NEXT_VERSION}" &gt;&gt; $GITHUB_ENV
</span>
      <span class="hljs-comment"># Container build and deployment</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Build</span> <span class="hljs-string">Container</span>
        <span class="hljs-attr">working-directory:</span> <span class="hljs-string">./keycloak-oci-atp/dev</span>
        <span class="hljs-attr">run:</span> <span class="hljs-string">|-
          docker build -t "${PROJECT_LOCATION}-docker.pkg.dev/${PROJECT_ID}/${GCP_ARTIFACT_REGISTRY_NAME}/${DOCKER_IMAGE}:${{ env.NEXT_VERSION }}" .
</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Deploy</span> <span class="hljs-string">to</span> <span class="hljs-string">cloud</span> <span class="hljs-string">run</span>
        <span class="hljs-attr">working-directory:</span> <span class="hljs-string">./keycloak-oci-atp/dev</span>
        <span class="hljs-attr">run:</span> <span class="hljs-string">|-
          gcloud run deploy auth-ext \
            --image "${PROJECT_LOCATION}-docker.pkg.dev/${PROJECT_ID}/${GCP_ARTIFACT_REGISTRY_NAME}/${DOCKER_IMAGE}:${{ env.NEXT_VERSION }}" \
            --platform managed \
            --project ${PROJECT_ID} \
            --region ${PROJECT_LOCATION} \
            --allow-unauthenticated \
            --memory=2048Mi \
            --cpu 2 \
            --min-instances=0 \
            --max-instances=3 \
            --service-account=keycloak-client-sa@${PROJECT_ID}.iam.gserviceaccount.com \
            --update-secrets=KC_DB_PASSWORD=KC_DB_PASSWORD:latest \
            --timeout=3600s \
            --cpu-boost \
            --port=8080 \
            --ingress=all \
            --execution-environment=gen2</span>
</code></pre>
<p>This pipeline provides several key advantages:</p>
<ol>
<li><p><strong>Automated Versioning</strong>: Semantic version increments with each deployment</p>
</li>
<li><p><strong>Zero-Touch Deployment</strong>: Complete automation from commit to the development environment, following pair testing on the feature branch and merge</p>
</li>
<li><p><strong>Secure Authentication</strong>: Using Google's Workload Identity Federation for keyless authentication</p>
</li>
<li><p><strong>Secret Management</strong>: Sensitive configuration is managed through Cloud Secret Manager</p>
</li>
<li><p><strong>Immutable Deployments</strong>: Each release creates a versioned container image</p>
</li>
</ol>
<h2 id="heading-integration-with-hashicorp-vault"><strong>Integration with HashiCorp Vault</strong></h2>
<p>Our Keycloak implementation connects directly with our HashiCorp Vault infrastructure through OIDC, creating a comprehensive security ecosystem:</p>
<ol>
<li><p><strong>Identity Federation</strong>: Keycloak serves as the identity provider for Vault</p>
</li>
<li><p><strong>Certificate Management</strong>: Vault manages SSL/TLS certificates for applications</p>
</li>
<li><p><strong>Secret Rotation</strong>: Automated database credential rotation through Vault</p>
</li>
<li><p><strong>Centralized Policy Management</strong>: Access policies coordinated between systems</p>
</li>
</ol>
<p>This integration allows us to maintain a single source of truth for identity while leveraging Vault's advanced secret management capabilities.</p>
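<p>On the Vault side, the federation boils down to enabling the OIDC auth method against the Keycloak realm's discovery endpoint; a minimal sketch, in which the realm, client, and role names are assumptions:</p>
<pre><code class="lang-bash">vault auth enable oidc

vault write auth/oidc/config \
  oidc_discovery_url="https://&lt;keycloak-host&gt;/realms/&lt;realm&gt;" \
  oidc_client_id="vault" \
  oidc_client_secret="&lt;client-secret&gt;" \
  default_role="default"

vault write auth/oidc/role/default \
  user_claim="sub" \
  allowed_redirect_uris="https://&lt;vault-host&gt;/ui/vault/auth/oidc/oidc/callback" \
  policies="default"
</code></pre>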
<h2 id="heading-multi-cloud-secret-management-approach"><strong>Multi-Cloud Secret Management Approach</strong></h2>
<p>We've implemented a hybrid approach to secrets management that leverages the strengths of both cloud-native and open-source solutions:</p>
<ol>
<li><p><strong>Google Secret Manager</strong>: Used for storing Keycloak admin passwords and other GCP-specific credentials, benefiting from tight integration with GCP IAM and Cloud Run</p>
</li>
<li><p><strong>HashiCorp Vault</strong>: Handles more complex secret management requirements, with Keycloak providing OIDC-based authentication for accessing Vault</p>
</li>
</ol>
<p>This approach gives us the advantages of cloud-native integration where appropriate while maintaining flexibility through open standards. By using Keycloak as the identity provider for Vault, we maintain a single source of truth for authentication while leveraging each tool's strengths.</p>
<h2 id="heading-looking-forward"><strong>Looking Forward</strong></h2>
<p>With our foundation now established, we're exploring several enhancements:</p>
<ol>
<li><p><strong>Federation with External Identity Providers</strong>: Allowing customers to sign in with existing corporate or social identities such as Google and Facebook</p>
</li>
<li><p><strong>Advanced Threat Protection</strong>: Implementing risk-based authentication flows</p>
</li>
<li><p><strong>Deeper Analytics</strong>: Gaining insights into authentication patterns to proactively address potential issues</p>
</li>
<li><p><strong>Custom Authentication Flows</strong>: Building domain-specific authentication experiences</p>
</li>
<li><p><strong>Disaster Recovery Planning</strong>: Developing cross-region failover capabilities to further enhance availability</p>
</li>
</ol>
<h2 id="heading-breaking-free-from-vendor-lock-in"><strong>Breaking Free from Vendor Lock-in</strong></h2>
<p>One of the most significant business advantages of our approach is the reduction of vendor lock-in. By combining open-source technologies (Keycloak) with cloud-agnostic containerization and serverless deployment patterns:</p>
<ol>
<li><p><strong>Flexible Infrastructure</strong>: Our solution can run on any cloud provider or on-premises environment that supports containers</p>
</li>
<li><p><strong>Avoidance of Proprietary APIs</strong>: We rely on standard protocols and interfaces rather than provider-specific services</p>
</li>
<li><p><strong>Cost Negotiation Leverage</strong>: The ability to migrate provides negotiating power with cloud providers</p>
</li>
<li><p><strong>Risk Mitigation</strong>: Protection against service discontinuation or unfavorable changes to service terms</p>
</li>
</ol>
<h2 id="heading-cost-benefits-of-open-source-serverless"><strong>Cost Benefits of Open Source + Serverless</strong></h2>
<p>Our implementation delivers substantial cost optimization through the synergy of open source and serverless technologies:</p>
<ol>
<li><p><strong>Elimination of Licensing Costs</strong>: Open-source Keycloak removes per-user fees typical of commercial identity solutions</p>
</li>
<li><p><strong>Pay-per-Use Infrastructure</strong>: Cloud Run's serverless model means we only pay for actual authentication traffic</p>
</li>
<li><p><strong>Right-sized Resource Allocation</strong>: Independent scaling for admin and client instances optimizes resource utilization</p>
</li>
<li><p><strong>Reduced Operational Overhead</strong>: Managed services minimize the need for dedicated infrastructure management</p>
</li>
<li><p><strong>Development Efficiency</strong>: Standardized authentication reduces per-application development costs</p>
</li>
</ol>
<p>This model transforms identity from a fixed cost to a variable cost that scales with actual usage.</p>
<h2 id="heading-conclusion"><strong>Conclusion</strong></h2>
<p>The combination of Keycloak, containerization, and serverless deployment represents a modern approach to identity that delivers enterprise-grade functionality without vendor lock-in or prohibitive licensing costs. This architecture not only meets current requirements but also provides the flexibility to adapt to future needs without major reimplementation.</p>
<p>Most importantly, by handling the complexities of authentication and authorization through this centralized service, we've empowered our development teams to focus on what truly matters: creating business value through domain-specific applications and experiences.</p>
<h2 id="heading-kipcklwqxcpcklwqxcpcklwqxcpcklwqxcpcklwqxcpcklwqxcpcklwqxcoqkg"><strong>*********************</strong></h2>
<p><em>Specialising in Cloud Architecture and Application Modernisation, Saha Merlin is a Cloud Solutions Architect and DevSecOps Specialist who helps organizations build scalable, secure, and sustainable infrastructure. With six years of specialized experience in highly regulated industries—split equally between insurance and finance—he brings deep understanding of compliance requirements and industry-specific challenges to his technical implementations.</em></p>
<p><em>His expertise spans various deployment models including Container-as-a-Service (CaaS), Infrastructure-as-a-Service (IaaS), and serverless platforms that drive business outcomes through technical excellence. He strategically implements open source technologies, particularly when SaaS solutions fall short or when greater control and autonomy are essential to meeting business requirements.</em></p>
<p><em>Saha integrates DevSecOps practices, Green IT principles to minimize environmental impact, and Generative AI to accelerate innovation. With a solid foundation in Software Engineering and nine years of diverse industry experience, he designs cloud-native solutions that align with both industry standards and emerging technological trends.</em></p>
]]></content:encoded></item><item><title><![CDATA[Securing Digital Assets: Implementing Cost-Effective SSL Encryption in Kubernetes Environments]]></title><description><![CDATA[In today's digital landscape, cybersecurity is not just a technical requirement—it's a critical business imperative. This comprehensive guide demonstrates how organizations can leverage Let's Encrypt and Cert-Manager to implement robust SSL encryptio...]]></description><link>https://merlin.microworka.com/securing-digital-assets-implementing-cost-effective-ssl-encryption-in-kubernetes-environments</link><guid isPermaLink="true">https://merlin.microworka.com/securing-digital-assets-implementing-cost-effective-ssl-encryption-in-kubernetes-environments</guid><category><![CDATA[k8s]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[Security]]></category><category><![CDATA[Let's Encrypt]]></category><category><![CDATA[gke]]></category><category><![CDATA[Google]]></category><dc:creator><![CDATA[Merlin Saha]]></dc:creator><pubDate>Mon, 12 May 2025 16:40:46 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1715518093827/6c757072-392c-4199-80fa-4cbbc7306d2c.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In today's digital landscape, cybersecurity is not just a technical requirement—it's a critical business imperative. This comprehensive guide demonstrates how organizations can leverage Let's Encrypt and Cert-Manager to implement robust SSL encryption in Kubernetes clusters, reducing security risks while optimizing operational costs.</p>
<h2 id="heading-the-business-case-for-automated-ssl-encryption">The Business Case for Automated SSL Encryption</h2>
<p>Modern enterprises face significant challenges in maintaining secure digital infrastructure:</p>
<ul>
<li><p><strong>Security Risks</strong>: Unencrypted connections expose sensitive data to potential breaches</p>
</li>
<li><p><strong>Compliance Demands</strong>: Many industries require continuous HTTPS protection</p>
</li>
<li><p><strong>Cost Pressures</strong>: Traditional SSL certificates can be expensive and complex to manage</p>
</li>
</ul>
<p>Let's Encrypt offers a game-changing solution: free, automated SSL certificates that integrate seamlessly with Kubernetes environments.</p>
<h2 id="heading-technical-dive-ssl-implementation">Technical Dive: SSL Implementation</h2>
<h3 id="heading-prerequisites">Prerequisites</h3>
<p>Before diving into the implementation, ensure you have:</p>
<ul>
<li><p>A Kubernetes cluster (we'll use Google Kubernetes Engine as our reference architecture)</p>
</li>
<li><p>Configured <code>kubectl</code> command-line tool</p>
</li>
<li><p>A domain name mapped to your cluster's load balancer IP</p>
</li>
</ul>
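<p>It is worth confirming the DNS mapping before starting, since the HTTP-01 challenge used below depends on the domain reaching the cluster; the address name here matches the Ingress annotation used later:</p>
<pre><code class="lang-bash"># The reserved global static IP behind the load balancer
gcloud compute addresses describe lb-static-ip --global --format='value(address)'

# The domain should resolve to the same address
dig +short &lt;your-domain.com&gt;
</code></pre>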
<h3 id="heading-implementation">Implementation</h3>
<h4 id="heading-step-1-cluster-authentication-and-preparation">Step 1: Cluster Authentication and Preparation</h4>
<p>Authenticate and connect to your GKE cluster using the following commands:</p>
<pre><code class="lang-bash"><span class="hljs-string">gcloud</span> <span class="hljs-string">auth</span> <span class="hljs-string">login</span>
<span class="hljs-string">sudo</span> <span class="hljs-string">apt-get</span> <span class="hljs-string">install</span> <span class="hljs-string">google-cloud-sdk-gke-gcloud-auth-plugin</span>
<span class="hljs-string">gcloud</span> <span class="hljs-string">container</span> <span class="hljs-string">clusters</span> <span class="hljs-string">get-credentials</span> <span class="hljs-string">&lt;cluster-name&gt;</span> <span class="hljs-string">--zone</span> <span class="hljs-string">&lt;cluster-location&gt;</span> <span class="hljs-string">--project</span> <span class="hljs-string">&lt;project-id&gt;</span>
<span class="hljs-string">kubectl</span> <span class="hljs-string">get</span> <span class="hljs-string">nodes</span>
</code></pre>
<h4 id="heading-step-2-deploy-cert-manager-the-ssl-automation-engine">Step 2: Deploy Cert-Manager - The SSL Automation Engine</h4>
<p>Cert-Manager is a crucial Kubernetes addon that automates TLS certificate management:</p>
<pre><code class="lang-bash"><span class="hljs-string">kubectl</span> <span class="hljs-string">apply</span> <span class="hljs-string">-f</span> <span class="hljs-string">https://github.com/cert-manager/cert-manager/releases/download/v1.14.5/cert-manager.yaml</span>
<span class="hljs-string">kubectl</span> <span class="hljs-string">-n</span> <span class="hljs-string">cert-manager</span> <span class="hljs-string">get</span> <span class="hljs-string">all</span>
</code></pre>
<h3 id="heading-staged-rollout-strategy">Staged Rollout Strategy</h3>
<p>We'll implement a two-phase deployment to minimize risks:</p>
<ol>
<li><p><strong>Staging Environment</strong></p>
<ul>
<li><p>Uses Let's Encrypt's staging server</p>
</li>
<li><p>Allows testing without rate limits</p>
</li>
<li><p>Validates configuration before production deployment</p>
</li>
</ul>
</li>
<li><p><strong>Production Environment</strong></p>
<ul>
<li><p>Switches to Let's Encrypt's production certificate</p>
</li>
<li><p>Enables full, trusted SSL protection</p>
</li>
</ul>
</li>
</ol>
<p>Deploy in the staging environment:</p>
<pre><code class="lang-yaml"><span class="hljs-comment"># issuer-lets-encrypt-staging.yaml</span>
<span class="hljs-attr">apiVersion:</span> <span class="hljs-string">cert-manager.io/v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Issuer</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">letsencrypt-staging</span>
  <span class="hljs-attr">namespace:</span> <span class="hljs-string">&lt;your-app-namespace&gt;</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">acme:</span>
    <span class="hljs-attr">server:</span> <span class="hljs-string">https://acme-staging-v02.api.letsencrypt.org/directory</span>
    <span class="hljs-attr">email:</span> <span class="hljs-string">&lt;your-email&gt;</span>
    <span class="hljs-attr">privateKeySecretRef:</span>
      <span class="hljs-attr">name:</span> <span class="hljs-string">letsencrypt-staging</span>
    <span class="hljs-attr">solvers:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">http01:</span>
          <span class="hljs-attr">ingress:</span>
            <span class="hljs-attr">name:</span> <span class="hljs-string">web-ingress</span>
</code></pre>
<p>Before reconfiguring the Ingress, create an empty Secret that will hold your SSL certificate:</p>
<pre><code class="lang-yaml"><span class="hljs-comment"># secret.yaml</span>
<span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Secret</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">web-ssl</span>
  <span class="hljs-attr">namespace:</span> <span class="hljs-string">&lt;your-app-namespace&gt;</span>
<span class="hljs-attr">type:</span> <span class="hljs-string">kubernetes.io/tls</span>
<span class="hljs-attr">stringData:</span>
  <span class="hljs-attr">tls.key:</span> <span class="hljs-string">""</span>
  <span class="hljs-attr">tls.crt:</span> <span class="hljs-string">""</span>
</code></pre>
<p>Apply the empty Secret and the staging issuer, then verify the issuer's status:</p>
<pre><code class="lang-bash">kubectl apply -f ssl/secret.yaml
kubectl apply -f issuer-lets-encrypt-staging.yaml
kubectl describe issuers.cert-manager.io letsencrypt-staging -n &lt;your-app-namespace&gt;
</code></pre>
<p><strong>Step 3: Create the Ingress resource</strong></p>
<pre><code class="lang-yaml"><span class="hljs-comment"># ingress.yml</span>
<span class="hljs-attr">apiVersion:</span> <span class="hljs-string">networking.k8s.io/v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Ingress</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">web-ingress</span>
  <span class="hljs-attr">namespace:</span> <span class="hljs-string">&lt;your-app-namespace&gt;</span>
  <span class="hljs-attr">annotations:</span>
    <span class="hljs-attr">kubernetes.io/ingress.allow-http:</span> <span class="hljs-string">"true"</span>
    <span class="hljs-attr">kubernetes.io/ingress.global-static-ip-name:</span> <span class="hljs-string">"lb-static-ip"</span>
    <span class="hljs-attr">cert-manager.io/issuer:</span> <span class="hljs-string">letsencrypt-staging</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">tls:</span>
   <span class="hljs-bullet">-</span> <span class="hljs-attr">secretName:</span> <span class="hljs-string">web-ssl</span>
     <span class="hljs-attr">hosts:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">&lt;your-domain.com&gt;</span>
  <span class="hljs-attr">rules:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">host:</span> <span class="hljs-string">&lt;your-domain.com&gt;</span>
      <span class="hljs-attr">http:</span>
        <span class="hljs-attr">paths:</span>
          <span class="hljs-bullet">-</span> <span class="hljs-attr">path:</span> <span class="hljs-string">/*</span>
            <span class="hljs-attr">pathType:</span> <span class="hljs-string">ImplementationSpecific</span>
            <span class="hljs-attr">backend:</span>
              <span class="hljs-attr">service:</span>
                <span class="hljs-attr">name:</span> <span class="hljs-string">app</span>
                <span class="hljs-attr">port:</span>
                  <span class="hljs-attr">number:</span> <span class="hljs-number">80</span>
</code></pre>
<p>Test that your application is served over HTTPS (propagation can take around five minutes):</p>
<pre><code class="lang-bash">curl -v --insecure https://yourdomain.com
</code></pre>
<p><strong>Step 4: Deploy in production</strong></p>
<p>Once staging validation succeeds, transition to the production Let's Encrypt server:</p>
<pre><code class="lang-yaml"><span class="hljs-comment"># issuer-lets-encrypt-production.yaml</span>
<span class="hljs-attr">apiVersion:</span> <span class="hljs-string">cert-manager.io/v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Issuer</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">letsencrypt-production</span>
  <span class="hljs-attr">namespace:</span> <span class="hljs-string">&lt;your-app-namespace&gt;</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">acme:</span>
    <span class="hljs-attr">server:</span> <span class="hljs-string">https://acme-v02.api.letsencrypt.org/directory</span>
    <span class="hljs-attr">email:</span> <span class="hljs-string">&lt;your-email&gt;</span>
    <span class="hljs-attr">privateKeySecretRef:</span>
      <span class="hljs-attr">name:</span> <span class="hljs-string">letsencrypt-production</span>
    <span class="hljs-attr">solvers:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">http01:</span>
          <span class="hljs-attr">ingress:</span>
            <span class="hljs-attr">name:</span> <span class="hljs-string">web-ingress</span>
</code></pre>
<p><em>Switch SSL to Production</em></p>
<pre><code class="lang-bash">kubectl apply -f issuer-lets-encrypt-production.yaml
kubectl annotate ingress web-ingress cert-manager.io/issuer=letsencrypt-production --overwrite -n &lt;your-app-namespace&gt;
curl -v https://yourdomain.com <span class="hljs-comment"># wait at least 5 minutes before testing</span>
</code></pre>
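<p>Cert-Manager's progress can be verified directly; with ingress-shim, a Certificate resource named after the Secret (<code>web-ssl</code>) is created automatically and should report <code>Ready=True</code> once the challenge completes:</p>
<pre><code class="lang-bash">kubectl get certificate web-ssl -n &lt;your-app-namespace&gt;
kubectl describe certificate web-ssl -n &lt;your-app-namespace&gt;

# Inspect the issuer and validity dates of the served certificate
openssl s_client -connect &lt;your-domain.com&gt;:443 -servername &lt;your-domain.com&gt; &lt;/dev/null 2&gt;/dev/null \
  | openssl x509 -noout -issuer -dates
</code></pre>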
<p><strong>Congratulations</strong></p>
<h2 id="heading-key-business-benefits">Key Business Benefits</h2>
<ol>
<li><p><strong>Cost Optimization</strong>: Zero-cost SSL certificates</p>
</li>
<li><p><strong>Automated Management</strong>: Automatic certificate renewal</p>
</li>
<li><p><strong>Reduced Operational Overhead</strong>: Simplified SSL infrastructure</p>
</li>
<li><p><strong>Enhanced Security Posture</strong>: Continuous HTTPS protection</p>
</li>
</ol>
<h2 id="heading-operational-insights">Operational Insights</h2>
<ul>
<li><p>Cert-Manager automatically handles certificate renewal</p>
</li>
<li><p>You'll receive email notifications 30 days before certificate expiration</p>
</li>
<li><p>The entire process is repeatable across different Kubernetes environments</p>
</li>
</ul>
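<p>For an out-of-band safety net, a lightweight expiry probe can run from any cron job or CI pipeline. A sketch using <code>openssl</code> (the domain is a placeholder; 1728000 seconds is 20 days):</p>
<pre><code class="lang-bash"># Print the current expiry date of the served certificate
echo | openssl s_client -connect yourdomain.com:443 -servername yourdomain.com | openssl x509 -noout -enddate

# Exit non-zero (and alert) if the certificate expires within 20 days,
# which would mean automatic renewal has not fired
echo | openssl s_client -connect yourdomain.com:443 -servername yourdomain.com | openssl x509 -noout -checkend 1728000 || echo "renewal overdue"
</code></pre>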
<h2 id="heading-conclusion">Conclusion</h2>
<p>Implementing Let's Encrypt SSL in Kubernetes no longer needs to be a complex technical undertaking. With Cert-Manager handling issuance and renewal, organizations can strengthen their security posture while keeping operational overhead low.</p>
<p><strong>Pro Tip</strong>: Always test in staging first and monitor your certificate's status to ensure uninterrupted service.</p>
]]></content:encoded></item><item><title><![CDATA[Auto-Unsealing HashiCorp Vault with GCP KMS and Deploying to Cloud Run]]></title><description><![CDATA[Introduction
HashiCorp Vault is a powerful secrets management tool that helps organizations secure, store, and control access to tokens, passwords, certificates, and encryption keys. One challenge with managing Vault is the need to unseal it after ea...]]></description><link>https://merlin.microworka.com/auto-unsealing-hashicorp-vault-with-gcp-kms-and-deploying-to-cloud-run</link><guid isPermaLink="true">https://merlin.microworka.com/auto-unsealing-hashicorp-vault-with-gcp-kms-and-deploying-to-cloud-run</guid><category><![CDATA[Vault]]></category><category><![CDATA[hashicorp]]></category><category><![CDATA[hashicorp-vault]]></category><category><![CDATA[GCP]]></category><category><![CDATA[google cloud]]></category><dc:creator><![CDATA[Merlin Saha]]></dc:creator><pubDate>Fri, 02 May 2025 08:58:28 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1746175083836/96268919-62f6-4300-99e9-e21cac996929.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1746175251765/e617cd58-89e5-4d9b-9edd-e3c575a6d675.gif" alt class="image--center mx-auto" /></p>
<h2 id="heading-introduction"><strong>Introduction</strong></h2>
<p>HashiCorp Vault is a powerful secrets management tool that helps organizations secure, store, and control access to tokens, passwords, certificates, and encryption keys. One challenge with managing Vault is the need to unseal it after each restart, which can be cumbersome in automated environments. This article demonstrates how to automate the Vault unsealing process using Google Cloud KMS and deploy the solution to Google Cloud Run for a serverless, scalable, and cost-effective setup.</p>
<p>I'll walk through the entire process, including:</p>
<ol>
<li><p>Setting up GCP resources with Terraform</p>
</li>
<li><p>Configuring Vault for auto-unsealing with GCP KMS</p>
</li>
<li><p>Creating a Docker container for Vault</p>
</li>
<li><p>Deploying to Cloud Run</p>
</li>
<li><p>Automating deployment with GitHub Actions</p>
</li>
<li><p>Migrating from Shamir key shares to GCP KMS auto-unsealing</p>
</li>
</ol>
<h2 id="heading-prerequisites"><strong>Prerequisites</strong></h2>
<ul>
<li><p>Google Cloud Platform account with a project</p>
</li>
<li><p>GCP service account with appropriate permissions</p>
</li>
<li><p>Basic knowledge of Terraform, Docker, and Vault</p>
</li>
<li><p>HashiCorp Vault CLI installed locally</p>
</li>
<li><p>GitHub repository for CI/CD (optional)</p>
</li>
</ul>
<h2 id="heading-setting-up-environment-variables"><strong>Setting Up Environment Variables</strong></h2>
<p>Start by setting up environment variables for your deployment:</p>
<pre><code class="lang-bash">export PROJECT_ID=gcp_project_id
export GCP_LOCATION=europe-west1
export GCP_ARTIFACT_REGISTRY_NAME=docker-repository
export DOCKER_IMAGE=vault-server
export CLOUD_RUN_SERVICE_NAME=vault-server
</code></pre>
<h2 id="heading-gcp-resources-with-terraform"><strong>GCP Resources with Terraform</strong></h2>
<h3 id="heading-required-iam-roles-for-vault-service-account"><strong>Required IAM Roles for Vault Service Account</strong></h3>
<p>The Vault service account needs the following roles to interact with GCP services:</p>
<pre><code class="lang-text">roles/cloudkms.viewer
roles/cloudkms.cryptoKeyEncrypterDecrypter (or roles/cloudkms.signerVerifier)
roles/secretmanager.secretAccessor
roles/storage.objectAdmin
</code></pre>
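<p>If you prefer granting these roles outside Terraform, they map to a few <code>gcloud</code> commands. A sketch, assuming a service account named <code>vault-server-sa</code> (adjust to your own):</p>
<pre><code class="lang-bash">SA="vault-server-sa@${PROJECT_ID}.iam.gserviceaccount.com"

gcloud projects add-iam-policy-binding ${PROJECT_ID} \
  --member="serviceAccount:${SA}" --role="roles/cloudkms.viewer"
gcloud projects add-iam-policy-binding ${PROJECT_ID} \
  --member="serviceAccount:${SA}" --role="roles/cloudkms.cryptoKeyEncrypterDecrypter"
gcloud projects add-iam-policy-binding ${PROJECT_ID} \
  --member="serviceAccount:${SA}" --role="roles/secretmanager.secretAccessor"
gcloud projects add-iam-policy-binding ${PROJECT_ID} \
  --member="serviceAccount:${SA}" --role="roles/storage.objectAdmin"
</code></pre>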
<h3 id="heading-terraform-configuration"><strong>Terraform Configuration</strong></h3>
<p>Create a Terraform configuration file to set up the necessary GCP KMS resources:</p>
<pre><code class="lang-hcl">resource "google_kms_key_ring" "keyring" {
  name     = "${var.name}-keyring"
  location = var.location
  project  = var.project
}

resource "google_kms_crypto_key" "key" {
  name            = "${var.name}-key"
  key_ring        = google_kms_key_ring.keyring.id
  rotation_period = var.rotation_period
  purpose         = var.purpose
}

resource "google_kms_crypto_key_iam_binding" "iam" {
  crypto_key_id = google_kms_crypto_key.key.id
  role          = "roles/cloudkms.cryptoKeyEncrypterDecrypter"

  members = [
    "serviceAccount:${var.vault_service_account}@${var.project}.iam.gserviceaccount.com"
  ]
}

variable "name" {
  default = "vault-unseal"
}
variable "location" {
  default = "global"
}
variable "project" {}
variable "rotation_period" {
  default = "7776000s" # 90 days
}
variable "purpose" {
  default = "ENCRYPT_DECRYPT"
}
variable "vault_service_account" {}
</code></pre>
<p>You'll also need to create a Google Cloud Storage bucket for Vault's storage backend:</p>
<pre><code class="lang-hcl">resource "google_storage_bucket" "storage-bucket" {
  name                        = var.bucket_name
  location                    = var.location
  force_destroy               = var.force_destroy
  uniform_bucket_level_access = var.uniform_bucket_level_access
  public_access_prevention    = var.public_access_prevention

  storage_class = var.storage_class
  versioning {
    enabled = true
  }
}

variable "bucket_name" {
  default = "vault-server-bucket"
}
variable "location" {
  default = "EU"
}
variable "uniform_bucket_level_access" {
  type    = bool
  default = true
}
variable "storage_class" {
  default = "STANDARD"
}
variable "force_destroy" {
  type    = bool
  default = false
}
variable "public_access_prevention" {
  default = "enforced"
}
</code></pre>
</code></pre>
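<p>With both files in place, provision everything using the standard Terraform workflow (the variable values shown are placeholders):</p>
<pre><code class="lang-bash">cd terraform
terraform init
terraform plan -var="project=gcp_project_id" -var="vault_service_account=vault-server-sa"
terraform apply -var="project=gcp_project_id" -var="vault_service_account=vault-server-sa"
</code></pre>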
<h2 id="heading-configuring-vault-for-auto-unsealing"><strong>Configuring Vault for Auto-Unsealing</strong></h2>
<p>Create a <code>vault-config.hcl</code> file for your Vault configuration:</p>
<pre><code class="lang-hcl">seal "gcpckms" {
  project    = "gcp_project_id"
  region     = "global"
  key_ring   = "vault-unseal-keyring"
  crypto_key = "vault-unseal-key"
}

storage "gcs" {
  bucket = "vault-server-bucket"
}

listener "tcp" {
  address     = "0.0.0.0:8080"
  tls_disable = 0  # Enabling TLS
}
</code></pre>
<blockquote>
<p><strong>Note:</strong> Make sure to replace <code>gcp_project_id</code> with your actual GCP project ID. With <code>tls_disable = 0</code>, TLS is enabled and Vault expects <code>tls_cert_file</code> and <code>tls_key_file</code> in the listener block; if you instead let Cloud Run terminate TLS in front of Vault, set <code>tls_disable = 1</code>. Never expose an unencrypted listener directly in production.</p>
</blockquote>
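<p>You can sanity-check that the seal stanza points at real resources before building the image (the names below match the Terraform defaults from earlier):</p>
<pre><code class="lang-bash">gcloud kms keyrings list --location=global --project=gcp_project_id
gcloud kms keys list --keyring=vault-unseal-keyring --location=global --project=gcp_project_id
</code></pre>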
<h2 id="heading-creating-the-vault-docker-container"><strong>Creating the Vault Docker Container</strong></h2>
<h3 id="heading-dockerfile"><strong>Dockerfile</strong></h3>
<p>Create a Dockerfile for the Vault container:</p>
<pre><code class="lang-dockerfile">FROM hashicorp/vault:1.19.0

# Create a non-root user and group
RUN addgroup -S vaultgroup &amp;&amp; adduser -S vaultuser -G vaultgroup

RUN mkdir -p /vault/config

COPY config/vault-config.hcl /vault/config/vault-config.hcl

# Set proper ownership for Vault directories and files
RUN chown -R vaultuser:vaultgroup /vault

# Use the non-root user
USER vaultuser

ENTRYPOINT ["vault", "server", "-config=/vault/config/vault-config.hcl"]
</code></pre>
<p>Make sure your project structure looks like this:</p>
<pre><code class="lang-text">project/
├── config/
│   └── vault-config.hcl
├── Dockerfile
└── terraform/
    └── main.tf
</code></pre>
<h2 id="heading-building-and-pushing-the-docker-image-manually"><strong>Building and Pushing the Docker Image Manually</strong></h2>
<p>If you're not using CI/CD, you can build and push the Docker image manually:</p>
<pre><code class="lang-bash"># Build Container
docker build -t "${GCP_LOCATION}-docker.pkg.dev/${PROJECT_ID}/${GCP_ARTIFACT_REGISTRY_NAME}/${DOCKER_IMAGE}:latest" .

# Authenticate with Artifact Registry
gcloud auth configure-docker ${GCP_LOCATION}-docker.pkg.dev

# Push Container
docker push "${GCP_LOCATION}-docker.pkg.dev/${PROJECT_ID}/${GCP_ARTIFACT_REGISTRY_NAME}/${DOCKER_IMAGE}:latest"
</code></pre>
<h2 id="heading-automating-deployment-with-github-actions"><strong>Automating Deployment with GitHub Actions</strong></h2>
<p>For a more robust deployment process, you can use GitHub Actions to automate the build and deployment of your Vault server. Create a file named <code>.github/workflows/deploy-vault.yml</code> with the following content:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">name:</span> <span class="hljs-string">Vault</span> <span class="hljs-string">Server</span> <span class="hljs-string">Deployment</span>
<span class="hljs-string">​</span>
<span class="hljs-attr">on:</span>
  <span class="hljs-attr">push:</span>
    <span class="hljs-attr">branches:</span> [ <span class="hljs-string">vault-server</span> ]
<span class="hljs-string">​</span>
<span class="hljs-attr">env:</span>
  <span class="hljs-attr">GCP_WIF_PROJECT_ID:</span> <span class="hljs-string">"org-wif-project"</span>
  <span class="hljs-attr">GCP_WIF_PROJECT_NUMBER:</span> <span class="hljs-string">${{</span> <span class="hljs-string">secrets.GCP_WIF_PROJECT_NUMBER</span> <span class="hljs-string">}}</span>
  <span class="hljs-attr">GCP_WIF_POOL:</span> <span class="hljs-string">"auth-server-gh-pool"</span>
  <span class="hljs-attr">GCP_WIF_PROVIDER:</span> <span class="hljs-string">"auth-server-gh-prov"</span>
  <span class="hljs-attr">GCP_WIF_SA:</span> <span class="hljs-string">"github-org-auth-sa"</span>
  <span class="hljs-attr">RELEASE_TOKEN:</span> <span class="hljs-string">${{</span> <span class="hljs-string">secrets.RELEASE_TOKEN</span> <span class="hljs-string">}}</span>
<span class="hljs-string">​</span>
  <span class="hljs-attr">PROJECT_LOCATION:</span> <span class="hljs-string">europe-west1</span>
  <span class="hljs-attr">PROJECT_ID:</span> <span class="hljs-string">org-env-project</span>
  <span class="hljs-attr">GCP_ARTIFACT_REGISTRY_NAME:</span> <span class="hljs-string">docker-repository</span>
  <span class="hljs-attr">DOCKER_IMAGE:</span> <span class="hljs-string">vault-server</span>
  <span class="hljs-attr">GCP_IMPERSONATED_SA_LIFETIME_TOKEN:</span> <span class="hljs-number">300</span> <span class="hljs-comment"># 5 minutes</span>
<span class="hljs-string">​</span>
<span class="hljs-attr">jobs:</span>
  <span class="hljs-attr">deploy:</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">"Deploy Vault"</span>
    <span class="hljs-attr">runs-on:</span> <span class="hljs-string">ubuntu-latest</span>
    <span class="hljs-attr">environment:</span> <span class="hljs-string">test</span>
    <span class="hljs-attr">permissions:</span>
      <span class="hljs-attr">contents:</span> <span class="hljs-string">'read'</span>
      <span class="hljs-attr">id-token:</span> <span class="hljs-string">'write'</span>
<span class="hljs-string">​</span>
    <span class="hljs-attr">steps:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Checkout</span>
        <span class="hljs-attr">uses:</span> <span class="hljs-string">actions/checkout@v4</span>
<span class="hljs-string">​</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Set</span> <span class="hljs-string">up</span> <span class="hljs-string">Cloud</span> <span class="hljs-string">SDK</span>
        <span class="hljs-attr">uses:</span> <span class="hljs-string">google-github-actions/setup-gcloud@v1</span>
<span class="hljs-string">​</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">id:</span> <span class="hljs-string">'auth'</span>
        <span class="hljs-attr">name:</span> <span class="hljs-string">'Authenticate to Google Cloud'</span>
        <span class="hljs-attr">uses:</span> <span class="hljs-string">'google-github-actions/auth@v2'</span>
        <span class="hljs-attr">with:</span>
          <span class="hljs-attr">create_credentials_file:</span> <span class="hljs-literal">true</span>
          <span class="hljs-attr">workload_identity_provider:</span> <span class="hljs-string">'projects/$<span class="hljs-template-variable">{{ env.GCP_WIF_PROJECT_NUMBER }}</span>/locations/global/workloadIdentityPools/$<span class="hljs-template-variable">{{ env.GCP_WIF_POOL }}</span>/providers/$<span class="hljs-template-variable">{{ env.GCP_WIF_PROVIDER }}</span>'</span>
          <span class="hljs-attr">service_account:</span> <span class="hljs-string">'$<span class="hljs-template-variable">{{ env.GCP_WIF_SA }}</span>@$<span class="hljs-template-variable">{{ env.GCP_WIF_PROJECT_ID }}</span>.iam.gserviceaccount.com'</span>
<span class="hljs-string">​</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Retrieve</span> <span class="hljs-string">information</span> <span class="hljs-string">on</span> <span class="hljs-string">existing</span> <span class="hljs-string">releases</span>
        <span class="hljs-attr">id:</span> <span class="hljs-string">get_release_info</span>
        <span class="hljs-attr">run:</span> <span class="hljs-string">|
          RELEASE_TAG=$(curl -L -H "Accept: application/vnd.github+json" -H "Authorization: Bearer ${{ secrets.GITHUB_TOKEN }}" -H "X-GitHub-Api-Version: 2022-11-28" https://api.github.com/repos/${{ github.repository }}/releases/latest |  jq -r '.tag_name')
          echo "Latest release tag: $RELEASE_TAG"
          MAJOR=$(echo $RELEASE_TAG | awk -F. '{print $1}')
          MINOR=$(echo $RELEASE_TAG | awk -F. '{print $2}')
          PATCH=$(echo $RELEASE_TAG | awk -F. '{print $3}')
          PATCH=$((PATCH + 1))
          NEXT_VERSION="${MAJOR}.${MINOR}.${PATCH}"
          NEXT_VERSION=$(echo $NEXT_VERSION | sed 's/^v//')  # Remove the "v" from the beginning of the version
          echo "NEXT_VERSION=${NEXT_VERSION}" &gt;&gt; $GITHUB_ENV
          echo "Next version: $NEXT_VERSION"
</span><span class="hljs-string">​</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Build</span> <span class="hljs-string">Container</span>
        <span class="hljs-attr">working-directory:</span> <span class="hljs-string">./vault-server</span>
        <span class="hljs-attr">run:</span> <span class="hljs-string">|-
          docker build -t "${PROJECT_LOCATION}-docker.pkg.dev/${PROJECT_ID}/${GCP_ARTIFACT_REGISTRY_NAME}/${DOCKER_IMAGE}:${{ env.NEXT_VERSION }}" .
</span><span class="hljs-string">​</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Authenticate</span> <span class="hljs-string">Artifact</span> <span class="hljs-string">Registry</span>
        <span class="hljs-attr">run:</span> <span class="hljs-string">|-
          gcloud -q auth configure-docker ${{ env.PROJECT_LOCATION }}-docker.pkg.dev
</span><span class="hljs-string">​</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Push</span> <span class="hljs-string">Container</span>
        <span class="hljs-attr">run:</span> <span class="hljs-string">|-
          docker push "${PROJECT_LOCATION}-docker.pkg.dev/${PROJECT_ID}/${GCP_ARTIFACT_REGISTRY_NAME}/${DOCKER_IMAGE}:${{ env.NEXT_VERSION }}"
</span><span class="hljs-string">​</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Deploy</span> <span class="hljs-string">to</span> <span class="hljs-string">cloud</span> <span class="hljs-string">run</span>
        <span class="hljs-attr">working-directory:</span> <span class="hljs-string">./vault-server</span>
        <span class="hljs-attr">run:</span> <span class="hljs-string">|-
          gcloud run deploy vault-server \
            --image "${PROJECT_LOCATION}-docker.pkg.dev/${PROJECT_ID}/${GCP_ARTIFACT_REGISTRY_NAME}/${DOCKER_IMAGE}:${{ env.NEXT_VERSION }}" \
            --platform managed \
            --project ${PROJECT_ID} \
            --region ${PROJECT_LOCATION} \
            --no-allow-unauthenticated \
            --service-account=vault-server-sa@${PROJECT_ID}.iam.gserviceaccount.com \
            --update-secrets=/vault/credentials/gcp-vault-agent-sa.json=VAULT_AGENT_SA:latest \
            --memory=1024Mi \
            --cpu 1 \
            --min-instances=0 \
            --max-instances=3 \
            --timeout=3600s \
            --cpu-boost \
            --port=8080 \
            --ingress=all \
            --execution-environment=gen2
</span><span class="hljs-string">​</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Create</span> <span class="hljs-string">a</span> <span class="hljs-string">release</span>
        <span class="hljs-attr">uses:</span> <span class="hljs-string">actions/create-release@v1</span>
        <span class="hljs-attr">env:</span>
          <span class="hljs-attr">GITHUB_TOKEN:</span> <span class="hljs-string">${{</span> <span class="hljs-string">secrets.RELEASE_TOKEN</span> <span class="hljs-string">}}</span>
        <span class="hljs-attr">with:</span>
          <span class="hljs-attr">tag_name:</span> <span class="hljs-string">${{</span> <span class="hljs-string">env.NEXT_VERSION</span> <span class="hljs-string">}}</span>
          <span class="hljs-attr">release_name:</span> <span class="hljs-string">Version</span> <span class="hljs-string">${{</span> <span class="hljs-string">env.NEXT_VERSION</span> <span class="hljs-string">}}</span>
          <span class="hljs-attr">body:</span> <span class="hljs-string">Release</span> <span class="hljs-string">notes</span> <span class="hljs-string">for</span> <span class="hljs-string">vault</span> <span class="hljs-string">version</span> <span class="hljs-string">${{</span> <span class="hljs-string">env.NEXT_VERSION</span> <span class="hljs-string">}}</span>
</code></pre>
<p>This workflow does the following:</p>
<ol>
<li><p>Authenticates to Google Cloud using Workload Identity Federation</p>
</li>
<li><p>Retrieves the latest release version and increments it</p>
</li>
<li><p>Builds the Vault Docker image</p>
</li>
<li><p>Pushes the image to Google Artifact Registry</p>
</li>
<li><p>Deploys the image to Cloud Run</p>
</li>
<li><p>Creates a new GitHub release</p>
</li>
</ol>
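<p>The shell pipeline in the <code>get_release_info</code> step (awk to split the tag, arithmetic to bump the patch, sed to strip the leading <code>v</code>) can be condensed into a single reusable function:</p>
<pre><code class="lang-bash">next_patch_version() {
  tag="${1#v}"  # drop any leading "v"
  major=$(echo "$tag" | awk -F. '{print $1}')
  minor=$(echo "$tag" | awk -F. '{print $2}')
  patch=$(echo "$tag" | awk -F. '{print $3}')
  echo "${major}.${minor}.$((patch + 1))"
}

next_patch_version "v1.4.2"   # prints 1.4.3
next_patch_version "2.0.9"    # prints 2.0.10
</code></pre>
<p>Like the original pipeline, this assumes strictly three-part tags; a pre-release suffix such as <code>-rc1</code> would need extra handling.</p>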
<blockquote>
<p><strong>Note:</strong> To use this workflow, you'll need to set up Workload Identity Federation and add the required secrets to your GitHub repository.</p>
</blockquote>
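<p>Setting up Workload Identity Federation is itself only a few <code>gcloud</code> commands. A sketch using the pool, provider, and service-account names from the workflow above (the repository path is an assumption to replace with your own):</p>
<pre><code class="lang-bash">gcloud iam workload-identity-pools create auth-server-gh-pool \
  --project=org-wif-project --location=global

gcloud iam workload-identity-pools providers create-oidc auth-server-gh-prov \
  --project=org-wif-project --location=global \
  --workload-identity-pool=auth-server-gh-pool \
  --issuer-uri="https://token.actions.githubusercontent.com" \
  --attribute-mapping="google.subject=assertion.sub,attribute.repository=assertion.repository" \
  --attribute-condition="assertion.repository=='your-org/your-repo'"

# Allow the GitHub repository to impersonate the deployment service account
gcloud iam service-accounts add-iam-policy-binding \
  github-org-auth-sa@org-wif-project.iam.gserviceaccount.com \
  --role="roles/iam.workloadIdentityUser" \
  --member="principalSet://iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/auth-server-gh-pool/attribute.repository/your-org/your-repo"
</code></pre>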
<h2 id="heading-deploying-to-cloud-run-manually"><strong>Deploying to Cloud Run Manually</strong></h2>
<p>If you're not using CI/CD, you can deploy to Cloud Run manually:</p>
<pre><code class="lang-bash">gcloud run deploy ${CLOUD_RUN_SERVICE_NAME} \
  --image "${GCP_LOCATION}-docker.pkg.dev/${PROJECT_ID}/${GCP_ARTIFACT_REGISTRY_NAME}/${DOCKER_IMAGE}:latest" \
  --platform managed \
  --project ${PROJECT_ID} \
  --region ${GCP_LOCATION} \
  --no-allow-unauthenticated \
  --service-account=vault-server-sa@${PROJECT_ID}.iam.gserviceaccount.com \
  --update-secrets=/vault/credentials/gcp-vault-agent-sa.json=VAULT_AGENT_SA:latest \
  --memory=1024Mi \
  --cpu 1 \
  --min-instances=0 \
  --max-instances=3 \
  --timeout=3600s \
  --cpu-boost \
  --port=8080 \
  --ingress=all \
  --execution-environment=gen2
</code></pre>
<blockquote>
<p><strong>Security Note:</strong> In a production environment, keep <code>--no-allow-unauthenticated</code> so that only IAM-authorized identities can reach your Vault server; avoid <code>--allow-unauthenticated</code>. Consider setting up Identity-Aware Proxy (IAP) or another authentication mechanism in front of it.</p>
</blockquote>
<h2 id="heading-setting-up-and-using-the-vault-cli"><strong>Setting Up and Using the Vault CLI</strong></h2>
<p>Install the Vault CLI locally to interact with your deployed Vault server:</p>
<pre><code class="lang-bash"># For macOS
brew tap hashicorp/tap
brew install hashicorp/tap/vault

# Verify installation
vault version

# Configure CLI to talk to your Vault server
export VAULT_ADDR="https://vault.yourdomain.com"

# Check Vault status
vault status
</code></pre>
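<p>If this is a fresh Vault deployment rather than a migration, initialize it once after the first start; with a KMS seal, <code>vault operator init</code> returns <em>recovery</em> keys instead of unseal keys:</p>
<pre><code class="lang-bash">vault operator init -recovery-shares=5 -recovery-threshold=3

# Vault now auto-unseals via KMS on every restart; confirm with
vault status
</code></pre>
<p>Store the recovery keys and initial root token securely; the recovery keys are needed for operations like rekeying or generating a new root token, not for day-to-day unsealing.</p>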
<h2 id="heading-migrating-from-shamir-keys-to-gcp-kms-auto-unsealing"><strong>Migrating from Shamir Keys to GCP KMS Auto-Unsealing</strong></h2>
<p>If you're migrating an existing Vault installation from Shamir key shares to GCP KMS auto-unsealing, follow these steps:</p>
<ol>
<li><p>Update your Vault configuration to include the <code>gcpckms</code> seal stanza</p>
</li>
<li><p>Restart Vault</p>
</li>
<li><p>Unseal Vault with the <code>-migrate</code> flag:</p>
</li>
</ol>
<pre><code class="lang-bash"># Provide three of your five Shamir unseal keys with the -migrate flag
vault operator unseal -migrate &lt;UNSEAL_KEY_1&gt;
vault operator unseal -migrate &lt;UNSEAL_KEY_2&gt;
vault operator unseal -migrate &lt;UNSEAL_KEY_3&gt;

# Verify the migration was successful
vault status
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1746249011930/0929599c-da1e-46b7-880b-6d0c0a4502c1.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1746249022484/d2265d83-ec1a-4dfa-828b-cc59dd2eae31.png" alt class="image--center mx-auto" /></p>
<p>After successful migration, you should see output similar to:</p>
<pre><code class="lang-plaintext"><span class="hljs-string">Key</span>                      <span class="hljs-string">Value</span>
<span class="hljs-string">---</span>                      <span class="hljs-string">-----</span>
<span class="hljs-string">Seal</span> <span class="hljs-string">Type</span>                <span class="hljs-string">gcpckms</span>
<span class="hljs-string">Recovery</span> <span class="hljs-string">Seal</span> <span class="hljs-string">Type</span>       <span class="hljs-string">shamir</span>
<span class="hljs-string">Initialized</span>              <span class="hljs-literal">true</span>
<span class="hljs-string">Sealed</span>                   <span class="hljs-literal">false</span>
<span class="hljs-string">Total</span> <span class="hljs-string">Recovery</span> <span class="hljs-string">Shares</span>    <span class="hljs-number">5</span>
<span class="hljs-string">Threshold</span>                <span class="hljs-number">3</span>
<span class="hljs-string">Version</span>                  <span class="hljs-number">1.19</span><span class="hljs-number">.0</span>
<span class="hljs-string">Build</span> <span class="hljs-string">Date</span>               <span class="hljs-number">2025-03-04T12:36:40Z</span>
<span class="hljs-string">Storage</span> <span class="hljs-string">Type</span>             <span class="hljs-string">gcs</span>
<span class="hljs-string">Cluster</span> <span class="hljs-string">Name</span>             <span class="hljs-string">vault-cluster-xxxxxx</span>
<span class="hljs-string">Cluster</span> <span class="hljs-string">ID</span>               <span class="hljs-string">sssss-5a16sc405-89c1-s333333ffffff</span>
<span class="hljs-string">HA</span> <span class="hljs-string">Enabled</span>               <span class="hljs-literal">true</span>
</code></pre>
<p>Note that the Shamir keys are now recovery keys for use in emergency situations.</p>
<h2 id="heading-setting-up-authentication-methods-optional"><strong>Setting Up Authentication Methods (Optional)</strong></h2>
<p>After your Vault is up and running with auto-unsealing, you might want to configure authentication methods:</p>
<h3 id="heading-oidc-authentication"><strong>OIDC Authentication</strong></h3>
<pre><code class="lang-bash"><span class="hljs-comment"># Enable OIDC auth method</span>
<span class="hljs-string">vault</span> <span class="hljs-string">auth</span> <span class="hljs-string">enable</span> <span class="hljs-string">oidc</span>
<span class="hljs-string">​</span>
<span class="hljs-comment"># Configure OIDC</span>
<span class="hljs-string">vault</span> <span class="hljs-string">write</span> <span class="hljs-string">auth/oidc/config</span> <span class="hljs-string">\</span>
  <span class="hljs-string">oidc_discovery_url="https://oidc.yourdomain.com/realms/vault"</span> <span class="hljs-string">\</span>
  <span class="hljs-string">oidc_client_id="vault-client"</span> <span class="hljs-string">\</span>
  <span class="hljs-string">oidc_client_secret="your-client-secret"</span> <span class="hljs-string">\</span>
  <span class="hljs-string">default_role="reader"</span>
</code></pre>
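<p>The <code>default_role</code> referenced above must exist before anyone can log in. The sketch below creates such a role and performs a login — the role name, redirect URI, and attached policy are illustrative assumptions, not values taken from this setup:</p>
<pre><code class="lang-bash"># Create the "reader" role that maps OIDC identities to Vault policies
vault write auth/oidc/role/reader \
  bound_audiences="vault-client" \
  allowed_redirect_uris="https://vault.yourdomain.com/ui/vault/auth/oidc/oidc/callback" \
  user_claim="sub" \
  token_policies="default"

# Log in interactively; this opens a browser window to your OIDC provider
vault login -method=oidc role=reader
</code></pre>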
<h3 id="heading-approle-authentication-for-applications"><strong>AppRole Authentication for Applications</strong></h3>
<pre><code class="lang-bash"><span class="hljs-comment"># Enable AppRole auth method</span>
<span class="hljs-string">vault</span> <span class="hljs-string">auth</span> <span class="hljs-string">enable</span> <span class="hljs-string">approle</span>
<span class="hljs-string">​</span>
<span class="hljs-comment"># Create a policy for the Vault agent</span>
<span class="hljs-string">vault</span> <span class="hljs-string">policy</span> <span class="hljs-string">write</span> <span class="hljs-string">vault-agent-policy</span> <span class="hljs-bullet">-</span> <span class="hljs-string">&lt;&lt;EOF</span>
<span class="hljs-string">path</span> <span class="hljs-string">"secret/data/*"</span> {
  <span class="hljs-string">capabilities</span> <span class="hljs-string">=</span> [<span class="hljs-string">"read"</span>]
}
<span class="hljs-string">EOF</span>
<span class="hljs-string">​</span>
<span class="hljs-comment"># Create an AppRole with the policy attached</span>
<span class="hljs-string">vault</span> <span class="hljs-string">write</span> <span class="hljs-string">auth/approle/role/vault-agent</span> <span class="hljs-string">\</span>
  <span class="hljs-string">token_policies="vault-agent-policy"</span> <span class="hljs-string">\</span>
  <span class="hljs-string">token_ttl=1h</span> <span class="hljs-string">\</span>
  <span class="hljs-string">token_max_ttl=2h</span> <span class="hljs-string">\</span>
  <span class="hljs-string">secret_id_ttl=1h</span> <span class="hljs-string">\</span>
  <span class="hljs-string">bind_secret_id=true</span>
<span class="hljs-string">​</span>
<span class="hljs-comment"># Get Role ID</span>
<span class="hljs-string">vault</span> <span class="hljs-string">read</span> <span class="hljs-string">auth/approle/role/vault-agent/role-id</span>
<span class="hljs-string">​</span>
<span class="hljs-comment"># Generate a Secret ID</span>
<span class="hljs-string">vault</span> <span class="hljs-string">write</span> <span class="hljs-string">-f</span> <span class="hljs-string">auth/approle/role/vault-agent/secret-id</span>
</code></pre>
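<p>Once the Role ID and Secret ID have been retrieved, an application (or the Vault agent) exchanges them for a short-lived token. A minimal sketch — the angle-bracket placeholders and the example secret path are illustrative, to be replaced with the values returned by the previous two commands:</p>
<pre><code class="lang-bash"># Exchange the Role ID and Secret ID for a Vault token (valid for token_ttl)
vault write auth/approle/login \
  role_id="&lt;ROLE_ID&gt;" \
  secret_id="&lt;SECRET_ID&gt;"

# The returned token can then read any path allowed by vault-agent-policy
VAULT_TOKEN="&lt;RETURNED_TOKEN&gt;" vault kv get secret/my-app/config
</code></pre>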
<h2 id="heading-conclusion"><strong>Conclusion</strong></h2>
<p>By configuring HashiCorp Vault with Google Cloud KMS for auto-unsealing and deploying it to Cloud Run, we've created a serverless, fully managed secrets management solution that automatically unseals itself after restarts. The CI/CD pipeline with GitHub Actions makes deployments automated, versioned, and repeatable, while GCP Secret Manager keeps sensitive initialization data securely stored.</p>
<p>This approach eliminates the operational burden of manual unsealing while maintaining the security benefits of Vault's seal mechanism. The combination of GCP KMS for auto-unsealing, Secret Manager for secure storage, Cloud Run for deployment, and GitHub Actions for CI/CD provides a scalable, resilient, and cost-effective secrets management solution that can grow with your organization's needs.</p>
<h2 id="heading-next-steps"><strong>Next Steps</strong></h2>
<ul>
<li><p>Set up proper TLS certificates for your Vault instance</p>
</li>
<li><p>Configure additional authentication methods as needed</p>
</li>
<li><p>Implement audit logging</p>
</li>
<li><p>Enhance your CI/CD pipeline with testing</p>
</li>
<li><p>Implement backup and disaster recovery procedures</p>
</li>
<li><p>Consider setting up Vault HA for higher availability</p>
</li>
</ul>
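<p>Of the steps above, audit logging is the quickest win and can be enabled with a single command. A sketch, with the caveat that on Cloud Run the local filesystem is ephemeral, so writing audit events to stdout (which Cloud Run forwards to Cloud Logging) is usually the better choice than a file path:</p>
<pre><code class="lang-bash"># Log audit events to stdout so Cloud Run forwards them to Cloud Logging
vault audit enable file file_path=stdout

# Confirm the audit device is active
vault audit list
</code></pre>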
<h2 id="heading-resources"><strong>Resources</strong></h2>
<ul>
<li><p><a target="_blank" href="https://developer.hashicorp.com/vault/docs">HashiCorp Vault Documentation</a></p>
</li>
<li><p><a target="_blank" href="https://developer.hashicorp.com/vault/docs/configuration/seal/gcpckms">GCP KMS Auto-Unseal</a></p>
</li>
<li><p><a target="_blank" href="https://cloud.google.com/solutions/secrets-management">Vault on Google Cloud</a></p>
</li>
<li><p><a target="_blank" href="https://cloud.google.com/run/docs">Cloud Run Documentation</a></p>
</li>
<li><p><a target="_blank" href="https://docs.github.com/en/actions">GitHub Actions Documentation</a></p>
</li>
<li><p><a target="_blank" href="https://cloud.google.com/secret-manager/docs">GCP Secret Manager</a></p>
</li>
</ul>
<hr />
<p><em>This article represents a practical implementation based on real-world experience deploying HashiCorp Vault on Google Cloud Platform. While this setup works well for many use cases, always assess your specific security requirements before implementing any secrets management solution in production.</em></p>
]]></content:encoded></item><item><title><![CDATA[Cloud Migration: From OVH VPS to Oracle Cloud Infrastructure Free Tier]]></title><description><![CDATA[Introduction
In early 2025, I undertook a cost optimization project for a startup running a legacy Java/Servlet/JSP application. The goal was to reduce hosting costs while maintaining or improving infrastructure quality. This post details our journey...]]></description><link>https://merlin.microworka.com/cloud-migration-from-ovh-vps-to-oracle-cloud-infrastructure-free-tier</link><guid isPermaLink="true">https://merlin.microworka.com/cloud-migration-from-ovh-vps-to-oracle-cloud-infrastructure-free-tier</guid><category><![CDATA[Cloud]]></category><category><![CDATA[Cloud Computing]]></category><category><![CDATA[Terraform]]></category><category><![CDATA[Oracle]]></category><dc:creator><![CDATA[Merlin Saha]]></dc:creator><pubDate>Mon, 20 Jan 2025 08:06:08 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1737373307888/d410a6af-03aa-4089-9888-73c9d9f8d06c.gif" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction</h2>
<p>In early 2025, I undertook a cost optimization project for a startup running a legacy Java/Servlet/JSP application. The goal was to reduce hosting costs while maintaining or improving infrastructure quality. This post details our journey from an OVH VPS to Oracle Cloud Infrastructure (OCI) Free Tier, resulting in a €0/month hosting cost while gaining enterprise-grade features.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1737357116306/d423e322-064a-4837-9fdb-23143b25c2bb.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-initial-situation-analysis">Initial Situation Analysis</h2>
<h3 id="heading-legacy-environment">Legacy Environment</h3>
<ul>
<li><p>Application Stack: Java/Servlet/JSP</p>
</li>
<li><p>Build System: No Maven/Gradle</p>
</li>
<li><p>Deployment: Direct WAR deployment</p>
</li>
<li><p>Infrastructure: OVH VPS</p>
</li>
<li><p>Monthly Costs:</p>
<ul>
<li><p>VPS: €15.98</p>
</li>
<li><p>Backup Service: €6.00</p>
</li>
<li><p>Total: €21.98/month</p>
</li>
</ul>
</li>
</ul>
<h3 id="heading-technical-debt-overview">Technical Debt Overview</h3>
<ul>
<li><p>No containerization</p>
</li>
<li><p>Manual deployment process</p>
</li>
<li><p>Basic backup system</p>
</li>
<li><p>Limited security features</p>
</li>
<li><p>No Infrastructure as Code</p>
</li>
</ul>
<h2 id="heading-cloud-provider-selection-process">Cloud Provider Selection Process</h2>
<p>We conducted a thorough analysis of major cloud providers based on the following criteria:</p>
<h3 id="heading-google-cloud-platform">Google Cloud Platform</h3>
<ul>
<li><p>Costs for equivalent setup: €115.89 - €136.86/month</p>
</li>
<li><p>Breakdown:</p>
<ul>
<li><p>Compute (n1-standard-1): €34.67</p>
</li>
<li><p>Database: ~€101.40</p>
</li>
<li><p>Storage: €0.10</p>
</li>
<li><p>External IP: €7.29</p>
</li>
<li><p>DNS: €0.65</p>
</li>
</ul>
</li>
<li><p>Free tier: Limited duration and resources</p>
</li>
</ul>
<h3 id="heading-oracle-cloud-infrastructure">Oracle Cloud Infrastructure</h3>
<ul>
<li><p>Free Tier Resources:</p>
<ul>
<li><p>2 AMD or 4 ARM OCPUs</p>
</li>
<li><p>24GB RAM</p>
</li>
<li><p>200GB Block Storage</p>
</li>
<li><p>Autonomous Database</p>
</li>
<li><p>Load Balancer</p>
</li>
<li><p>10TB/month outbound bandwidth</p>
</li>
</ul>
</li>
<li><p>Enterprise Features Included:</p>
<ul>
<li><p>Web Application Firewall</p>
</li>
<li><p>Bastion Service</p>
</li>
<li><p>Cloud Guard</p>
</li>
<li><p>Vulnerability Scanning</p>
</li>
<li><p>Automated Backups</p>
</li>
</ul>
</li>
</ul>
<h3 id="heading-decision-factors">Decision Factors</h3>
<ol>
<li><p>Cost Efficiency: OCI's permanent free tier</p>
</li>
<li><p>Resource Allocation: Generous compute and memory</p>
</li>
<li><p>Enterprise Features: Included security and monitoring</p>
</li>
<li><p>Long-term Viability: No time limitation on free tier</p>
</li>
</ol>
<h2 id="heading-migration-strategy">Migration Strategy</h2>
<h3 id="heading-why-lift-and-shift">Why Lift-and-Shift?</h3>
<ol>
<li><p>Resource Constraints</p>
<ul>
<li><p>Limited budget for immediate modernization</p>
</li>
<li><p>Team focused on product development</p>
</li>
<li><p>No immediate technical debt impact</p>
</li>
</ul>
</li>
<li><p>Future-Proofing</p>
<ul>
<li><p>Zero hosting costs enable gradual modernization</p>
</li>
<li><p>Infrastructure ready for containerization</p>
</li>
<li><p>Automated deployment pipeline in place</p>
</li>
</ul>
</li>
</ol>
<h3 id="heading-infrastructure-as-code-implementation">Infrastructure as Code Implementation</h3>
<h4 id="heading-terraform-backend-configuration">Terraform Backend Configuration</h4>
<p>We used Terraform Cloud for state management and team collaboration. Here's our backend configuration:</p>
<pre><code class="lang-hcl"><span class="hljs-comment"># backend.tf</span>
<span class="hljs-string">terraform</span> {
  <span class="hljs-string">required_version</span> <span class="hljs-string">=</span> <span class="hljs-string">"&gt;= 1.1.0"</span>
  <span class="hljs-string">backend</span> <span class="hljs-string">"remote"</span> {
    <span class="hljs-string">hostname</span>     <span class="hljs-string">=</span> <span class="hljs-string">"app.terraform.io"</span>
    <span class="hljs-string">organization</span> <span class="hljs-string">=</span> <span class="hljs-string">"[Organization Name]"</span>

    <span class="hljs-string">workspaces</span> {
      <span class="hljs-string">name</span> <span class="hljs-string">=</span> <span class="hljs-string">"prod"</span>
    }
  }
}
</code></pre>
<p>This setup provides several benefits:</p>
<ul>
<li><p>Secure state file storage</p>
</li>
<li><p>State locking to prevent concurrent modifications</p>
</li>
<li><p>Team collaboration capabilities</p>
</li>
<li><p>Version control integration</p>
</li>
<li><p>Automated state backups</p>
</li>
<li><p>Run history and audit trail</p>
</li>
</ul>
<p>Our Terraform project structure:</p>
<pre><code class="lang-plaintext"><span class="hljs-string">terraform/</span>
<span class="hljs-string">├──</span> <span class="hljs-string">modules/</span>
<span class="hljs-string">├──</span> <span class="hljs-string">script/</span>
<span class="hljs-string">├──</span> <span class="hljs-string">backend.tf</span>
<span class="hljs-string">├──</span> <span class="hljs-string">data.tf</span>
<span class="hljs-string">├──</span> <span class="hljs-string">local.tf</span>
<span class="hljs-string">├──</span> <span class="hljs-string">main.tf</span>
<span class="hljs-string">├──</span> <span class="hljs-string">output.tf</span>
<span class="hljs-string">├──</span> <span class="hljs-string">primary.tf</span>
<span class="hljs-string">├──</span> <span class="hljs-string">providers.tf</span>
<span class="hljs-string">├──</span> <span class="hljs-string">secondary.tf</span>
<span class="hljs-string">├──</span> <span class="hljs-string">variables.tf</span>
<span class="hljs-string">└──</span> <span class="hljs-string">versions.tf</span>
</code></pre>
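<p>With this layout in place, the day-to-day workflow runs through the Terraform CLI against the remote backend. A minimal sketch, assuming the CLI has been authorized against the Terraform Cloud organization configured in <code>backend.tf</code>:</p>
<pre><code class="lang-bash"># One-time: obtain and store an API token for app.terraform.io
terraform login

# Initialize the working directory against the remote "prod" workspace
terraform init

# Plan and apply execute remotely in Terraform Cloud, with state locking
terraform plan
terraform apply
</code></pre>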
<h2 id="heading-implementation-timeline">Implementation Timeline</h2>
<ol>
<li><p>Planning Phase (2 Days)</p>
<ul>
<li><p>Infrastructure design</p>
</li>
<li><p>Migration strategy documentation</p>
</li>
<li><p>Resource allocation planning</p>
</li>
</ul>
</li>
<li><p>Infrastructure Setup (2 Days)</p>
<ul>
<li><p>Terraform configuration</p>
</li>
<li><p>Network setup</p>
</li>
<li><p>Security configuration</p>
</li>
</ul>
</li>
<li><p>Application Migration (1 Day)</p>
<ul>
<li><p>Database migration</p>
</li>
<li><p>File system replication</p>
</li>
<li><p>DNS configuration</p>
</li>
</ul>
</li>
<li><p>Testing &amp; Validation (2 Days)</p>
<ul>
<li><p>Functionality testing</p>
</li>
<li><p>Performance validation</p>
</li>
<li><p>Security verification</p>
</li>
</ul>
</li>
<li><p>Cutover (1 Day)</p>
<ul>
<li><p>DNS switch</p>
</li>
<li><p>Final data sync</p>
</li>
<li><p>Go-live verification</p>
</li>
</ul>
</li>
</ol>
<h2 id="heading-results-and-benefits">Results and Benefits</h2>
<ol>
<li><p>Cost Reduction</p>
<ul>
<li><p>Previous cost: €21.98/month</p>
</li>
<li><p>Current cost: €0/month</p>
</li>
<li><p>Annual savings: €263.76</p>
</li>
</ul>
</li>
<li><p>Infrastructure Improvements</p>
<ul>
<li><p>High availability architecture</p>
</li>
<li><p>Automated backups</p>
</li>
<li><p>Web Application Firewall</p>
</li>
<li><p>Bastion service</p>
</li>
<li><p>Enhanced monitoring</p>
</li>
</ul>
</li>
<li><p>Operational Benefits</p>
<ul>
<li><p>Infrastructure as Code</p>
</li>
<li><p>Automated deployments</p>
</li>
<li><p>Enhanced security</p>
</li>
<li><p>Improved disaster recovery</p>
</li>
</ul>
</li>
</ol>
<h2 id="heading-future-roadmap">Future Roadmap</h2>
<ol>
<li><p>Application Modernization</p>
<ul>
<li><p>Maven integration</p>
</li>
<li><p>Containerization</p>
</li>
<li><p>CI/CD pipeline enhancement</p>
</li>
</ul>
</li>
<li><p>Infrastructure Evolution</p>
<ul>
<li><p>Multi-region deployment</p>
</li>
<li><p>Container orchestration</p>
</li>
<li><p>Serverless adoption</p>
</li>
</ul>
</li>
</ol>
<h2 id="heading-exemple-resources-and-further-reading-to-start">Example Resources and Further Reading</h2>
<ul>
<li>A year ago, I published a complete implementation tutorial on my YouTube channel: <strong>Pipeline DevSecOps Oracle Cloud Gratuite, Jenkins, Trivy, OWASP, Docker Hub, Sonarqube, Terraform</strong></li>
</ul>
<iframe width="560" height="315" src="https://www.youtube.com/embed/mvBNh6scVHk?si=rPLrQdj-05zYQQ3i"></iframe>

<ul>
<li>Infrastructure as Code templates: <a target="_blank" href="https://github.com/devsahamerlin/iac-spring-boot-atp-jenkins-oci-devsecops">https://github.com/devsahamerlin/iac-spring-boot-atp-jenkins-oci-devsecops</a></li>
</ul>
<h2 id="heading-conclusion">Conclusion</h2>
<p>This migration demonstrates that pragmatic cloud adoption doesn't always require immediate application modernization. By leveraging OCI's free tier, we eliminated hosting costs while creating a foundation for future improvements. The startup can now modernize at their own pace without infrastructure cost pressure.</p>
<p>The complete automation scripts, Infrastructure as Code templates, and configuration files are available in our GitHub repository. For a detailed walkthrough of setting up a complete DevSecOps pipeline on OCI Free Tier, check out our YouTube tutorial.</p>
<p><em>Tags: #CloudMigration #OCI #DevOps #IaC #CostOptimization #CloudArchitecture</em></p>
]]></content:encoded></item><item><title><![CDATA[Building a Secure, Scalable Enterprise Architecture with GCP and MongoDB Atlas]]></title><description><![CDATA[In today's rapidly evolving digital landscape, organizations need cloud architectures that can deliver high availability, security, and scalability while maintaining operational efficiency. This post explores a modern enterprise-grade architecture bu...]]></description><link>https://merlin.microworka.com/building-a-secure-scalable-enterprise-architecture-with-gcp-and-mongodb-atlas</link><guid isPermaLink="true">https://merlin.microworka.com/building-a-secure-scalable-enterprise-architecture-with-gcp-and-mongodb-atlas</guid><category><![CDATA[Security]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Cloud]]></category><category><![CDATA[Kubernetes]]></category><dc:creator><![CDATA[Merlin Saha]]></dc:creator><pubDate>Mon, 23 Dec 2024 08:27:38 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1747069560163/2bde9482-3149-4950-a2c6-ac6e1edd6405.gif" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In today's rapidly evolving digital landscape, organizations need cloud architectures that can deliver high availability, security, and scalability while maintaining operational efficiency. This post explores a modern enterprise-grade architecture built on Google Cloud Platform (GCP) and MongoDB Atlas, designed to meet these demanding requirements.</p>
<h2 id="heading-introduction">Introduction</h2>
<p>Modern enterprises require infrastructure that can support rapid innovation while ensuring robust security and reliability. Our architecture combines the power of Google Kubernetes Engine (GKE) with MongoDB Atlas to create a solution that addresses these needs through a comprehensive, security-first approach.</p>
<h2 id="heading-core-architecture-components">Core Architecture Components</h2>
<h3 id="heading-private-kubernetes-cluster">Private Kubernetes Cluster</h3>
<p>At the heart of our architecture lies a private GKE cluster, designed with security and isolation in mind. The cluster operates with internal IP addresses only, following RFC1918 standards for private networks. This approach ensures that nodes and pods are inherently isolated from the internet, creating a secure foundation for our applications.</p>
<p>The cluster features:</p>
<ul>
<li><p>Multi-zone deployment for high availability</p>
</li>
<li><p>Node auto-provisioning for dynamic scaling</p>
</li>
<li><p>Horizontal Pod Autoscaling (HPA) for workload optimization</p>
</li>
<li><p>Private nodes with no public IP addresses</p>
</li>
</ul>
<h3 id="heading-security-implementation">Security Implementation</h3>
<p>Security is implemented in multiple layers throughout the architecture:</p>
<p>Identity and Access Management:</p>
<ul>
<li><p>Identity-Aware Proxy (IAP) controls access to applications</p>
</li>
<li><p>Cloud IAM provides fine-grained access control</p>
</li>
<li><p>Kubernetes Secrets manage sensitive configuration data</p>
</li>
</ul>
<p>Network Security:</p>
<ul>
<li><p>Virtual Private Cloud (VPC) isolates resources</p>
</li>
<li><p>Cloud Firewall rules control traffic flow</p>
</li>
<li><p>SSL certificates secure HTTPS communications</p>
</li>
<li><p>Cloud NAT enables secure outbound internet access</p>
</li>
</ul>
<p>Security Monitoring and Prevention:</p>
<ul>
<li><p>Cloud Security Scanner identifies web vulnerabilities</p>
</li>
<li><p>Security Command Center provides threat detection</p>
</li>
<li><p>Checkov performs automated security analysis of infrastructure configurations</p>
</li>
</ul>
<h3 id="heading-database-layer">Database Layer</h3>
<p>The MongoDB Atlas integration brings several crucial capabilities:</p>
<ul>
<li><p>Regional cluster deployment with multi-zone redundancy</p>
</li>
<li><p>Automated backups and point-in-time recovery</p>
</li>
<li><p>Network isolation through VPC peering</p>
</li>
<li><p>IP Access Lists for controlled database access</p>
</li>
</ul>
<h3 id="heading-cicd-pipeline">CI/CD Pipeline</h3>
<p>Our continuous integration and deployment pipeline leverages:</p>
<ul>
<li><p>GitHub for version control and collaboration</p>
</li>
<li><p>Artifact Registry for container image management</p>
</li>
<li><p>ArgoCD for GitOps-driven deployments</p>
</li>
<li><p>Automated deployment system (Dispatch) for seamless updates</p>
</li>
</ul>
<h3 id="heading-monitoring-and-maintenance">Monitoring and Maintenance</h3>
<p>The architecture includes comprehensive monitoring through:</p>
<ul>
<li><p>Cloud Logging for centralized log management</p>
</li>
<li><p>Cloud Monitoring for performance tracking</p>
</li>
<li><p>Regular automated backups</p>
</li>
<li><p>Jump Host for secure maintenance access</p>
</li>
</ul>
<h2 id="heading-business-benefits">Business Benefits</h2>
<h3 id="heading-enhanced-security-posture">Enhanced Security Posture</h3>
<p>The multi-layered security approach significantly reduces the risk of breaches while maintaining compliance with industry standards. The private cluster design, combined with IAP and Cloud Security Command Center, provides comprehensive protection for sensitive workloads.</p>
<h3 id="heading-operational-excellence">Operational Excellence</h3>
<p>Automation plays a crucial role in reducing manual intervention and human error. The GitOps approach with ArgoCD ensures consistent deployments, while auto-scaling capabilities optimize resource utilization automatically.</p>
<h3 id="heading-cost-optimization">Cost Optimization</h3>
<p>Several features contribute to cost efficiency:</p>
<ul>
<li><p>Dynamic scaling adjusts resources based on demand</p>
</li>
<li><p>Multi-zone deployment optimizes for availability without excessive redundancy</p>
</li>
<li><p>Cloud CDN reduces bandwidth costs and improves performance</p>
</li>
<li><p>Automated resource management prevents waste</p>
</li>
</ul>
<h3 id="heading-business-continuity">Business Continuity</h3>
<p>The architecture ensures business continuity through:</p>
<ul>
<li><p>Multi-zone deployment for high availability</p>
</li>
<li><p>Automated backup solutions for both GKE workloads and MongoDB Atlas</p>
</li>
<li><p>Disaster recovery planning and implementation</p>
</li>
<li><p>Real-time monitoring and alerting</p>
</li>
</ul>
<h2 id="heading-implementation-considerations">Implementation Considerations</h2>
<h3 id="heading-network-design">Network Design</h3>
<p>The network architecture carefully balances security with accessibility:</p>
<ul>
<li><p>Cloud DNS manages domain name resolution</p>
</li>
<li><p>VPC peering enables secure communication between networks</p>
</li>
<li><p>Cloud Router facilitates dynamic route exchange</p>
</li>
<li><p>Load balancers distribute traffic efficiently</p>
</li>
</ul>
<h3 id="heading-development-workflow">Development Workflow</h3>
<p>The development process is streamlined through:</p>
<ul>
<li><p>GitHub for collaborative development</p>
</li>
<li><p>Terraform Cloud for infrastructure as code</p>
</li>
<li><p>Integrated CI/CD pipeline</p>
</li>
<li><p>Automated testing and security scanning</p>
</li>
</ul>
<h2 id="heading-future-considerations">Future Considerations</h2>
<p>The architecture is designed with future growth in mind:</p>
<ul>
<li><p>Potential integration with on-premises systems</p>
</li>
<li><p>Multi-regional expansion capabilities</p>
</li>
<li><p>Multi-cloud deployment options</p>
</li>
<li><p>Continuous cost and performance optimization</p>
</li>
</ul>
<h2 id="heading-conclusion">Conclusion</h2>
<p>This architecture represents a comprehensive approach to modern cloud infrastructure, combining security, scalability, and operational efficiency. By leveraging GCP's advanced services and MongoDB Atlas's robust database capabilities, it provides a solid foundation for enterprise applications while maintaining flexibility for future growth.</p>
<p>The implementation demonstrates how careful consideration of security, automation, and scalability can result in an architecture that not only meets current business needs but also positions organizations for future success. Through features like private clustering, automated security scanning, and GitOps-driven deployment, it establishes a framework that supports both rapid innovation and stable operations.</p>
<p>Organizations adopting this architecture can expect improved security posture, reduced operational overhead, and enhanced ability to scale their applications while maintaining control over costs and complexity. The architecture's emphasis on automation and security-first design makes it particularly suitable for enterprises handling sensitive workloads while requiring operational agility.</p>
<p>For organizations considering similar architectures, the key is to maintain focus on security, automation, and scalability while ensuring that the implementation aligns with specific business requirements and compliance needs.</p>
<h2 id="heading-want-to-implement-this-architecture-check-out-these-resources">📚 Want to implement this architecture? Check out these resources:</h2>
<h3 id="heading-video-tutorials-french">🎥 Video Tutorials (French):</h3>
<ul>
<li><p>Secure GitHub Actions &amp; GCP with Workload Identity Federation: <a target="_blank" href="https://www.youtube.com/watch?v=VzP6NhN-rW0">https://www.youtube.com/watch?v=VzP6NhN-rW0</a></p>
</li>
<li><p>Configure MongoDB Atlas with Terraform Cloud: <a target="_blank" href="https://www.youtube.com/watch?v=GbGBIU97sCY">https://www.youtube.com/watch?v=GbGBIU97sCY</a></p>
</li>
<li><p>Connect Terraform Cloud to GCP via Workload Identity Federation: <a target="_blank" href="https://www.youtube.com/watch?v=ebV8VeNdscU">https://www.youtube.com/watch?v=ebV8VeNdscU</a></p>
</li>
<li><p>Connect to GCP private resources with IAP: <a target="_blank" href="https://www.youtube.com/watch?v=FUqWOMvyWxo">https://www.youtube.com/watch?v=FUqWOMvyWxo</a></p>
</li>
</ul>
<h3 id="heading-technical-guides-english">📝 Technical Guides (English):</h3>
<ul>
<li><p>Secure GitHub Actions-GCP Connection with Workload Identity Federation: <a target="_blank" href="https://merlin.microworka.com/establish-a-secure-connection-between-github-actions-and-google-cloud-platform-gcp-using-workload-identity-federation">https://merlin.microworka.com/establish-a-secure-connection-between-github-actions-and-google-cloud-platform-gcp-using-workload-identity-federation</a></p>
</li>
<li><p>Link Terraform Cloud &amp; GCP via Workload Identity Federation: <a target="_blank" href="https://merlin.microworka.com/how-to-safely-link-terraform-cloud-and-google-cloud-platform-via-workload-identity-federation">https://merlin.microworka.com/how-to-safely-link-terraform-cloud-and-google-cloud-platform-via-workload-identity-federation</a></p>
</li>
<li><p>Configure MongoDB Atlas with Terraform Cloud: <a target="_blank" href="https://merlin.microworka.com/easy-steps-to-configure-mongodb-atlas-with-terraform-and-terraform-cloud">https://merlin.microworka.com/easy-steps-to-configure-mongodb-atlas-with-terraform-and-terraform-cloud</a></p>
</li>
<li><p>Set up ArgoCD on Private GKE for GitOps: <a target="_blank" href="https://merlin.microworka.com/setting-up-argocd-on-private-google-kubernetes-engine-cluster-for-gitops-deployment">https://merlin.microworka.com/setting-up-argocd-on-private-google-kubernetes-engine-cluster-for-gitops-deployment</a></p>
</li>
</ul>
<p>#CloudArchitecture #GCP #MongoDB #DevOps #CloudSecurity #Infrastructure #TechInnovation #CloudComputing #Engineering</p>
<hr />
<p><em>This blog post is part of our technical architecture series. For more detailed information about specific components or implementation guidance, please reach out to our team.</em></p>
]]></content:encoded></item><item><title><![CDATA[Setting up ArgoCD on Private Google Kubernetes Engine Cluster for GitOps Deployment]]></title><description><![CDATA[ArgoCD is a popular open-source tool for implementing GitOps principles and managing Kubernetes resources declaratively using Git as a single source of truth. In this blog post, we'll learn how to deploy ArgoCD on a private Google Kubernetes Engine (...]]></description><link>https://merlin.microworka.com/setting-up-argocd-on-private-google-kubernetes-engine-cluster-for-gitops-deployment</link><guid isPermaLink="true">https://merlin.microworka.com/setting-up-argocd-on-private-google-kubernetes-engine-cluster-for-gitops-deployment</guid><category><![CDATA[ArgoCD]]></category><category><![CDATA[google cloud]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[gitops]]></category><category><![CDATA[Security]]></category><dc:creator><![CDATA[Merlin Saha]]></dc:creator><pubDate>Wed, 06 Nov 2024 06:52:49 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1715517965253/6dc3a53d-9a7a-4cd1-8141-fbecc4a8ad83.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>ArgoCD is a popular open-source tool for implementing GitOps principles and managing Kubernetes resources declaratively using Git as a single source of truth. In this blog post, we'll learn how to deploy ArgoCD on a private Google Kubernetes Engine (GKE) cluster and set up GitOps deployment using GitHub as the Git repository.</p>
<p><strong>Prerequisites:</strong></p>
<ul>
<li><p>A Google Cloud Platform (GCP) account</p>
</li>
<li><p><code>gcloud</code> command-line tool installed and authenticated</p>
</li>
<li><p>A GitHub account</p>
</li>
<li><p>A private Git repository for storing your Kubernetes manifests</p>
</li>
</ul>
<p><strong>Step 1:</strong> Create a GKE cluster. Create a new private GKE cluster or use an existing one. Make sure to enable the necessary APIs and grant the required permissions to your GCP account.</p>
<pre><code class="lang-yaml"><span class="hljs-comment"># From jump host / authorized host</span>
<span class="hljs-string">gcloud</span> <span class="hljs-string">auth</span> <span class="hljs-string">login</span>
<span class="hljs-string">sudo</span> <span class="hljs-string">apt-get</span> <span class="hljs-string">install</span> <span class="hljs-string">google-cloud-sdk-gke-gcloud-auth-plugin</span>
<span class="hljs-string">gcloud</span> <span class="hljs-string">compute</span> <span class="hljs-string">start-iap-tunnel</span> <span class="hljs-string">private-gke-jump-host</span> <span class="hljs-number">22</span> <span class="hljs-string">--local-host-port=localhost:&lt;LOCAL_PORT&gt;</span> <span class="hljs-comment"># use any open port, or omit --local-host-port to let gcloud pick a random one</span>

<span class="hljs-string">ssh</span> <span class="hljs-string">-J</span> <span class="hljs-string">localhost:&lt;LOCAL_PORT&gt;</span> <span class="hljs-number">192.168</span><span class="hljs-number">.1</span><span class="hljs-number">.7</span>

<span class="hljs-string">gcloud</span> <span class="hljs-string">auth</span> <span class="hljs-string">login</span>
<span class="hljs-string">sudo</span> <span class="hljs-string">apt-get</span> <span class="hljs-string">install</span> <span class="hljs-string">google-cloud-sdk-gke-gcloud-auth-plugin</span>
<span class="hljs-string">gcloud</span> <span class="hljs-string">container</span> <span class="hljs-string">clusters</span> <span class="hljs-string">get-credentials</span> <span class="hljs-string">private-empmng-cluster</span> <span class="hljs-string">--zone</span> <span class="hljs-string">us-east4-c</span> <span class="hljs-string">--project</span> <span class="hljs-string">hand-on-lab-404211</span>
<span class="hljs-string">gcloud</span> <span class="hljs-string">config</span> <span class="hljs-string">set</span> <span class="hljs-string">run/region</span> <span class="hljs-string">us-east4</span>
<span class="hljs-string">gcloud</span> <span class="hljs-string">auth</span> <span class="hljs-string">configure-docker</span>
<span class="hljs-string">kubectl</span> <span class="hljs-string">get</span> <span class="hljs-string">nodes</span>
</code></pre>
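<p>If you still need to create the private cluster itself, a minimal <code>gcloud</code> sketch is shown below. The cluster name, zone, and CIDR ranges are placeholders, and the networking flags must be adapted to your VPC design, so treat this as a starting point rather than the exact command used here:</p>

```shell
# Hypothetical example: private GKE cluster with private nodes and a
# private control-plane endpoint, reachable only from an authorized range.
gcloud container clusters create private-empmng-cluster \
    --zone us-east4-c \
    --enable-ip-alias \
    --enable-private-nodes \
    --enable-private-endpoint \
    --master-ipv4-cidr 172.16.0.0/28 \
    --enable-master-authorized-networks \
    --master-authorized-networks 192.168.1.0/24
```

<p>With <code>--enable-private-endpoint</code>, the control plane has no public IP, which is why the jump host and IAP tunnel above are needed to reach it.</p>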
<p><strong>Step 2:</strong> Install ArgoCD. Install ArgoCD on your GKE cluster using the official manifests:</p>
<pre><code class="lang-yaml"><span class="hljs-string">kubectl</span> <span class="hljs-string">create</span> <span class="hljs-string">namespace</span> <span class="hljs-string">argocd</span>

<span class="hljs-string">kubectl</span> <span class="hljs-string">apply</span> <span class="hljs-string">-n</span> <span class="hljs-string">argocd</span> <span class="hljs-string">-f</span> <span class="hljs-string">https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml</span>
<span class="hljs-string">kubectl</span> <span class="hljs-string">-n</span> <span class="hljs-string">argocd</span> <span class="hljs-string">get</span> <span class="hljs-string">pods</span> <span class="hljs-string">-w</span>
<span class="hljs-string">kubectl</span> <span class="hljs-string">-n</span> <span class="hljs-string">argocd</span> <span class="hljs-string">get</span> <span class="hljs-string">svc</span>
</code></pre>
<p><strong>Step 3:</strong> Expose ArgoCD API Server</p>
<p>The ArgoCD API server allows you to access the web UI. Patch its service type so it is reachable through a NodePort:</p>
<pre><code class="lang-yaml"><span class="hljs-string">kubectl</span> <span class="hljs-string">patch</span> <span class="hljs-string">svc</span> <span class="hljs-string">argocd-server</span> <span class="hljs-string">-n</span> <span class="hljs-string">argocd</span> <span class="hljs-string">--type='json'</span> <span class="hljs-string">-p</span> <span class="hljs-string">'[{"op":"replace","path":"/spec/type","value":"NodePort"}]'</span>
<span class="hljs-string">kubectl</span> <span class="hljs-string">-n</span> <span class="hljs-string">argocd</span> <span class="hljs-string">get</span> <span class="hljs-string">svc</span>
</code></pre>
<p><strong>Step 4:</strong> Access the ArgoCD Web UI</p>
<p>Access the ArgoCD web UI through the NodePort on <code>localhost:8088</code> using port forwarding.</p>
<ul>
<li><p>Retrieve the cluster nodes:</p>
<pre><code class="lang-yaml">  <span class="hljs-string">kubectl</span> <span class="hljs-string">get</span> <span class="hljs-string">nodes</span>
</code></pre>
</li>
<li><p>Retrieve the initial admin password using the following command:</p>
</li>
<li><pre><code class="lang-yaml">      <span class="hljs-string">kubectl</span> <span class="hljs-string">-n</span> <span class="hljs-string">argocd</span> <span class="hljs-string">get</span> <span class="hljs-string">secret</span> <span class="hljs-string">argocd-initial-admin-secret</span> <span class="hljs-string">-o</span> <span class="hljs-string">jsonpath='{.data.password}'</span> <span class="hljs-string">|</span> <span class="hljs-string">base64</span> <span class="hljs-string">-d</span>
</code></pre>
</li>
<li><p>Connect to the cluster node using an IAP tunnel; replace <code>&lt;gke-private-cluster-node&gt;</code>, <code>&lt;argocd-node-port&gt;</code> (e.g. 32347), and <code>&lt;cluster-location&gt;</code>:</p>
</li>
<li><pre><code class="lang-yaml">      <span class="hljs-string">gcloud</span> <span class="hljs-string">compute</span> <span class="hljs-string">start-iap-tunnel</span> <span class="hljs-string">&lt;gke-private-cluster-node&gt;</span> <span class="hljs-string">&lt;argocd-node-port&gt;</span> <span class="hljs-string">--local-host-port=localhost:8088</span> <span class="hljs-string">--zone=&lt;cluster-location&gt;</span>
</code></pre>
</li>
<li><p>Connect to ArgoCD Web UI using <a target="_blank" href="http://localhost:8088">localhost:8088</a></p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715516193028/64e74628-29e2-4236-b907-b06713951e58.png" alt class="image--center mx-auto" /></p>
<p><strong>Step 5:</strong> Configure ArgoCD with GitHub. In the ArgoCD web UI, navigate to the "Settings" section, then "Repositories", and connect ArgoCD to your GitHub account. You'll need to create a GitHub personal access token with the necessary permissions (repo, admin:repo_hook, read:user, user:email) and provide it to ArgoCD.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715516375194/a2cdbd52-7e33-41b4-8547-313df68d515c.png" alt class="image--center mx-auto" /></p>
<p><strong>CONNECTION STATUS:</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715516448595/428c788d-04f8-4eb5-b308-48de0b122d1d.png" alt class="image--center mx-auto" /></p>
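<p>The same repository connection can also be declared as a Kubernetes Secret instead of through the UI. A minimal sketch follows; the URL, username, and token values are placeholders you must replace:</p>

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: github-repo
  namespace: argocd
  labels:
    # This label tells ArgoCD to treat the Secret as a repository definition.
    argocd.argoproj.io/secret-type: repository
stringData:
  type: git
  url: https://github.com/<your-org>/<your-repo>.git
  username: <github-username>
  password: <github-personal-access-token>
```

<p>Applying this with <code>kubectl apply -n argocd -f repo-secret.yaml</code> should produce the same "Successful" connection status shown above.</p>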
<p><strong>Step 6:</strong> Create an Application in ArgoCD</p>
<p>Create a new ArgoCD application and point it to your GitHub repository containing the Kubernetes manifests. Specify the repository URL, target revision (branch or tag), and the path to your manifests.</p>
<p>Navigate to the "Applications" section and click on "+ NEW APP".</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715516685481/07fd2365-05b7-4318-8bbe-ef86eb6eac2f.png" alt class="image--center mx-auto" /></p>
<ul>
<li><p>GENERAL section</p>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715516960280/f07ed26c-beba-453b-9f9d-732cc59f4f7d.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>SOURCE section</p>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715516969090/a05f3b55-1c2b-46c4-ac24-6fcbf4662313.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>DESTINATION section</p>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715516974349/fe1a758f-0ac3-41d5-b844-a5a132b22fb7.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
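<p>If you prefer a declarative setup over the UI, the GENERAL, SOURCE, and DESTINATION sections above map onto an ArgoCD <code>Application</code> manifest. A minimal sketch, with placeholder repository URL, path, and namespace:</p>

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: empmng-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/<your-org>/<your-repo>.git
    targetRevision: main              # branch or tag to track
    path: manifests/                  # path to the Kubernetes manifests
  destination:
    server: https://kubernetes.default.svc  # deploy into the same cluster
    namespace: <your-app-namespace>
  syncPolicy:
    automated:
      prune: true     # delete resources removed from Git
      selfHeal: true  # revert manual drift back to the Git state
```

<p>Keeping this manifest in Git as well means the application definition itself is version-controlled, in the same GitOps spirit.</p>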
<p><strong>Step 7:</strong> Sync and Deploy</p>
<p>Once the application is created, you can sync and deploy your Kubernetes resources to the GKE cluster. ArgoCD will continuously monitor the Git repository for changes and automatically sync the cluster with the desired state defined in the manifests.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715517126795/ba0a5255-ffd0-4dd2-bd0d-f747f538b81f.png" alt class="image--center mx-auto" /></p>
<p><strong>Step 8:</strong> Verify the Deployment. Verify that your Kubernetes resources are deployed correctly on the GKE cluster.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715517391492/ac731915-a6d5-44ae-a28f-c07b4e40294a.png" alt class="image--center mx-auto" /></p>
<p>You can use the <code>kubectl</code> command or the GKE console to inspect the resources; replace <code>&lt;your-app-namespace&gt;</code> with your application's namespace:</p>
<pre><code class="lang-yaml"><span class="hljs-string">kubectl</span> <span class="hljs-string">get</span> <span class="hljs-string">deployment</span> <span class="hljs-string">-n</span> <span class="hljs-string">&lt;your-app-namespace&gt;</span>
</code></pre>
<p><strong>Congratulations!</strong> You've successfully deployed ArgoCD on a private GKE cluster and set up GitOps deployment with GitHub. You can now leverage the power of GitOps to manage your Kubernetes resources in a declarative and version-controlled manner.</p>
<p><strong>Note:</strong> This blog post provides a high-level overview of the steps involved. For more detailed instructions and troubleshooting, refer to the official ArgoCD documentation and GKE guides.</p>
<p>YouTube Demo Video: <a target="_blank" href="https://youtu.be/u7O1wqbChK0?t=729">https://youtu.be/u7O1wqbChK0?t=729</a></p>
]]></content:encoded></item><item><title><![CDATA[My Journey to Cloud & DevOps: Tackling the Cloud Resume Challenge]]></title><description><![CDATA[Introduction
My name is Merlin Saha, and I'm an experienced software developer into the exciting world of Cloud Architecture and DevOps Engineering. This blog post chronicles my journey, from my initial interest in Google Cloud to building a robust c...]]></description><link>https://merlin.microworka.com/my-journey-to-cloud-devops-tackling-the-cloud-resume-challenge</link><guid isPermaLink="true">https://merlin.microworka.com/my-journey-to-cloud-devops-tackling-the-cloud-resume-challenge</guid><category><![CDATA[Cloud]]></category><category><![CDATA[Cloud Computing]]></category><category><![CDATA[cloud native]]></category><category><![CDATA[Google]]></category><category><![CDATA[Devops]]></category><dc:creator><![CDATA[Merlin Saha]]></dc:creator><pubDate>Tue, 08 Oct 2024 04:59:43 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1716734003949/79b96a6a-84c7-43b9-8bf1-ac9610694ee6.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-introduction"><strong>Introduction</strong></h3>
<p>My name is <a class="user-mention" href="https://hashnode.com/@sahamerlin">Merlin Saha</a>, and I'm an experienced software developer venturing into the exciting world of Cloud Architecture and DevOps Engineering. This blog post chronicles my journey, from my initial interest in Google Cloud to building a robust cloud resume showcasing my newfound skills and Multicloud Architect certifications.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1716221829652/871006ac-569b-4f7f-87b5-47cc9b968f3f.png" alt class="image--center mx-auto" /></p>
<p>My journey has been anything but conventional: I worked as a motorcycle taxi driver to fund my IT courses and my Bachelor's in Software Engineering.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1721019154579/aca72af9-659b-43df-a978-284fd50914b5.jpeg" alt class="image--center mx-auto" /></p>
<p>These challenges instilled in me the perseverance and resourcefulness that now fuel my cloud journey.</p>
<h3 id="heading-the-spark-discovering-cloud-architecture"><strong>The Spark: Discovering Cloud Architecture</strong></h3>
<p>I first used Google Cloud in 2015 and earned my first MOOC learning-path certificate on OpenClassrooms, <a target="_blank" href="https://openclassrooms.com/fr/learning-path-certificates/5118518284">Deploy your Java applications on the Google Cloud</a> (issued under the name Saha). While working as a full-stack software engineer in 2021, I started developing solutions with Google Cloud Platform (GCP). This experience ignited a desire to explore cloud architecture further. With limited knowledge, I embarked on a learning quest, starting with the Coursera specializations "Architecting with Google Compute Engine" and "Developing Applications with Google Cloud."</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1718349626309/d850c612-6758-4841-a523-c5b59829ae46.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-building-a-strong-foundation"><strong>Building a Strong Foundation</strong></h3>
<p>My learning extended to Google Cloud certifications, including the Associate Cloud Engineer, Professional Cloud Architect, and Professional Cloud DevOps Engineer certificates. I actively participated in Google Cloud Skills Boost to solidify my understanding of theoretical concepts and practices. However, I yearned for hands-on experience with real-world projects that I could share with the cloud community.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722175343644/22b8fc09-e7f9-4599-9d5c-050e851681e5.gif" alt class="image--center mx-auto" /></p>
<h3 id="heading-taking-initiative-personal-projects-and-real-world-challenges"><strong>Taking Initiative: Personal Projects and Real-World Challenges</strong></h3>
<p>To bridge the gap, I embarked on personal projects leveraging Google Cloud services. I built a 3-tier highly available application, further fueling my desire to showcase my skills to the world. This led me to enter <a target="_blank" href="https://cloudresumechallenge.dev/docs/the-challenge/googlecloud/">The Cloud Resume Challenge</a> by <a class="user-mention" href="https://hashnode.com/@forrestbrazeal">Forrest Brazeal</a>, an opportunity to demonstrate my capabilities as a certified GCP professional.</p>
<p><strong><em>The Road is not Easy, with plenty of obstacles, but we must continue!</em></strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1718357193915/f6ed9b87-6807-483d-98d3-e3b8e157d0f9.gif" alt class="image--center mx-auto" /></p>
<h3 id="heading-the-cloud-resume-challenge-a-proving-ground"><strong>The Cloud Resume Challenge: A Proving Ground</strong></h3>
<p>The challenge resonated deeply. Here was a chance to showcase my expertise in cloud solutions and configurations, not just certifications. The challenge involved building a cloud-hosted resume utilizing serverless computing and DevOps practices.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1716224651638/1d977b2a-e74e-48e9-8cc3-ffc640442812.png" alt class="image--center mx-auto" /></p>
<p><strong>The Challenge Breakdown:</strong></p>
<ul>
<li><p><strong>Building a Serverless Web App:</strong> The core challenge involves constructing a complete web application for your resume. Here, you'll leverage the power of serverless computing, eliminating the need to manage physical servers.</p>
</li>
<li><p><strong>Visitor Counter Integration:</strong> Spice things up by incorporating a visitor counter! This not only adds functionality but demonstrates your ability to integrate different components within the cloud environment.</p>
</li>
<li><p><strong>Cloud Certification Power Up:</strong> The challenge strongly recommends obtaining a cloud certification to solidify your foundational knowledge. This certification serves as a valuable credential for potential employers.</p>
</li>
</ul>
<p><strong>But Wait, There's More!</strong></p>
<p>The Cloud Resume Challenge doesn't stop there. It offers a variety of "mod tracks" to expand your project and further hone your skills:</p>
<ul>
<li><p><strong>Security Savvy:</strong> Delve into the world of cloud security practices, learning how to safeguard your application and data.</p>
</li>
<li><p><strong>DevOps Disciple:</strong> Embrace the DevOps philosophy by integrating continuous integration and continuous delivery (CI/CD) into your workflow, streamlining the development and deployment process.</p>
</li>
</ul>
<p>Now that you're armed with this knowledge, it's time to move on to the next part.</p>
<h2 id="heading-cloud-resume-challenge-a-head-start-with-certifications">Cloud Resume Challenge: A Head Start with Certifications</h2>
<p>But what if you already have a solid foundation in cloud technologies? Here's where your existing certifications can be a game-changer!</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1716041349465/d5d9c19c-d0c0-466b-ab1a-44bceff9930f.png" alt class="image--center mx-auto" /></p>
<p>Fortunately, as we saw, before embarking on the challenge, I had already secured some key certifications:</p>
<ul>
<li><p><strong>Google Cloud Certified Professional Cloud DevOps Engineer</strong>.</p>
</li>
<li><p><strong>Google Cloud Certified Professional Cloud Architect</strong>.</p>
</li>
<li><p><strong>Google Cloud Certified Associate Cloud Engineer</strong></p>
</li>
<li><p><strong>HashiCorp Certified: Terraform Associate (003)</strong>.</p>
</li>
</ul>
<p>These certifications not only bolstered my confidence for the challenge but also verified the skills needed to excel in Cloud Architecture and DevOps Engineering roles.</p>
<h2 id="heading-lets-dive-deep-the-technical-backbone-of-cloud-resume-challenge">Let's Dive Deep: The Technical Backbone of Cloud Resume Challenge</h2>
<p>We've talked about the challenge and my prep, but now it's time to unveil the real star of the show: the technical architecture behind my cloud resume! Buckle up, because we're about to get geeky.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724008375998/f51f134c-9d8a-4580-9547-2d1a595af6f0.gif" alt class="image--center mx-auto" /></p>
<h3 id="heading-foundation-multiple-projects-for-multi-stage-deployments-with-devops-practices">Foundation: Multiple Projects for Multi-Stage Deployments with DevOps Practices</h3>
<p>The Cloud Resume Challenge emphasizes best practices like utilizing separate environments for development, testing, and production. This perfectly aligns with the <strong>"DevOps Mod: All The World's A Stage"</strong> concept. However, I took the approach a step further by incorporating <strong>Workload Identity Federation for enhanced security.</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1718352947556/a505c9e6-bdfd-442e-8ca2-e5ca0b616459.gif" alt class="image--center mx-auto" /></p>
<p><strong>Multi-Project Setup:</strong></p>
<ul>
<li><strong>Dedicated Project per Environment:</strong> Adhering to the Mod's recommendation, I created separate Google Cloud projects for each environment (Dev, QA, UAT, Prod) within our four-environment organization. This ensures isolated testing and resource management for each stage.</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1716135772513/b162dc1d-09fc-40c2-a1a0-c7b14b21d053.png" alt class="image--center mx-auto" /></p>
<p><strong>Workload Identity Federation Integration:</strong></p>
<ul>
<li><p><strong>Shared Workload Identity Project:</strong> Instead of a shared test project, I utilized a single Workload Identity Federation project. This project acts as a central hub, authenticating service accounts across all environment-specific projects.</p>
</li>
<li><p><strong>Secure Access for Workloads:</strong> Workload Identity Federation grants service accounts within each environment project the necessary permissions to access resources in that specific project.</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1716134965653/225f49d2-4fc8-428a-9c28-60b0b86e8435.png" alt class="image--center mx-auto" /></p>
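<p>As a sketch of what such a federation setup can look like in Terraform (pool and provider names, the project ID, and the repository are placeholders, not the exact configuration used in this project):</p>

```hcl
# Hypothetical Workload Identity Federation setup for GitHub Actions.
resource "google_iam_workload_identity_pool" "github" {
  project                   = "<wif-project-id>"
  workload_identity_pool_id = "github-pool"
}

resource "google_iam_workload_identity_pool_provider" "github" {
  project                            = "<wif-project-id>"
  workload_identity_pool_id          = google_iam_workload_identity_pool.github.workload_identity_pool_id
  workload_identity_pool_provider_id = "github-provider"

  # Map GitHub OIDC token claims onto Google attributes.
  attribute_mapping = {
    "google.subject"       = "assertion.sub"
    "attribute.repository" = "assertion.repository"
  }

  # Only tokens from this repository may authenticate.
  attribute_condition = "assertion.repository == \"<your-org>/<your-repo>\""

  oidc {
    issuer_uri = "https://token.actions.githubusercontent.com"
  }
}
```

<p>GitHub Actions workflows then exchange their OIDC token for short-lived GCP credentials, so no long-lived service account keys are stored in CI.</p>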
<p>By embracing multi-environment deployments, I not only enhanced the robustness of my Cloud Resume but also improved my understanding of DevOps best practices as highlighted in the "DevOps Mod."</p>
<h3 id="heading-embracing-automation-cloud-resume-goes-all-in-with-devops-practices">Embracing Automation: Cloud Resume Goes All In with DevOps Practices</h3>
<p>The Cloud Resume Challenge doesn't stop at building a cool web application; it's also about showcasing your expertise in DevOps practices. Here's how I incorporated automation into my Cloud Resume project, exceeding the basic challenge requirements and aligning with the "<strong>DevOps Mod: Automation Nation.</strong>"</p>
<p><strong>Infrastructure as Code (IaC) CI/CD:</strong></p>
<ol>
<li><p><strong>IaC GCP Foundation Provisioning:</strong> This dedicated pipeline utilizes GitHub Actions, Terraform Cloud, and Workload Identity to automate the provisioning of core GCP Folders, Projects and a shared Workload Identity project.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1718353328879/c4159d3c-9269-4302-8fe9-cef7d15fa914.gif" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>IaC GCP Projects Provisioning:</strong> Separate, dedicated GitHub repositories and Terraform Cloud workspaces for Dev, QA, UAT, and Prod handle the creation and configuration of environment-specific projects. This ensures isolated environments while enabling controlled cross-environment access.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1718353229398/7bda8055-f56a-4ac3-a4c3-489568472666.gif" alt class="image--center mx-auto" /></p>
</li>
</ol>
<p><strong>Backend and Frontend Deployment CI/CD:</strong></p>
<ol>
<li><p><strong>Frontend Project (Cloud Storage Deployment):</strong> A separate GitHub project holds the HTML/CSS/JS code for my resume website. This pipeline, powered by GitHub Actions and Workload Identity, automates the deployment of website files to Cloud Storage, along with clearing the CDN cache.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1716049278147/d52832ca-4e99-4355-8237-688f69aeb72d.gif" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>Backend Project (Python Cloud Function API):</strong> A dedicated GitHub project houses the Python code for my backend API. This pipeline leverages GitHub Actions and Workload Identity to automate the CI/CD process for the API.</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1725836050337/f7567ad6-0fed-458a-ac8c-58fff82e34bd.gif" alt class="image--center mx-auto" /></p>
<p>By embracing automation and exceeding the challenge requirements, this setup aligns with the "DevOps Mod: Automation Nation."</p>
<h3 id="heading-solutions-approach-and-project-resource-breakdown"><strong>Solutions Approach and</strong> Project Resource Breakdown</h3>
<p>This breakdown organizes the resources used in our Cloud Resume project by component and its role:</p>
<ol>
<li><p><strong>Frontend:</strong></p>
<ul>
<li><p><strong>HTML5, CSS3:</strong> A simple HTML page styled with CSS, plus a popup modal that gives more detail in the project design section.</p>
</li>
<li><p><strong>Cloud Storage:</strong> Hosting Static Website content on Google Cloud Storage (Multi-region bucket), managing bucket creation, versioning, and public access configs.</p>
</li>
<li><p><strong>Content Delivery Network (CDN):</strong></p>
<ul>
<li><p><strong>Primary CDN:</strong> Cloudflare (Provides rate limiting for Cloud Storage)</p>
</li>
<li><p><strong>Secondary CDN:</strong> Google Cloud CDN Interconnect (Optimizes connectivity with Cloudflare)</p>
</li>
</ul>
</li>
<li><p><strong>Domain Name System (DNS):</strong></p>
<ul>
<li><p><strong>DNS Provider:</strong> Cloud DNS</p>
</li>
<li><p><strong>Security:</strong> DNSSEC</p>
</li>
</ul>
</li>
<li><p>Cloud External HTTPS Load Balancer</p>
</li>
</ul>
</li>
</ol>
<p>    <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1716054462675/aa42dd20-d899-43a5-b9e5-bf958bf065d6.png" alt class="image--center mx-auto" /></p>
<p>    <strong><em>Resume url</em></strong> : <a target="_blank" href="https://sm-resume.microworka.com/">https://sm-resume.microworka.com/</a></p>
<ol start="2">
<li><p><strong>Backend Technologies:</strong></p>
<ul>
<li><p><strong>Firestore Datastore Mode:</strong> Database that stores the visitor count</p>
</li>
<li><p><strong>Cloud Function (2nd Generation):</strong> Python 3.12 function that saves the visitor count to the database</p>
</li>
<li><p><strong>Google Cloud API Gateway:</strong> Exposes the visitor-count endpoint and restricts access</p>
</li>
<li><p><strong>Google API Key Credentials:</strong> Restrict access to the resume's API Gateway URL</p>
</li>
</ul>
</li>
</ol>
<p>    <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1716056503635/603a8e69-e4c2-4b0a-bfb0-0573040b1346.png" alt class="image--center mx-auto" /></p>
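<p>As a rough, runnable sketch of the counter logic: a plain Python dict stands in for Firestore here so the example runs anywhere; the deployed Cloud Function would do this read-modify-write through the <code>google-cloud-firestore</code> client inside a transaction, and all names below are illustrative, not the production code:</p>

```python
# Illustrative sketch of the visitor-counter logic, NOT the deployed code.
# A dict stands in for the Firestore document store; the real function
# would perform this read-modify-write inside a Firestore transaction.

def increment_visitor_count(store, doc_id="resume"):
    """Increment and return the visitor count for one document."""
    doc = store.setdefault(doc_id, {"count": 0})
    doc["count"] += 1
    return doc["count"]

def handle_request(store):
    """Shape of the JSON payload the frontend receives on each visit."""
    return {"visitors": increment_visitor_count(store)}
```

<p>Calling <code>handle_request</code> twice against the same store yields counts 1 and 2, mirroring what the page displays after each visit.</p>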
<ol start="3">
<li><p><strong>Bridging the Gap: Frontend and Backend Integration in Your Cloud Resume</strong></p>
<p> Your Cloud Resume project seamlessly connects the frontend and backend, demonstrating your grasp of web development concepts.</p>
<ul>
<li><p><strong>JavaScript</strong>: handles communication between the frontend and backend</p>
</li>
<li><p><strong>Cypress:</strong> Tool for running smoke tests</p>
</li>
</ul>
</li>
</ol>
<p>    This combined approach, leveraging JavaScript for communication between the frontend and backend and Cypress for testing, establishes a robust and well-tested connection between your Cloud Resume's frontend and backend, ensuring a flawless user experience.</p>
<p>    <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1716132182171/bc3ebf9f-1ecc-482a-a9dd-a5b570813fb6.png" alt class="image--center mx-auto" /></p>
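<p>To make the wiring concrete, here is a small Python sketch of the call the page's JavaScript makes on each load: a request to the API Gateway endpoint carrying an API key header. The URL and header name are placeholders, not the production values:</p>

```python
# Illustrative sketch of the frontend-to-backend call; the real site does
# this from JavaScript with fetch(). URL and key below are placeholders.
import urllib.request

API_URL = "https://example-gateway.example.com/visitor-count"  # placeholder

def build_count_request(api_key: str) -> urllib.request.Request:
    """Build the POST request that bumps the visitor counter."""
    return urllib.request.Request(
        API_URL,
        method="POST",
        headers={"x-api-key": api_key},  # the gateway validates this key
    )
```

<p>Sending the built request with <code>urllib.request.urlopen</code> would return the JSON visitor count; the Cypress smoke tests exercise the same flow in the browser.</p>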
<ol start="4">
<li><p><strong>Security: Essential Services and Best Practices used</strong></p>
<p> In today's digital landscape, securing your cloud environment is paramount. For our Cloud Resume, we take security very seriously. This section dives into the essential services and best practices we leverage to build a robust and secure resume.</p>
<p> <strong>Core Security Services:</strong></p>
<ul>
<li><p><strong>Guarding Against Misconfigurations:</strong> Checkov, a policy-as-code tool, continuously scans our infrastructure configurations for security vulnerabilities. By proactively identifying misconfigurations, we prevent security weaknesses before deployment.</p>
</li>
<li><p><strong>Proactive Web App Security:</strong> Google Cloud Web Security Scanner plays a vital role in safeguarding our web applications. It actively hunts for security vulnerabilities, allowing us to address them before malicious actors can exploit them.</p>
</li>
<li><p><strong>Centralized Security Command Center:</strong> Maintaining a holistic view of our GCP security posture is crucial. Google Cloud Security Command Center offers a unified platform that aggregates findings from various security tools. This centralized view empowers us to prioritize and efficiently remediate security threats.</p>
</li>
<li><p><strong>Granular Access Control with IAM:</strong> Identity and Access Management (IAM) is the cornerstone of access control in GCP. IAM allows us to define user groups, service accounts, and their specific permissions for various GCP resources. This ensures only authorized entities can access resources, and only with the necessary permissions (read, write, etc.).</p>
</li>
<li><p><strong>DNSSEC: Tamper-Proof DNS:</strong> Domain Name System Security Extensions (DNSSEC) adds a crucial layer of security to our DNS infrastructure. By cryptographically authenticating DNS records, DNSSEC safeguards against DNS spoofing attempts.</p>
</li>
<li><p><strong>Seamless SSL/TLS Management:</strong> For secure communication between web applications and users, we rely on Certificate Manager. This service simplifies the process of issuing and managing SSL/TLS certificates for our GCP resources.</p>
</li>
<li><p><strong>Simplified Workload Identity Management:</strong> The Centralized Workload Identity Federation Project streamlines how workloads running on GCP authenticate to external services. This eliminates the need to manage individual credentials for each workload, enhancing security and manageability.</p>
</li>
<li><p><strong>Cloud Storage with Granular Access:</strong> Cloud Storage bucket policies provide granular control over access to data stored in Google Cloud Storage. These policies define who can access a bucket and what actions they can perform, ensuring sensitive data remains protected.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1718352727075/b381c6f1-05db-47d7-8324-abab26ddf1ca.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
</li>
</ol>
<p>    <strong>Least Privilege:</strong></p>
<p>    A fundamental security principle we adhere to is the principle of least privilege. This means all service accounts used by our services are granted only the minimum set of permissions required to fulfill their designated tasks. This minimizes the potential attack surface and reduces the risk of unauthorized access in case of a compromised service account.</p>
<ol start="5">
<li><p><strong>Automating Cloud Resume: CI/CD and IaC</strong></p>
<ul>
<li><p><strong>Continuous Integration/Continuous Deployment (CI/CD):</strong> Implemented GitHub Actions for automated testing (Cypress), build, and deployment processes across multiple environments (dev, QA, UAT, prod).</p>
</li>
<li><p><strong>Infrastructure as Code (IaC):</strong> Utilized Terraform and Terraform Cloud for provisioning and managing GCP resources and state, including projects, networking, security, and IAM configurations, with separate workspaces for each environment.</p>
</li>
</ul>
</li>
</ol>
<p>    <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1716134675643/8db63a6c-f89a-4466-8aab-4a72e5fada76.png" alt class="image--center mx-auto" /></p>
<p>    Treating the Cloud Resume as an enterprise-level project to build skills and experience is a good way to prepare yourself for real-world challenges!</p>
<p>    <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1716229434905/6991c237-0904-4e7a-8ecf-7bec03970dca.png" alt class="image--center mx-auto" /></p>
<ol start="6">
<li><p><strong>Keeping an Eye on Your Cloud Resume: Monitoring and Alerting</strong></p>
<p> A crucial aspect of any well-managed cloud application is monitoring. Your Cloud Resume project demonstrates an understanding of this concept by incorporating three key tools:</p>
<ul>
<li><p><strong>Cloud Monitoring</strong>: Acts as a vigilant guardian, providing real-time insights into the health and performance of your Cloud Resume infrastructure. Health checks and SLOs track API Gateway and Cloud Storage network latency and uptime, helping to identify areas for cost optimization.</p>
</li>
<li><p><strong>Cloud Logging</strong>: Acts as the digital diary, recording all events and logs generated by your application.</p>
</li>
<li><p><strong>PagerDuty</strong>: Serves as the alarm system. It integrates with Cloud Monitoring and triggers alerts when predefined thresholds are breached.</p>
</li>
</ul>
</li>
</ol>
<p>    <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1716132662996/71bd0426-f68a-4baa-a8a7-d10a8becfa25.png" alt class="image--center mx-auto" /></p>
<ol start="7">
<li><p><strong>Bonus: Generative AI and Appointments</strong></p>
<p> By incorporating a conversational AI chatbot and appointment scheduling, my Cloud Resume goes beyond the traditional format, offering a more engaging and interactive experience for potential employers. This demonstrates a willingness to embrace cutting-edge technologies and an ability to leverage them to create a truly unique and effective resume.</p>
<ul>
<li><p><strong>Conversational AI with Dialogflow CX:</strong> Users can interact with the chatbot by asking questions about the experience, skills, or certifications listed on the resume.</p>
</li>
<li><p><strong>Appointment Scheduling with Google Calendar</strong>: Eliminates the need for back-and-forth emails or phone calls to schedule interviews, making the process more convenient for both me and the employer.</p>
</li>
</ul>
</li>
</ol>
<p>    <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1716135498577/80fc6e05-772a-42b9-8829-fe9d68f83b54.png" alt class="image--center mx-auto" /></p>
<ol start="8">
<li><p><strong>High Availability:</strong> Multi-region Cloud Storage buckets, Cloud CDN, HTTPS, and an external load balancer ensured resilience.</p>
</li>
<li><p><strong>Security:</strong> Cloud Armor was initially considered, but Cloudflare was ultimately chosen for its rate limiting capabilities on Cloud Storage. Workload Identity Federation and IAM with least privilege access controls further bolstered security.</p>
</li>
</ol>
<h3 id="heading-aligning-with-the-google-cloud-architecture-framework"><strong>Aligning with the Google Cloud Architecture Framework</strong></h3>
<p>The Google Cloud Architecture Framework provides recommendations and best practices to help architects, developers, administrators, and other cloud professionals design and operate a secure, efficient, resilient, high-performance, and cost-effective cloud topology.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1716053327946/cbdba283-867c-40a4-8a1a-63919e198e84.png" alt class="image--center mx-auto" /></p>
<ul>
<li><p><strong>System Design:</strong> Implemented a secure and highly available architecture with a clear separation of concerns (database, backend, frontend, security).</p>
</li>
<li><p><strong>Operational Excellence:</strong> Automated deployments using CI/CD pipelines ensure efficient management of workloads.</p>
</li>
<li><p><strong>Security and Compliance:</strong> Utilized IAM, DNSSEC, Certificate Manager, Web Security Scanner, Security Command Center and Cloud Storage bucket policies for robust security.</p>
</li>
<li><p><strong>Reliability:</strong> Employed Cloud Storage Multi-region, Firestore, Cloud Load Balancing, and Cloud DNS for a resilient and available application.</p>
</li>
<li><p><strong>Cost Optimization:</strong> Leveraged serverless provisioning with Terraform to minimize costs, along with budget alerts and SLOs.</p>
</li>
<li><p><strong>Performance Efficiency:</strong> Optimized content delivery with Cloud CDN and fine-tuned cloud resources for optimal performance.</p>
</li>
</ul>
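<p>The budget alerts mentioned above can themselves be provisioned with Terraform. A minimal, hypothetical sketch (billing account ID, amount, and thresholds are placeholders, not values from this project):</p>

```yaml
# Hypothetical budget with alerts at 50% and 90% of spend
resource "google_billing_budget" "resume" {
  billing_account = "000000-000000-000000"  # placeholder
  display_name    = "cloud-resume-budget"

  amount {
    specified_amount {
      currency_code = "USD"
      units         = "10"
    }
  }

  threshold_rules {
    threshold_percent = 0.5
  }
  threshold_rules {
    threshold_percent = 0.9
  }
}
```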
<p>This experience showcased my ability to not only grasp complex cloud concepts but also translate them into a secure, scalable, and feature-rich cloud solution.</p>
<p><strong>The Cloud Resume Challenge: Mission Accomplished. This project pushed me to confront challenges and learn from my mistakes!</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1718356826965/c292a0bb-baac-4bb6-a1df-0c016213d0d4.gif" alt class="image--center mx-auto" /></p>
<h3 id="heading-challenges-and-overcoming-them"><strong>Challenges and Overcoming Them</strong></h3>
<p>Setting up a GCP Foundation Organization and a robust deployment strategy proved to be hurdles. Additionally, integrating Cloud Armor for DDoS protection required alternative solutions like Cloudflare with Cloud DNS. The backend initially lacked a separate endpoint for uptime checks. To address this, I plan to migrate to Cloud Run for improved deployment strategies.</p>
<p><strong><em>Resume url</em></strong> : <a target="_blank" href="https://sm-resume.microworka.com/">https://sm-resume.microworka.com/</a></p>
<h3 id="heading-lessons-and-challenges-learned"><strong>Lessons and Challenges Learned</strong></h3>
<p>This project proved to be an invaluable learning experience, highlighting several key insights and challenges:</p>
<ol>
<li><p><strong>Strong Foundation:</strong> A solid understanding of cloud architecture principles is paramount. This project reinforced the importance of having a comprehensive grasp of GCP services and how they interact (I optimized the GCP Landing Zone to meet my needs and reduce cost by removing unnecessary services like VPC Sharing…).</p>
</li>
<li><p><strong>Hands-on Practice:</strong> While certifications provided a theoretical base, this practical application was crucial for truly internalizing cloud concepts. The challenge of building a real-world application bridged the gap between theory and practice.</p>
</li>
<li><p><strong>Automation is Key:</strong> Implementing DevOps practices, particularly CI/CD pipelines, significantly streamlined deployments and maintenance. This experience emphasized the efficiency gains from automation in cloud environments.</p>
</li>
<li><p><strong>Security as a Priority:</strong> The project underscored the critical nature of implementing security measures. From IAM policies to HTTPS implementation, each security decision proved vital in creating a production-ready application.</p>
</li>
<li><p><strong>Environment-Specific Deployments:</strong> Utilizing separate environments (Dev, QA, UAT, Prod) and leveraging Git's Fast-Forward Merge principle for deployments was enlightening. This approach ensured controlled, systematic rollouts and easier troubleshooting.</p>
</li>
<li><p><strong>Adapting to Limitations:</strong> The challenge of implementing Cloud Armor for Cloud Storage as a backend service was a valuable lesson in flexibility. Pivoting to Cloudflare as an alternative, due to rate-limiting issues with Cloud Armor, demonstrated the importance of being adaptable and finding creative workarounds in cloud architecture.</p>
</li>
<li><p><strong>Cost Management:</strong> Balancing performance with cost-efficiency was an ongoing challenge. It highlighted the importance of continuous monitoring and optimization of cloud resources.</p>
</li>
<li><p><strong>Documentation is Crucial:</strong> Maintaining clear, up-to-date documentation throughout the project proved essential, especially when troubleshooting issues or onboarding new features.</p>
</li>
<li><p><strong>Community Support:</strong> Engaging with the cloud community, through forums and social media, provided invaluable insights and solutions to challenges encountered during the project.</p>
</li>
</ol>
<p><strong>Overcoming Specific Challenges:</strong></p>
<ol>
<li><p><strong>Multi-Environment Setup:</strong> Initially, configuring distinct environments posed a challenge. I overcame this by thoroughly studying GCP's resource hierarchy and implementing a clear naming convention and access policies for each environment.</p>
</li>
<li><p><strong>CI/CD Pipeline Complexity:</strong> Setting up a robust CI/CD pipeline that worked across all environments was initially daunting. I tackled this by starting with a basic pipeline and incrementally adding complexity, thoroughly testing at each stage.</p>
</li>
</ol>
<h3 id="heading-resume-are-in-the-cloud-now-looking-ahead-the-next-steps"><strong>The Resume Is in the Cloud Now — Looking Ahead: The Next Steps</strong></h3>
<p>My cloud resume is a stepping stone, not a destination. I plan to continuously improve it while optimizing costs. This blog post is just one way I'm sharing my knowledge: I intend to keep posting on this blog and sharing on my <a target="_blank" href="https://www.youtube.com/@devsahamerlin?sub_confirmation=1">YouTube Channel</a> to empower others on their cloud journeys, including cost-effective practices.</p>
<p><strong><em>Resume url</em></strong> : <a target="_blank" href="https://sm-resume.microworka.com/">https://sm-resume.microworka.com/</a></p>
<h3 id="heading-conclusion"><strong>Conclusion</strong></h3>
<p>My transformation from software developer to aspiring Multi-cloud architect is a testament to continuous learning and a passion for technology. The Cloud Resume Challenge provided a project to challenge my skills, and I'm confident it will be a valuable asset in my next step.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722182147172/a38e8987-1158-4574-808b-871f697c4389.gif" alt class="image--center mx-auto" /></p>
<p>Are you seeking a seasoned professional for roles such as:</p>
<ul>
<li><p>Cloud &amp; DevOps Engineer</p>
</li>
<li><p>Cloud Platform Engineer</p>
</li>
<li><p>Cloud Solutions Architect</p>
</li>
<li><p>Cloud Architect</p>
</li>
<li><p>Cloud Technology Evangelist</p>
</li>
<li><p>Multi-Cloud &amp; DevOps Engineer</p>
</li>
<li><p>Multi-Cloud Technology Evangelist</p>
</li>
<li><p>Cloud Automation Engineer</p>
</li>
</ul>
<p>I offer a combination of skills and experiences:</p>
<ul>
<li><p>9+ years in software development with extensive cloud expertise across GCP and Azure, plus working knowledge of OCI and AWS</p>
</li>
<li><p>15+ cloud certifications, including Google Cloud, Azure, Databricks, Aviatrix, and Oracle Cloud</p>
</li>
<li><p>Proven track record of architecting and implementing secure, scalable cloud solutions</p>
</li>
<li><p>Strong background in DevSecOps, infrastructure automation, and Generative AI integration</p>
</li>
<li><p>Experience in optimizing cloud costs and improving operational efficiency</p>
</li>
<li><p>Passion for innovative business solutions and continuous learning in cloud technologies</p>
</li>
</ul>
<p>My goal is to leverage this diverse skill set to drive an organization's cloud transformation, enhance its security posture, and accelerate time-to-market. I'm particularly adept at:</p>
<ul>
<li><p>Designing and implementing multi-cloud architectures</p>
</li>
<li><p>Optimizing DevOps processes and CI/CD pipelines</p>
</li>
<li><p>Integrating cutting-edge technologies like Kubernetes and Generative AI</p>
</li>
<li><p>Ensuring robust cloud security and compliance</p>
</li>
</ul>
<p>Let's connect and explore how my expertise in cloud architecture, DevOps, and emerging technologies can add significant value to your team and drive your cloud initiatives forward <a target="_blank" href="https://www.linkedin.com/in/merlin-saha/">https://www.linkedin.com/in/merlin-saha/</a></p>
]]></content:encoded></item><item><title><![CDATA[Multi-Cloud in 2025: Beyond the Hype]]></title><description><![CDATA[The cloud computing landscape has evolved significantly over the past decade. While many organizations already use multiple cloud services, there's a crucial distinction between simply using multiple clouds and implementing a strategic multi-cloud ar...]]></description><link>https://merlin.microworka.com/the-multi-cloud-revolution-embracing-flexibility-security-and-performance</link><guid isPermaLink="true">https://merlin.microworka.com/the-multi-cloud-revolution-embracing-flexibility-security-and-performance</guid><category><![CDATA[multicloud]]></category><category><![CDATA[Cloud Computing]]></category><category><![CDATA[innovation]]></category><category><![CDATA[automation]]></category><dc:creator><![CDATA[Merlin Saha]]></dc:creator><pubDate>Sun, 30 Jun 2024 09:36:46 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1731508095906/4f8ade8b-83c8-42be-b9c9-9f420966b53b.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The cloud computing landscape has evolved significantly over the past decade. While many organizations already use multiple cloud services, there's a crucial distinction between simply using multiple clouds and implementing a strategic multi-cloud architecture. This comprehensive guide explores why this distinction matters and how enterprises can build an effective multi-cloud strategy.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731504417701/1b463e98-ad2a-4ac3-9ed9-a2471448a94f.gif" alt class="image--center mx-auto" /></p>
<h2 id="heading-understanding-true-multi-cloud">Understanding True Multi-Cloud</h2>
<h3 id="heading-what-multi-cloud-isnt">What Multi-Cloud Isn't</h3>
<ul>
<li><p>Running different workloads on different clouds without integration</p>
</li>
<li><p>Having multiple disconnected cloud accounts across departments</p>
</li>
<li><p>Using SaaS solutions from various providers without a unified strategy</p>
</li>
</ul>
<h3 id="heading-what-multi-cloud-should-be">What Multi-Cloud Should Be</h3>
<ul>
<li><p>A deliberate architectural approach leveraging each cloud's strengths</p>
</li>
<li><p>An integrated ecosystem with seamless data and application flow</p>
</li>
<li><p>A unified governance and management framework across providers</p>
</li>
</ul>
<h2 id="heading-the-strategic-advantages-of-multi-cloud">The Strategic Advantages of Multi-Cloud</h2>
<h3 id="heading-1-best-of-breed-solutions">1. Best-of-Breed Solutions</h3>
<p>Modern enterprises require diverse capabilities that no single cloud provider can fully deliver. A strategic multi-cloud approach allows organizations to:</p>
<ul>
<li><p>Leverage AWS's extensive service ecosystem for microservices</p>
</li>
<li><p>Utilize Google Cloud's superior AI/ML capabilities</p>
</li>
<li><p>Take advantage of Azure's deep integration with Microsoft enterprise tools</p>
</li>
<li><p>Access Oracle's high-performance database solutions across clouds</p>
</li>
</ul>
<p>Real-world example: MongoDB Atlas deployments across AWS, Azure, and Google Cloud provide high availability with workload isolation while maintaining consistent performance across regions.</p>
<h3 id="heading-2-enhanced-enterprise-resilience">2. Enhanced Enterprise Resilience</h3>
<p>Multi-cloud architectures provide natural redundancy and disaster recovery capabilities:</p>
<ul>
<li><p>Geographic distribution of workloads</p>
</li>
<li><p>Provider-level failover options</p>
</li>
<li><p>Reduced impact from regional outages</p>
</li>
<li><p>Enhanced business continuity planning</p>
</li>
</ul>
<p>Example: Oracle Database availability on both Microsoft Azure and Google Cloud ensures critical workloads remain operational even during provider-specific incidents.</p>
<h3 id="heading-3-strategic-flexibility">3. Strategic Flexibility</h3>
<p>A well-implemented multi-cloud strategy offers:</p>
<ul>
<li><p>Negotiating leverage with providers</p>
</li>
<li><p>Ability to switch workloads between clouds</p>
</li>
<li><p>Optimization of costs across providers</p>
</li>
<li><p>Freedom to choose best-fit services for each requirement</p>
</li>
</ul>
<h2 id="heading-essential-building-blocks-for-multi-cloud-success">Essential Building Blocks for Multi-Cloud Success</h2>
<h3 id="heading-1-connectivity-solutions">1. Connectivity Solutions</h3>
<p>Modern multi-cloud architectures require robust interconnection:</p>
<ul>
<li><p><strong>Direct Connections:</strong></p>
<ul>
<li><p>Google Cloud Dedicated Interconnect</p>
</li>
<li><p>Google Cloud Cross-Cloud Interconnect</p>
</li>
<li><p>Azure ExpressRoute</p>
</li>
<li><p>Oracle FastConnect</p>
</li>
<li><p>AWS DirectConnect</p>
</li>
</ul>
</li>
<li><p><strong>Network Orchestration:</strong></p>
<ul>
<li><p>Aviatrix Multi-Cloud Network Architecture</p>
</li>
<li><p>Software-defined networking across clouds</p>
</li>
<li><p>Unified security policies</p>
</li>
</ul>
</li>
<li><p><strong>CSP Multi-Cloud &amp; Hybrid Services:</strong></p>
<ul>
<li><p>Google Cloud Anthos</p>
</li>
<li><p>AWS Outposts</p>
</li>
<li><p>Azure Arc</p>
</li>
<li><p>Azure Stack</p>
</li>
<li><p>VMWare</p>
</li>
</ul>
</li>
</ul>
<iframe width="560" height="315" src="https://www.youtube.com/embed/0M1wLVGQJck?si=RbnYoGpLNc1YrA4J"></iframe>

<h3 id="heading-2-automation-and-infrastructure-as-code">2. Automation and Infrastructure as Code</h3>
<p>Successful multi-cloud management requires sophisticated automation:</p>
<pre><code class="lang-yaml"><span class="hljs-comment"># Example Terraform configuration for multi-cloud</span>
<span class="hljs-string">provider</span> <span class="hljs-string">"aws"</span> {
  <span class="hljs-string">region</span> <span class="hljs-string">=</span> <span class="hljs-string">"us-west-2"</span>
}

<span class="hljs-string">provider</span> <span class="hljs-string">"azurerm"</span> {
  <span class="hljs-string">features</span> {}
}

<span class="hljs-string">provider</span> <span class="hljs-string">"google"</span> {
  <span class="hljs-string">project</span> <span class="hljs-string">=</span> <span class="hljs-string">"my-project"</span>
  <span class="hljs-string">region</span>  <span class="hljs-string">=</span> <span class="hljs-string">"us-central1"</span>
}

<span class="hljs-comment"># Cross-cloud resource management</span>
</code></pre>
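<p>With those providers configured, the "cross-cloud resource management" placeholder might look like the following — one equivalent object-storage resource per cloud, managed from a single configuration (all names here are illustrative):</p>

```yaml
resource "aws_s3_bucket" "assets" {
  bucket = "mycorp-assets-aws"
}

resource "azurerm_storage_account" "assets" {
  name                     = "mycorpassetsazure"
  resource_group_name      = "mycorp-rg"
  location                 = "westus2"
  account_tier             = "Standard"
  account_replication_type = "LRS"
}

resource "google_storage_bucket" "assets" {
  name     = "mycorp-assets-gcp"
  location = "US"
}
```

<p>A single <code>terraform plan</code> then reveals drift across all three clouds at once, which is the practical payoff of unified provisioning.</p>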
<iframe width="560" height="315" src="https://www.youtube.com/embed/p6hGNs6_Gfo?si=dHLLZptRRDZlnmMV"></iframe>

<p>Key components:</p>
<ul>
<li><p>Terraform for infrastructure provisioning</p>
</li>
<li><p>GitHub Actions for CI/CD pipelines</p>
</li>
<li><p>Terraform Cloud for state management</p>
</li>
<li><p>Custom scripts for cross-cloud orchestration</p>
</li>
</ul>
<h3 id="heading-3-security-and-governance">3. Security and Governance</h3>
<p>Critical considerations for multi-cloud security:</p>
<ul>
<li><p>Identity and Access Management (IAM) across clouds</p>
</li>
<li><p>Consistent security policies and compliance</p>
</li>
<li><p>Centralized logging and monitoring</p>
</li>
<li><p>Regular security audits and assessments</p>
</li>
</ul>
<h2 id="heading-implementation-best-practices">Implementation Best Practices</h2>
<h3 id="heading-1-strategic-planning">1. Strategic Planning</h3>
<ul>
<li><p>Begin with clear business requirements</p>
</li>
<li><p>Define specific criteria for workload placement</p>
</li>
<li><p>Create a detailed migration roadmap</p>
</li>
<li><p>Establish KPIs for success measurement</p>
</li>
</ul>
<h3 id="heading-2-technical-architecture">2. Technical Architecture</h3>
<ul>
<li><p>Design for interoperability</p>
</li>
<li><p>Implement consistent naming conventions</p>
</li>
<li><p>Plan for data sovereignty requirements</p>
</li>
<li><p>Consider latency between cloud providers</p>
</li>
</ul>
<h3 id="heading-3-operational-excellence">3. Operational Excellence</h3>
<ul>
<li><p>Develop cross-cloud monitoring strategies</p>
</li>
<li><p>Implement centralized logging</p>
</li>
<li><p>Create unified incident response procedures</p>
</li>
<li><p>Maintain documentation and training programs</p>
</li>
</ul>
<h2 id="heading-common-challenges-and-solutions">Common Challenges and Solutions</h2>
<h3 id="heading-1-cost-management">1. Cost Management</h3>
<ul>
<li><p>Implement cloud cost management tools</p>
</li>
<li><p>Regular cost optimization reviews</p>
</li>
<li><p>Clear chargeback mechanisms</p>
</li>
<li><p>Budget alerts and monitoring</p>
</li>
</ul>
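<p>The threshold check behind such budget alerts can be sketched in a few lines of generic Python (provider names and figures below are made up for illustration):</p>

```python
# Generic multi-cloud budget-threshold check.

def over_budget(spend_by_provider: dict, budgets: dict, threshold: float = 0.9) -> list:
    """Return providers whose spend has reached `threshold` of their budget."""
    return sorted(
        provider
        for provider, spend in spend_by_provider.items()
        if spend >= threshold * budgets.get(provider, float("inf"))
    )

if __name__ == "__main__":
    spend   = {"aws": 950.0, "azure": 400.0, "gcp": 710.0}
    budgets = {"aws": 1000.0, "azure": 800.0, "gcp": 750.0}
    # aws (95%) and gcp (~95%) have crossed the 90% threshold.
    print(over_budget(spend, budgets))
```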
<h3 id="heading-2-skill-gaps">2. Skill Gaps</h3>
<ul>
<li><p>Training programs for team members</p>
</li>
<li><p>Partnerships with cloud experts</p>
</li>
<li><p>Documentation and knowledge sharing</p>
</li>
<li><p>Regular skill assessment and development</p>
</li>
</ul>
<h3 id="heading-3-complex-operations">3. Complex Operations</h3>
<ul>
<li><p>Automated operational procedures</p>
</li>
<li><p>Clear escalation paths</p>
</li>
<li><p>Defined responsibility matrices</p>
</li>
<li><p>Regular operational reviews</p>
</li>
</ul>
<h2 id="heading-measuring-multi-cloud-success">Measuring Multi-Cloud Success</h2>
<p>Key metrics to track:</p>
<ol>
<li><p><strong>Technical Metrics:</strong></p>
<ul>
<li><p>Cross-cloud latency</p>
</li>
<li><p>Service availability</p>
</li>
<li><p>Recovery time objectives</p>
</li>
<li><p>Performance benchmarks</p>
</li>
</ul>
</li>
<li><p><strong>Business Metrics:</strong></p>
<ul>
<li><p>Cost optimization</p>
</li>
<li><p>Time-to-market improvements</p>
</li>
<li><p>Resource utilization</p>
</li>
<li><p>Innovation enablement</p>
</li>
</ul>
</li>
</ol>
<h2 id="heading-future-trends-in-multi-cloud">Future Trends in Multi-Cloud</h2>
<p>Emerging developments to watch:</p>
<ul>
<li><p>Edge computing integration</p>
</li>
<li><p>AI-driven cloud orchestration</p>
</li>
<li><p>Enhanced cross-cloud services</p>
</li>
<li><p>Improved standardization</p>
</li>
</ul>
<h2 id="heading-conclusion">Conclusion</h2>
<p>A successful multi-cloud strategy requires careful planning, robust architecture, and continuous optimization. While the journey may seem complex, the benefits of increased flexibility, resilience, and innovation potential make it worthwhile for many organizations. Remember: multi-cloud should never be adopted simply because it's trendy – it must align with specific business needs and capabilities.</p>
<hr />
<p><em>Last updated: November 2024</em></p>
<h2 id="heading-resources-to-get-started">Resources to get started</h2>
<ul>
<li><h3 id="heading-multi-cloud-automation-azure-aws-gcp-with-terraform-cloud-and-cicd-on-github-actionshttpswwwyoutubecomplaylistlistplb1jwwiio9jo2zxpufbrmuamthsiu1zxcampindex2ampppiaqb"><a target="_blank" href="https://www.youtube.com/playlist?list=PLB1JwWIio9JO2ZXpufbRmUaMThSIU1zxc&amp;index=2&amp;pp=iAQB">Multi-Cloud Automation Azure, AWS, GCP with Terraform Cloud and CI/CD on GitHub Actions</a></h3>
</li>
<li><h3 id="heading-building-a-global-e-commerce-empire-anthos-multi-cloud-for-zero-downtime-scale-on-gcp-amp-awshttpswwwyoutubecomwatchv0m1wlvgqjck"><a target="_blank" href="https://www.youtube.com/watch?v=0M1wLVGQJck">Building a Global E-commerce Empire - Anthos Multi Cloud for Zero Downtime Scale on GCP &amp; AWS</a></h3>
</li>
</ul>
<p>#multicloud #cloudcomputing #innovation #automation #devops</p>
]]></content:encoded></item><item><title><![CDATA[Establish a Secure Connection between GitHub Actions and Google Cloud Platform (GCP) using Workload Identity Federation.]]></title><description><![CDATA[GitHub Actions makes it easy to automate your software development workflows, including building, testing, and deploying your applications to various environments. When deploying to Google Cloud Platform (GCP), you typically need to authenticate with...]]></description><link>https://merlin.microworka.com/establish-a-secure-connection-between-github-actions-and-google-cloud-platform-gcp-using-workload-identity-federation</link><guid isPermaLink="true">https://merlin.microworka.com/establish-a-secure-connection-between-github-actions-and-google-cloud-platform-gcp-using-workload-identity-federation</guid><category><![CDATA[GitHub]]></category><category><![CDATA[Devops]]></category><category><![CDATA[GitHub Actions]]></category><category><![CDATA[google cloud]]></category><category><![CDATA[DevSecOps]]></category><category><![CDATA[workload-identity-federation]]></category><category><![CDATA[GCP]]></category><dc:creator><![CDATA[Merlin Saha]]></dc:creator><pubDate>Mon, 03 Jun 2024 19:50:40 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1717442814892/4c92ab6e-2bc7-4078-9382-0f2a533f30d9.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>GitHub Actions makes it easy to automate your software development workflows, including building, testing, and deploying your applications to various environments. When deploying to Google Cloud Platform (GCP), you typically need to authenticate with GCP using a service account key. However, managing and rotating these keys can be cumbersome and pose a security risk if the keys are accidentally exposed.</p>
<p>Workload Identity Federation is a feature in GCP that allows you to securely authenticate to GCP services without needing to manage service account keys. Instead, you can use a Google Cloud identity (such as a service account) to grant access to your GitHub Actions workflow, and GCP will automatically authenticate the workflow based on the configured identity.</p>
<p>Here's how to set it up:</p>
<ol>
<li><p><strong>Create Workload Identity Federation Pool and Provider in GCP</strong></p>
<ul>
<li><p>Go to the <a target="_blank" href="https://console.cloud.google.com/iam-admin">IAM &amp; Admin</a> section and click on <a target="_blank" href="https://console.cloud.google.com/iam-admin/workload-identity-pools">Workload Identity Federation</a> in the GCP Console and click on "<strong>CREATE POOL</strong>".</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715535430691/1e6a17c6-91af-424e-9187-f69af6a87f02.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Create an identity pool: Specify the required information and click on "<strong>CONTINUE</strong>"</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715535462414/b83a912a-aa28-4d82-a94f-fa4f705bccc8.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Add a provider to pool: select <code>OpenID Connect (OIDC)</code> in <strong>Select a provider</strong> and add <code>https://token.actions.githubusercontent.com</code> in <strong>Issuer (URL)</strong></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715535763974/917849ea-874d-40e7-b896-5cd550b03a9d.png" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>Configure provider attributes</strong> as like this:</p>
<pre><code class="lang-yaml">  <span class="hljs-string">google.subject</span> <span class="hljs-string">=</span> <span class="hljs-string">assertion.sub</span>
  <span class="hljs-string">attribute.repository</span> <span class="hljs-string">=</span> <span class="hljs-string">assertion.repository</span>
</code></pre>
</li>
<li><p>Save the pool and provider</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715536017307/12ae8424-52cb-454b-91d9-b729010d5078.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
</li>
<li><p><strong>Create a Google Cloud service account</strong></p>
</li>
</ol>
<ul>
<li><p>Go to the <a target="_blank" href="https://console.cloud.google.com/iam-admin/serviceaccounts">Service Accounts section</a> in the GCP Console.</p>
</li>
<li><p>Click "<strong>Create Service Account.</strong>"</p>
</li>
<li><p>Provide a name for the service account (e.g., "github-actions-sa").</p>
</li>
<li><p>Optionally, you can add a description.</p>
</li>
<li><p>Click "<strong>Create and Continue.</strong>"</p>
</li>
<li><p>On the next screen, skip granting roles for now, and click "<strong>Done</strong>."</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715536254528/77eab40d-fbbb-4eb4-9a99-7c84dcad2029.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
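<p>If you prefer infrastructure as code over the console, steps 1 and 2 above can also be expressed in Terraform. A hedged sketch using the same IDs as this walkthrough (project settings and role grants omitted for brevity):</p>

```yaml
resource "google_iam_workload_identity_pool" "github" {
  workload_identity_pool_id = "github-actions"
}

resource "google_iam_workload_identity_pool_provider" "github" {
  workload_identity_pool_id          = google_iam_workload_identity_pool.github.workload_identity_pool_id
  workload_identity_pool_provider_id = "github-actions-provider"

  oidc {
    issuer_uri = "https://token.actions.githubusercontent.com"
  }

  attribute_mapping = {
    "google.subject"       = "assertion.sub"
    "attribute.repository" = "assertion.repository"
  }
}

resource "google_service_account" "github_actions" {
  account_id = "github-actions-sa"
}
```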
<ol start="3">
<li><p><strong>Grant required role to Google Cloud service account</strong></p>
<pre><code class="lang-yaml"> <span class="hljs-string">roles/iam.workloadIdentityUser</span>

 <span class="hljs-comment"># Add any other role you need for your specific task, e.g.:</span>
 <span class="hljs-string">roles/artifactregistry.writer</span>
</code></pre>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715536443745/d595d3b5-2623-4a30-a5c6-a7541b4a4293.png" alt class="image--center mx-auto" /></p>
<p> Click on "<strong>DONE</strong>"</p>
</li>
<li><p><strong>Update</strong> <a target="_blank" href="https://console.cloud.google.com/iam-admin/workload-identity-pools">Workload Identity Federation</a> to add the service account <a target="_blank" href="mailto:github-actions-sa@hand-on-lab-404211.iam.gserviceaccount.com">github-actions-sa@&lt;project_id&gt;.iam.gserviceaccount.com</a></p>
<p> Click on GitHub Actions Display Name</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715536789356/be8b5ac4-da9d-4a46-9b79-aaf81ea9f576.png" alt class="image--center mx-auto" /></p>
<p> And Click <strong>Grant Access</strong></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715536957930/e126d8fc-e892-4dab-abd9-70d706dc261c.png" alt class="image--center mx-auto" /></p>
<p> Add your SA email</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715537138569/bc5664b8-12e0-487e-8cfa-4c8f52fba293.png" alt class="image--center mx-auto" /></p>
<p> Add your GitHub Repository like this: <code>github_account/github_repository</code> and Click on "<strong>SAVE</strong>" in the popup page, click on <strong>DISMISS</strong></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715537251607/c65c6a66-8443-401f-85ff-d7ac961ba7e2.png" alt class="image--center mx-auto" /></p>
<p> Your provider <strong>CONNECTED SERVICE ACCOUNTS</strong> should look like this</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715537654080/6eec0b47-4345-45dc-a580-e6644f26f0c2.png" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>Use Workload Identity Federation in your GitHub Actions workflow</strong></p>
<p> Copy the Pool ID and Provider ID from <a target="_blank" href="https://console.cloud.google.com/iam-admin/workload-identity-pools">Workload Identity Federation</a> — in this case <code>github-actions</code> and <code>github-actions-provider</code></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715537826490/99198a17-15af-4aac-872d-9fa15e7f80b3.png" alt class="image--center mx-auto" /></p>
</li>
</ol>
<ul>
<li><p>In your GitHub repository, create a new workflow file (e.g., <code>.github/workflows/upload.yml</code>).</p>
</li>
<li><p>In the workflow file, add the following step to authenticate with GCP using Workload Identity Federation: change <code>&lt;GCP_PROJECT_NUMBER&gt;</code> and <code>&lt;GCP_PROJECT_ID&gt;</code></p>
<pre><code class="lang-yaml">        <span class="hljs-bullet">-</span> <span class="hljs-attr">id:</span> <span class="hljs-string">'auth'</span>
          <span class="hljs-attr">name:</span> <span class="hljs-string">'Authenticate Google Cloud'</span>
          <span class="hljs-attr">uses:</span> <span class="hljs-string">'google-github-actions/auth@v1'</span>
          <span class="hljs-attr">with:</span>
            <span class="hljs-attr">create_credentials_file:</span> <span class="hljs-literal">true</span>
            <span class="hljs-attr">workload_identity_provider:</span> <span class="hljs-string">'projects/&lt;GCP_PROJECT_NUMBER&gt;/locations/global/workloadIdentityPools/github-actions/providers/github-actions-provider'</span>
            <span class="hljs-attr">service_account:</span> <span class="hljs-string">'github-actions-sa@&lt;GCP_PROJECT_ID&gt;.iam.gserviceaccount.com'</span>
</code></pre>
<p>  Replace the <code>workload_identity_provider</code> value with the one you configured in step 1, and the <code>service_account</code> value with the email address of the service account you created in step 2.</p>
</li>
<li><p>After the authentication step, you can use the authenticated context to interact with GCP services, such as deploying to Cloud Run, uploading files to Cloud Storage, or running commands on Compute Engine instances</p>
</li>
</ul>
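<p>Once the auth step succeeds, later steps in the same job run with the federated credentials. As a hedged example (region, repository, and image names below are placeholders, not from this walkthrough), pushing an image to Artifact Registry could look like:</p>

```yaml
      - name: 'Set up Cloud SDK'
        uses: 'google-github-actions/setup-gcloud@v1'

      - name: 'Configure Docker for Artifact Registry'
        run: gcloud auth configure-docker us-central1-docker.pkg.dev --quiet

      - name: 'Build and push image'
        run: |
          docker build -t us-central1-docker.pkg.dev/<GCP_PROJECT_ID>/my-repo/my-app:${{ github.sha }} .
          docker push us-central1-docker.pkg.dev/<GCP_PROJECT_ID>/my-repo/my-app:${{ github.sha }}
```

<p>Note that the workflow (or job) also needs <code>permissions: id-token: write</code> so GitHub can issue the OIDC token that Workload Identity Federation exchanges for GCP credentials.</p>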
<p><strong>Full code example:</strong> <a target="_blank" href="https://github.com/devsahamerlin/employees-managements-api">https://github.com/devsahamerlin/employees-managements-api</a></p>
<p><strong>Congratulations</strong>! By using Workload Identity Federation, you no longer need to manage and rotate service account keys manually. GCP will automatically authenticate your GitHub Actions workflow based on the configured identity, providing a more secure and convenient way to authenticate to GCP services.</p>
<p><strong>Note</strong> that, this is a high-level overview of the process, and you may need to adjust the configuration based on your specific requirements and GCP project setup.</p>
]]></content:encoded></item><item><title><![CDATA[Streamlining DevSecOps with Canary and Parallel Deployments on GCP]]></title><description><![CDATA[In today's fast-paced software development landscape, DevSecOps principles have become increasingly crucial for organizations to deliver secure, high-quality applications rapidly. Google Cloud Platform (GCP) offers a robust set of services and tools ...]]></description><link>https://merlin.microworka.com/streamlining-devsecops-with-canary-and-parallel-deployments-on-gcp</link><guid isPermaLink="true">https://merlin.microworka.com/streamlining-devsecops-with-canary-and-parallel-deployments-on-gcp</guid><category><![CDATA[DevSecOps]]></category><category><![CDATA[Devops]]></category><category><![CDATA[google cloud]]></category><dc:creator><![CDATA[Merlin Saha]]></dc:creator><pubDate>Sun, 02 Jun 2024 18:46:33 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1717350392114/4e048e30-b5b9-475a-9981-8b179e393f42.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In today's fast-paced software development landscape, DevSecOps principles have become increasingly crucial for organizations to deliver secure, high-quality applications rapidly. Google Cloud Platform (GCP) offers a robust set of services and tools that enable seamless implementation of DevSecOps practices, including advanced deployment strategies like Canary and Parallel Deployments.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1717357638826/7c759638-e3ff-4bbf-ba62-c4df89ee4947.gif" alt class="image--center mx-auto" /></p>
<h3 id="heading-service-and-tools-used">Services and Tools Used</h3>
<p><strong>Cloud Source Repositories:</strong> a cloud-based source code management service provided by GCP. It offers private, secure Git repositories to store and version the source code of our projects. One drawback: it lacks pull/merge request support.</p>
<p><strong>Cloud Build:</strong> a continuous integration (CI) service provided by GCP. It lets you build, test, analyze, and deploy applications on GCP using build and deployment workflows that you configure to your needs.</p>
<p><strong>Cloud Build Trigger:</strong> an event trigger that automatically starts a build in Cloud Build in response to a specific event, such as a commit to a source repository or a new container image pushed to a registry.</p>
<p><strong>Artifact Registry:</strong> It's a cloud-based package storage service provided by GCP. It allows developers to store, manage and distribute packages such as JAR files, Python libraries and Docker images in a secure private repository.</p>
<p><strong>IAM:</strong> Identity and access management service to manage permissions and roles.</p>
<p><strong>Container Scanning API:</strong> It allows you to analyze container images stored in a private Artifact Registry repository to identify known security vulnerabilities. The service regularly scans the images stored in the repository to detect known vulnerabilities in the packages, libraries, and dependencies included in the container image.</p>
<p><strong>Container Analysis API:</strong> a GCP service that analyzes container images stored in Artifact Registry, Container Registry, or other compatible container registries.</p>
<p><strong>Binary Authorization:</strong> a GCP service that verifies and authorizes container images before they are deployed in a GCP environment.</p>
<p><strong>Google Kubernetes Engine (GKE):</strong> It's a cloud-based containerization service provided by Google Cloud Platform (GCP). It allows you to deploy, manage, and orchestrate Docker containers at scale on GCP.</p>
<p><strong>Backup for GKE (Google Kubernetes Engine):</strong> Backup for GKE is a data backup service for Kubernetes clusters running on GKE, allowing users to protect their applications from data loss and simplify migration between clusters.</p>
<p><strong>Cloud Deploy:</strong> It's a continuous deployment (CD) service that simplifies and automates the process of deploying applications on GCP.</p>
<p><strong>Cloud Functions:</strong> a serverless platform for running code on demand. Example: deploying email notification functions written in Node.js.</p>
<p><strong>Cloud Storage:</strong> Object storage service to store and manage files.</p>
<p><strong>Operations Suite:</strong> GCP's logging and monitoring toolset for managing logs and alerts. In this setup, Pub/Sub receives events published by Cloud Deploy, and Cloud Functions sends notification emails to teams as needed (notifications, approvals, canary step progress).</p>
<p><strong>Static IP Address:</strong> a fixed, persistent public IP address assigned to a resource hosted on Google Cloud Platform. It is typically used for resources such as virtual machine instances, load balancers, or VPN gateways that clients must reach at a stable, publicly accessible address. Example: we use one for the load balancer in each of our environments.</p>
<p><strong>Google Cloud Load Balancer:</strong> a load-distribution service for managing network traffic. Example: allowing external applications to reach our service through an external static IP address.</p>
<p><strong>Pub/Sub</strong>: it's an asynchronous messaging service offered by GCP. It allows applications to communicate with each other by publishing and subscribing messages to "topics", which can be thematic broadcast channels.</p>
<p><strong>KMS (Key Management Service):</strong> an encryption key management service offered by GCP. It lets you create, store, manage, and use encryption keys to protect data stored in the cloud. Example: storing the signing keys that Binary Authorization uses to verify container images before deployment.</p>
<p><strong>Compute Engine:</strong> Google Compute Engine (GCE) is a cloud computing service that lets you run virtual machines (VMs) on Google's infrastructure. Example: instead of using SonarCloud, we use a VM to host SonarQube.</p>
<p><strong>SonarQube:</strong> an open-source platform for code quality management and static code analysis. It lets development teams check source code quality, detect security issues, identify potential bugs, and measure code coverage.</p>
<p><strong>Ansible:</strong> Ansible is an open-source configuration management, deployment automation and system orchestration tool. It allows you to centrally manage the configuration and deployment of a large number of machines, whether physical, virtual or in the cloud;</p>
<p><strong>Docker Compose:</strong> Docker Compose is an open-source Docker container management tool. It allows you to define, configure and launch multiple Docker containers at the same time, using a YAML file to describe all the necessary services and configurations.</p>
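<p>The notification path described above (Cloud Deploy publishing events to Pub/Sub, which trigger a Cloud Function that emails the team) can be sketched as a small message handler. This is a hedged illustration: the article's functions use Node.js, while Python is used here for brevity, and the <code>Action</code> and <code>ReleaseId</code> attribute names are assumptions to verify against the actual messages your pipeline emits:</p>

```python
import base64
import json


def deploy_notification_subject(event):
    """Build an email subject from a Pub/Sub message published by Cloud Deploy.

    `event` mimics the dict a Pub/Sub-triggered Cloud Function receives.
    The `Action` and `ReleaseId` attribute names are illustrative only.
    """
    attrs = event.get("attributes") or {}
    payload = {}
    if event.get("data"):
        # Pub/Sub delivers the message body base64-encoded.
        payload = json.loads(base64.b64decode(event["data"]).decode("utf-8"))
    action = attrs.get("Action", "unknown")
    release = attrs.get("ReleaseId") or payload.get("release", "n/a")
    return f"Cloud Deploy {action}: release {release}"


if __name__ == "__main__":
    msg = {
        "attributes": {"Action": "Succeed", "ReleaseId": "rel-001"},
        "data": base64.b64encode(json.dumps({"release": "rel-001"}).encode()).decode(),
    }
    print(deploy_notification_subject(msg))  # Cloud Deploy Succeed: release rel-001
```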
<h3 id="heading-check-poc-video-here-in-french"><strong>Check PoC video here in French</strong></h3>
<iframe width="auto" height="auto" src="https://www.youtube.com/embed/JIJEAFsJ1bw?si=UDR_TDT0t7imLiM6"></iframe>

<p>These strategies, combined with GCP's auto-scaling, load balancing, and traffic splitting capabilities, ensure a smooth, low-risk transition to newer application versions.</p>
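<p>As a hedged sketch (target names, service names, and traffic percentages below are placeholders to adapt), a Cloud Deploy delivery pipeline with a canary strategy can be declared in a <code>clouddeploy.yaml</code> along these lines:</p>

```yaml
# Sketch of a Cloud Deploy delivery pipeline using a canary strategy.
# Target and service names, and the traffic percentages, are placeholders.
apiVersion: deploy.cloud.google.com/v1
kind: DeliveryPipeline
metadata:
  name: my-app-pipeline
serialPipeline:
  stages:
    - targetId: staging            # deployed first, with the standard strategy
    - targetId: production
      strategy:
        canary:
          runtimeConfig:
            kubernetes:
              serviceNetworking:
                service: my-app-service
                deployment: my-app-deployment
          canaryDeployment:
            percentages: [25, 50]  # traffic shifted in steps before full rollout
            verify: false
```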
<p>By embracing DevSecOps principles and leveraging GCP's powerful services, organizations can achieve faster time-to-market, improved security and compliance, and enhanced application resilience, enabling them to stay ahead in the competitive digital landscape.</p>
]]></content:encoded></item><item><title><![CDATA[How to Safely Link Terraform Cloud and Google Cloud Platform via Workload Identity Federation]]></title><description><![CDATA[Terraform Cloud is a great service for managing Terraform configurations and applying them to provision infrastructure across different cloud providers, including Google Cloud Platform (GCP). However, securely authenticating Terraform Cloud to GCP ca...]]></description><link>https://merlin.microworka.com/how-to-safely-link-terraform-cloud-and-google-cloud-platform-via-workload-identity-federation</link><guid isPermaLink="true">https://merlin.microworka.com/how-to-safely-link-terraform-cloud-and-google-cloud-platform-via-workload-identity-federation</guid><category><![CDATA[google cloud]]></category><category><![CDATA[Terraform]]></category><category><![CDATA[Infrastructure as code]]></category><category><![CDATA[terraform-cloud]]></category><category><![CDATA[#IaC]]></category><category><![CDATA[automation]]></category><category><![CDATA[Cloud]]></category><category><![CDATA[Cloud Computing]]></category><dc:creator><![CDATA[Merlin Saha]]></dc:creator><pubDate>Sun, 26 May 2024 11:07:15 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1715538543785/4ccd881f-3ae8-4065-baa1-6de2f483d133.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Terraform Cloud is a great service for managing Terraform configurations and applying them to provision infrastructure across different cloud providers, including Google Cloud Platform (GCP). However, securely authenticating Terraform Cloud to GCP can be a challenge. Workload Identity Federation is a new feature in GCP that allows you to safely authenticate access to GCP resources without using long-lived credentials like service account keys. This post will walk through how to set up Workload Identity Federation between Terraform Cloud and GCP.</p>
<p><strong>If you prefer French, check PoC video here</strong></p>
<iframe width="auto" height="auto" src="https://www.youtube.com/embed/ebV8VeNdscU?si=uPu4sJYCobh9qjQE"></iframe>

<p><strong>Prerequisites:</strong></p>
<ul>
<li><p>A GCP project</p>
</li>
<li><p>A Terraform Cloud account and workspace</p>
</li>
</ul>
<p><strong>Step 1: Enable Required GCP APIs.</strong> First, enable the required APIs in your GCP project:</p>
<pre><code class="lang-yaml"><span class="hljs-string">gcloud</span> <span class="hljs-string">services</span> <span class="hljs-string">enable</span> <span class="hljs-string">iamcredentials.googleapis.com</span>
<span class="hljs-string">gcloud</span> <span class="hljs-string">services</span> <span class="hljs-string">enable</span> <span class="hljs-string">cloudresourcemanager.googleapis.com</span>
</code></pre>
<p><strong>Step 2: Create a Google Cloud Workload Identity Pool.</strong> A workload identity pool is a collection of workloads that share the same identity and access policy. Create one with:</p>
<pre><code class="lang-yaml"><span class="hljs-string">gcloud</span> <span class="hljs-string">iam</span> <span class="hljs-string">workload-identity-pools</span> <span class="hljs-string">create</span> <span class="hljs-string">tfc-wif-pool</span> <span class="hljs-string">\</span>
  <span class="hljs-string">--project=&lt;PROJECT_ID&gt;</span> <span class="hljs-string">\</span>
  <span class="hljs-string">--location=global</span>
</code></pre>
<p><strong>Step 3: Create a Google Cloud Workload Identity Pool Provider.</strong> This links your external identity provider (in this case Terraform Cloud) to the workload identity pool:</p>
<pre><code class="lang-yaml"><span class="hljs-string">gcloud</span> <span class="hljs-string">iam</span> <span class="hljs-string">workload-identity-pools</span> <span class="hljs-string">providers</span> <span class="hljs-string">create-oidc</span> <span class="hljs-string">tfc-wif-provider</span> <span class="hljs-string">\</span>
  <span class="hljs-string">--project=&lt;PROJECT_ID&gt;</span> <span class="hljs-string">\</span>
  <span class="hljs-string">--location=global</span> <span class="hljs-string">\</span>
  <span class="hljs-string">--workload-identity-pool=tfc-wif-pool</span> <span class="hljs-string">\</span>
  <span class="hljs-string">--issuer-uri=https://app.terraform.io</span> <span class="hljs-string">\</span>
  <span class="hljs-string">--attribute-mapping="google.subject=assertion.terraform_workspace_id"</span>
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1716717334229/2814699d-d72b-4035-8eb6-41e63fa6998e.png" alt class="image--center mx-auto" /></p>
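<p>As an aside, if you later prefer to manage the pool and provider with Terraform itself (bootstrapped once with other credentials), a rough equivalent of Steps 2 and 3 looks like this. This is a sketch: the resource names mirror the commands above, but check the attributes against the current <code>google</code> provider documentation:</p>

```hcl
# Terraform equivalent of the two gcloud commands above (sketch).
resource "google_iam_workload_identity_pool" "tfc" {
  workload_identity_pool_id = "tfc-wif-pool"
}

resource "google_iam_workload_identity_pool_provider" "tfc" {
  workload_identity_pool_id          = google_iam_workload_identity_pool.tfc.workload_identity_pool_id
  workload_identity_pool_provider_id = "tfc-wif-provider"

  attribute_mapping = {
    "google.subject" = "assertion.terraform_workspace_id"
  }

  oidc {
    issuer_uri = "https://app.terraform.io"
  }
}
```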
<p><strong>Step 4: Create a Google Service Account for Terraform.</strong> This represents the identity that Terraform will use:</p>
<pre><code class="lang-yaml"><span class="hljs-string">gcloud</span> <span class="hljs-string">iam</span> <span class="hljs-string">service-accounts</span> <span class="hljs-string">create</span> <span class="hljs-string">tfc-wif-sa</span> <span class="hljs-string">\</span>
  <span class="hljs-string">--project=&lt;PROJECT_ID&gt;</span>
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1716717417185/deea212c-27a2-429b-b941-68d4c67d7688.png" alt class="image--center mx-auto" /></p>
<p><strong>Step 5: Allow the Service Account to use the Workload Identity Pool:</strong></p>
<pre><code class="lang-yaml"><span class="hljs-string">gcloud</span> <span class="hljs-string">projects</span> <span class="hljs-string">add-iam-policy-binding</span> <span class="hljs-string">&lt;PROJECT_ID&gt;</span> <span class="hljs-string">\</span>
    <span class="hljs-string">--member</span> <span class="hljs-string">serviceAccount:tfc-wif-sa@&lt;PROJECT_ID&gt;.iam.gserviceaccount.com</span> <span class="hljs-string">\</span>
    <span class="hljs-string">--role</span> <span class="hljs-string">roles/iam.workloadIdentityUser</span>
</code></pre>
<p><strong>Step 6: Add Permissions to the Service Account.</strong> Grant the service account any permissions it needs, such as the ability to create and manage Compute Engine resources:</p>
<pre><code class="lang-yaml"><span class="hljs-string">gcloud</span> <span class="hljs-string">projects</span> <span class="hljs-string">add-iam-policy-binding</span> <span class="hljs-string">&lt;PROJECT_ID&gt;</span> <span class="hljs-string">\</span>
  <span class="hljs-string">--member="serviceAccount:tfc-wif-sa@&lt;PROJECT_ID&gt;.iam.gserviceaccount.com"</span> <span class="hljs-string">\</span>
  <span class="hljs-string">--role="roles/compute.admin"</span>
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1716717578432/ad6f03b1-8e73-4afa-8e96-4e47f552cd1f.png" alt class="image--center mx-auto" /></p>
<p><strong>Step 7: Configure Terraform Cloud.</strong> In your Terraform Cloud workspace:</p>
<ol>
<li><p>Go to "Variables" and create the following environment variables, replacing <code>PROJECT_NUMBER</code> and <code>PROJECT_ID</code> with your own values:</p>
<pre><code class="lang-yaml"> <span class="hljs-string">TFC_GCP_PROJECT_NUMBER</span> <span class="hljs-string">=</span> <span class="hljs-string">&lt;PROJECT_NUMBER&gt;</span> 
 <span class="hljs-string">TFC_GCP_PROVIDER_AUTH</span> <span class="hljs-string">=</span> <span class="hljs-literal">true</span>    
 <span class="hljs-string">TFC_GCP_RUN_SERVICE_ACCOUNT_EMAIL</span> <span class="hljs-string">=</span> <span class="hljs-string">tfc-wif-sa@&lt;PROJECT_ID&gt;.iam.gserviceaccount.com</span>
 <span class="hljs-string">TFC_GCP_WORKLOAD_POOL_ID</span> <span class="hljs-string">=</span> <span class="hljs-string">tfc-wif-pool</span>
 <span class="hljs-string">TFC_GCP_WORKLOAD_PROVIDER_ID</span> <span class="hljs-string">=</span> <span class="hljs-string">tfc-wif-provider</span>
</code></pre>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1716717634162/9fa76915-8724-4dee-8b8c-4f985148acf4.png" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>Update </strong><a target="_blank" href="https://console.cloud.google.com/iam-admin/workload-identity-pools">Workload Identity Federation</a> to add the service account <strong>tfc-wif-sa@&lt;PROJECT_ID&gt;.</strong><a target="_blank" href="http://iam.gserviceaccount.com"><strong>iam.gserviceaccount.com</strong></a></p>
<p> Click on <strong>tfc-wif-pool</strong> Display Name</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1716717874949/1d2f806a-3aa6-4fe7-81d2-8d1d89233570.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>And Click <strong>Grant Access</strong></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1716717922397/654b1581-a898-458f-97ea-8cef9fb671ce.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Add your SA email</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715543130130/666dfd14-0cac-4041-bc42-3bf7c81e80a0.png" alt class="image--center mx-auto" /></p>
<p> Add your <strong>Terraform cloud Workspace ID</strong> like this:</p>
<p> Copy your workspace ID, <strong>ws-</strong>xSSKSSSSSS</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1716718030273/e803b862-8ab8-45b6-b292-6d10d33f68e0.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Update the <code>subject</code> with the workspace ID and click <strong>SAVE</strong>; in the popup, click <strong>DISMISS</strong></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1716718080974/7c884182-bdee-4fdd-b265-5b745d96bb4c.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Your provider <strong>CONNECTED SERVICE ACCOUNTS</strong> should look like this</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1716718133600/148cb6d0-5a64-4ac2-83d6-dadb53101290.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Create a Terraform file <code>main.tf</code>, add the <code>google</code> provider block, and add a resource to create a <code>Compute Engine Instance</code>:</p>
</li>
</ol>
<pre><code class="lang-yaml"><span class="hljs-string">terraform</span> {
  <span class="hljs-string">cloud</span> {
    <span class="hljs-string">organization</span> <span class="hljs-string">=</span> <span class="hljs-string">"change_me"</span> <span class="hljs-comment"># change this</span>

    <span class="hljs-string">workspaces</span> {
      <span class="hljs-string">name</span> <span class="hljs-string">=</span> <span class="hljs-string">"tfc-gcp-wif"</span>
    }
  }
}

<span class="hljs-string">provider</span> <span class="hljs-string">"google"</span> {
  <span class="hljs-string">project</span> <span class="hljs-string">=</span> <span class="hljs-string">var.gcp_project_id</span>
  <span class="hljs-string">region</span>  <span class="hljs-string">=</span> <span class="hljs-string">var.gcp_region</span>
  <span class="hljs-string">zone</span>    <span class="hljs-string">=</span> <span class="hljs-string">var.gcp_zone</span>
}

<span class="hljs-string">variable</span> <span class="hljs-string">"gcp_project_id"</span> {}

<span class="hljs-string">variable</span> <span class="hljs-string">"gcp_region"</span> {
  <span class="hljs-string">default</span> <span class="hljs-string">=</span> <span class="hljs-string">"us-east4"</span>
}

<span class="hljs-string">variable</span> <span class="hljs-string">"gcp_zone"</span> {
  <span class="hljs-string">default</span> <span class="hljs-string">=</span> <span class="hljs-string">"us-east4-c"</span>
}

<span class="hljs-string">resource</span> <span class="hljs-string">"google_compute_instance"</span> <span class="hljs-string">"test-instance"</span> {
  <span class="hljs-string">name</span>         <span class="hljs-string">=</span> <span class="hljs-string">"test-instance"</span>
  <span class="hljs-string">machine_type</span> <span class="hljs-string">=</span> <span class="hljs-string">"n1-standard-1"</span>
  <span class="hljs-string">zone</span>         <span class="hljs-string">=</span> <span class="hljs-string">var.gcp_zone</span>

  <span class="hljs-string">boot_disk</span> {
    <span class="hljs-string">initialize_params</span> {
      <span class="hljs-string">image</span> <span class="hljs-string">=</span> <span class="hljs-string">"centos-cloud/centos-7"</span>
      <span class="hljs-string">size</span> <span class="hljs-string">=</span> <span class="hljs-number">20</span>
    }
  }

  <span class="hljs-string">network_interface</span> {
    <span class="hljs-string">subnetwork</span> <span class="hljs-string">=</span> <span class="hljs-string">"default"</span>
  }
}
</code></pre>
<p>Run <strong>terraform init</strong>, <strong>terraform plan</strong> and <strong>terraform apply --auto-approve</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1716718315091/b970e37a-8592-493a-9e89-e52713275187.png" alt class="image--center mx-auto" /></p>
<p>Congratulations, you did it!</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1716718223474/44c37980-2ac9-4c91-9700-420eabaf0879.png" alt class="image--center mx-auto" /></p>
<p>That's it! Terraform Cloud can now securely provision resources in your GCP project using Workload Identity Federation without needing to manage long-lived service account keys. The service account permissions can be scoped as needed, and the external identity is validated by GCP for each run.</p>
<p><strong>Don't forget to destroy your resources!</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1716718274208/b24389bc-490a-4403-8c65-27666f59ccda.png" alt class="image--center mx-auto" /></p>
]]></content:encoded></item><item><title><![CDATA[Easy Steps to Configure MongoDB Atlas with Terraform and Terraform Cloud]]></title><description><![CDATA[We'll walk through the process of creating MongoDB Atlas resources using Terraform and Terraform Cloud. Terraform is an open-source infrastructure as code (IaC) tool that allows you to provision and manage cloud resources in a declarative way. MongoD...]]></description><link>https://merlin.microworka.com/easy-steps-to-configure-mongodb-atlas-with-terraform-and-terraform-cloud</link><guid isPermaLink="true">https://merlin.microworka.com/easy-steps-to-configure-mongodb-atlas-with-terraform-and-terraform-cloud</guid><category><![CDATA[Terraform]]></category><category><![CDATA[terraform-cloud]]></category><category><![CDATA[MongoDB]]></category><category><![CDATA[MongoDB Atlas]]></category><category><![CDATA[#IaC]]></category><category><![CDATA[Infrastructure as code]]></category><dc:creator><![CDATA[Merlin Saha]]></dc:creator><pubDate>Mon, 20 May 2024 08:43:22 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1716110995331/db23cd5e-918a-440e-9af2-df95cd1458fb.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>We'll walk through the process of creating MongoDB Atlas resources using Terraform and Terraform Cloud. Terraform is an open-source infrastructure as code (IaC) tool that allows you to provision and manage cloud resources in a declarative way. MongoDB Atlas is a fully-managed cloud database service provided by MongoDB. By combining Terraform and Terraform Cloud, we can streamline the process of provisioning and managing our MongoDB Atlas resources.</p>
<p><strong>If you prefer French, you can watch the video version here</strong></p>
<iframe width="auto" height="auto" src="https://www.youtube.com/embed/GbGBIU97sCY?si=8NVJhwAlenAMxdTs"></iframe>

<h3 id="heading-step-1-set-up-terraform-cloud"><strong>Step 1: Set up Terraform Cloud</strong></h3>
<ol>
<li><p>Sign up for a Terraform Cloud account (<a target="_blank" href="https://app.terraform.io/signup/account">https://app.terraform.io/signup/account</a>)</p>
</li>
<li><p>Create a new organization</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715519171248/55c60926-8719-4077-a914-687d8c1d7f13.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Create a terraform Projects</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715519182572/1f5501c4-f908-4341-b98d-55c68d5f59dd.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Create a new workspace: choose the option that best fits your use case (read each option's description); here we choose the CLI-Driven Workflow</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715519315555/c60481af-cee6-443c-9b7f-e508a8d5943f.png" alt class="image--center mx-auto" /></p>
</li>
</ol>
<h3 id="heading-step-2-set-up-mongodb-atlas"><strong>Step 2: Set up MongoDB Atlas</strong></h3>
<ol>
<li><p>Sign up for a MongoDB Atlas account (<a target="_blank" href="https://account.mongodb.com/account/login">https://account.mongodb.com/account/login</a>)</p>
</li>
<li><p><a target="_blank" href="https://cloud.mongodb.com/v2#/preferences/organizations">Create a new organization</a></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715519926587/d61be403-e316-4dbc-be24-e596ee934839.png" alt class="image--center mx-auto" /></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715519931815/c54f6513-fd77-4866-b7bf-211b5ec5e3ae.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>In the MongoDB Atlas organization's "Settings" section, copy the Organization ID</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715520582680/b449910b-a1db-46e0-a038-e8ee76c48896.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>In the organization's "Access Manager" section, create an API key with a public and a private key</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715520878219/699cb652-41e7-4eb5-913d-43a99fcc1a3e.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Provide the required permission to the API Key</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715520986337/98d339b6-d9a3-4237-8b08-45ae111dc8cd.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Allow your IP CIDR to access MongoDB Atlas <code>your-ip/32</code></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715528953307/2e82c8e8-1c1d-43c5-be64-415693740c62.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Copy and save the API public and private keys; we will use them in Terraform Cloud.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715529036514/738d5c4f-3507-4e38-93e9-bbcfaff861d5.png" alt class="image--center mx-auto" /></p>
</li>
</ol>
<h3 id="heading-step-3-integrate-mongogb-atlas-with-terraform-cloud"><strong>Step 3: Integrate MongoDB Atlas with Terraform Cloud</strong></h3>
<p>Configure variables and environment variables in Terraform Cloud, with your MongoDB Atlas credentials (API key, Organization ID, etc.)</p>
<ul>
<li><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715524844203/744321c4-51b1-46ee-a85a-020888d4762d.png" alt class="image--center mx-auto" /></li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715524963298/57aa3a71-ede9-41f8-99f5-19b15e565b3e.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-step-4-configure-terraform-for-mongodb-atlas"><strong>Step 4: Configure Terraform for MongoDB Atlas</strong></h3>
<ol>
<li><p>Install Terraform on your local machine (<a target="_blank" href="https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli">https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli</a>)</p>
</li>
<li><p>Add the MongoDB Atlas provider to your Terraform configuration file (e.g., <code>main.tf</code>)</p>
<pre><code class="lang-yaml"> <span class="hljs-comment"># main.tf</span>
 <span class="hljs-string">provider</span> <span class="hljs-string">"mongodbatlas"</span> {
   <span class="hljs-string">public_key</span> <span class="hljs-string">=</span> <span class="hljs-string">var.mongoDb_Atlas_Public_Key</span>
   <span class="hljs-string">private_key</span>  <span class="hljs-string">=</span> <span class="hljs-string">var.mongoDb_Atlas_Private_Key</span>
 }

 <span class="hljs-comment"># variable.tf</span>
 <span class="hljs-string">variable</span> <span class="hljs-string">"mongoDb_Atlas_Public_Key"</span> {
     <span class="hljs-string">description</span> <span class="hljs-string">=</span> <span class="hljs-string">"Public Key to add on Terraform Cloud"</span>
 }

 <span class="hljs-string">variable</span> <span class="hljs-string">"mongoDb_Atlas_Private_Key"</span> {
     <span class="hljs-string">description</span> <span class="hljs-string">=</span> <span class="hljs-string">"Private Key to add on Terraform Cloud"</span>
 }
</code></pre>
</li>
<li><p>In your Terraform Cloud workspace, click on "Overview" and copy the Terraform code</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715525263494/375f652c-34ae-41bf-9a34-6a7239e2c45a.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Update <strong>main.tf</strong> and add the Terraform Cloud block with the code you copied</p>
<pre><code class="lang-yaml"> <span class="hljs-comment"># main.tf</span>
 <span class="hljs-string">terraform</span> {
   <span class="hljs-string">cloud</span> {
     <span class="hljs-string">organization</span> <span class="hljs-string">=</span> <span class="hljs-string">"devsahamerlin"</span>

     <span class="hljs-string">workspaces</span> {
       <span class="hljs-string">name</span> <span class="hljs-string">=</span> <span class="hljs-string">"mongodb-atlas"</span>
     }
   }
 }
</code></pre>
</li>
<li><p>Add the MongoDB Atlas provider version</p>
<pre><code class="lang-yaml"> <span class="hljs-string">terraform</span> {
   <span class="hljs-string">cloud</span> {
     <span class="hljs-string">organization</span> <span class="hljs-string">=</span> <span class="hljs-string">"devsahamerlin"</span>

     <span class="hljs-string">workspaces</span> {
       <span class="hljs-string">name</span> <span class="hljs-string">=</span> <span class="hljs-string">"mongodb-atlas"</span>
     }
   }
   <span class="hljs-string">required_providers</span> {
     <span class="hljs-string">mongodbatlas</span> <span class="hljs-string">=</span> {
       <span class="hljs-string">source</span>  <span class="hljs-string">=</span> <span class="hljs-string">"mongodb/mongodbatlas"</span>,
       <span class="hljs-string">version</span> <span class="hljs-string">=</span> <span class="hljs-string">"1.8.0"</span>
     }
   }
 }
</code></pre>
</li>
<li><p>Your complete <code>main.tf</code> at this step should look like this</p>
<pre><code class="lang-yaml"> <span class="hljs-comment"># main.tf</span>
 <span class="hljs-string">terraform</span> {
   <span class="hljs-string">cloud</span> {
     <span class="hljs-string">organization</span> <span class="hljs-string">=</span> <span class="hljs-string">"devsahamerlin"</span>

     <span class="hljs-string">workspaces</span> {
       <span class="hljs-string">name</span> <span class="hljs-string">=</span> <span class="hljs-string">"mongodb-atlas"</span>
     }
   }
   <span class="hljs-string">required_providers</span> {
     <span class="hljs-string">mongodbatlas</span> <span class="hljs-string">=</span> {
       <span class="hljs-string">source</span>  <span class="hljs-string">=</span> <span class="hljs-string">"mongodb/mongodbatlas"</span>,
       <span class="hljs-string">version</span> <span class="hljs-string">=</span> <span class="hljs-string">"1.8.0"</span>
     }
   }
 }

 <span class="hljs-string">provider</span> <span class="hljs-string">"mongodbatlas"</span> {
   <span class="hljs-string">public_key</span> <span class="hljs-string">=</span> <span class="hljs-string">var.mongoDb_Atlas_Public_Key</span>
   <span class="hljs-string">private_key</span>  <span class="hljs-string">=</span> <span class="hljs-string">var.mongoDb_Atlas_Private_Key</span>
 }

 <span class="hljs-string">variable</span> <span class="hljs-string">"mongoDb_Atlas_Public_Key"</span> {
     <span class="hljs-string">description</span> <span class="hljs-string">=</span> <span class="hljs-string">"Public Key to add on Terraform Cloud"</span>
 }

 <span class="hljs-string">variable</span> <span class="hljs-string">"mongoDb_Atlas_Private_Key"</span> {
     <span class="hljs-string">description</span> <span class="hljs-string">=</span> <span class="hljs-string">"Private Key to add on Terraform Cloud"</span>
 }
</code></pre>
</li>
<li><p>Run <code>terraform login</code> to connect to Terraform Cloud</p>
<pre><code class="lang-bash"> terraform login
</code></pre>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715526319136/9fa3623f-ec55-494a-ab5c-8f199289da7c.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Initialize a new Terraform configuration by running <code>terraform init</code></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715526481198/d0dc7f28-9098-4f31-9af0-edc1e5ce6277.png" alt class="image--center mx-auto" /></p>
</li>
</ol>
<h3 id="heading-step-5-define-mongodb-atlas-resources"><strong>Step 5: Define MongoDB Atlas Resources</strong></h3>
<ol>
<li><p>Use the Terraform MongoDB Atlas provider to define the resources you want to create (e.g., projects, clusters, database users, etc.)</p>
</li>
<li><p>Write the resource definitions in your Terraform configuration <code>main.tf</code> file(s)</p>
<pre><code class="lang-yaml"> <span class="hljs-string">resource</span> <span class="hljs-string">"mongodbatlas_project"</span> <span class="hljs-string">"project"</span> {
   <span class="hljs-string">name</span>   <span class="hljs-string">=</span> <span class="hljs-string">"meanstack"</span>
   <span class="hljs-string">org_id</span> <span class="hljs-string">=</span> <span class="hljs-string">var.mongogb_atlas_org_id</span>

   <span class="hljs-string">is_collect_database_specifics_statistics_enabled</span> <span class="hljs-string">=</span> <span class="hljs-literal">true</span>
   <span class="hljs-string">is_data_explorer_enabled</span>                         <span class="hljs-string">=</span> <span class="hljs-literal">true</span>
   <span class="hljs-string">is_performance_advisor_enabled</span>                   <span class="hljs-string">=</span> <span class="hljs-literal">true</span>
   <span class="hljs-string">is_realtime_performance_panel_enabled</span>            <span class="hljs-string">=</span> <span class="hljs-literal">true</span>
   <span class="hljs-string">is_schema_advisor_enabled</span>                        <span class="hljs-string">=</span> <span class="hljs-literal">true</span>
 }
</code></pre>
<p> We added a new variable, <code>mongogb_atlas_org_id</code>; let's set it in the Terraform Cloud workspace variables</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715528082987/baff10d0-62ae-41c6-8d8a-3071e2e970f4.png" alt class="image--center mx-auto" /></p>
</li>
</ol>
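<p>The project resource above can be extended with other Atlas resources. As a sketch (the cluster name, backing provider, and region below are illustrative, not from the original configuration), a free shared-tier cluster inside that project could look like this:</p>
<pre><code class="lang-hcl"># Hypothetical addition: an M0 (free shared-tier) cluster in the project above
resource "mongodbatlas_cluster" "cluster" {
  project_id = mongodbatlas_project.project.id
  name       = "meanstack-cluster"

  # Shared-tier clusters use the TENANT provider plus a backing cloud provider
  provider_name               = "TENANT"
  backing_provider_name       = "AWS"
  provider_region_name        = "US_EAST_1"
  provider_instance_size_name = "M0"
}
</code></pre>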
<h3 id="heading-step-4-plan-and-apply-the-changes"><strong>Step 6: Plan and Apply the Changes</strong></h3>
<p>Terraform provides two crucial commands for managing infrastructure changes: <code>plan</code> and <code>apply</code>. Your terraform main.tf file should look like this now:</p>
<pre><code class="lang-hcl"><span class="hljs-comment"># main.tf</span>
<span class="hljs-string">terraform</span> {
  <span class="hljs-string">cloud</span> {
    <span class="hljs-string">organization</span> <span class="hljs-string">=</span> <span class="hljs-string">"devsahamerlin"</span>

    <span class="hljs-string">workspaces</span> {
      <span class="hljs-string">name</span> <span class="hljs-string">=</span> <span class="hljs-string">"mongodb-atlas"</span>
    }
  }
  <span class="hljs-string">required_providers</span> {
    <span class="hljs-string">mongodbatlas</span> <span class="hljs-string">=</span> {
      <span class="hljs-string">source</span>  <span class="hljs-string">=</span> <span class="hljs-string">"mongodb/mongodbatlas"</span>
      <span class="hljs-string">version</span> <span class="hljs-string">=</span> <span class="hljs-string">"1.8.0"</span>
    }
  }
}

<span class="hljs-string">provider</span> <span class="hljs-string">"mongodbatlas"</span> {
  <span class="hljs-string">public_key</span> <span class="hljs-string">=</span> <span class="hljs-string">var.mongoDb_Atlas_Public_Key</span>
  <span class="hljs-string">private_key</span>  <span class="hljs-string">=</span> <span class="hljs-string">var.mongoDb_Atlas_Private_Key</span>
}

<span class="hljs-string">variable</span> <span class="hljs-string">"mongoDb_Atlas_Public_Key"</span> {
    <span class="hljs-string">description</span> <span class="hljs-string">=</span> <span class="hljs-string">"Public Key to add on Terraform Cloud"</span>
}

<span class="hljs-string">variable</span> <span class="hljs-string">"mongoDb_Atlas_Private_Key"</span> {
    <span class="hljs-string">description</span> <span class="hljs-string">=</span> <span class="hljs-string">"Private Key to add on Terraform Cloud"</span>
}

<span class="hljs-string">variable</span> <span class="hljs-string">"mongogb_atlas_org_id"</span> {
    <span class="hljs-string">description</span> <span class="hljs-string">=</span> <span class="hljs-string">"MongoDB Atlas organization ID"</span>
}

<span class="hljs-string">resource</span> <span class="hljs-string">"mongodbatlas_project"</span> <span class="hljs-string">"project"</span> {
  <span class="hljs-string">name</span>   <span class="hljs-string">=</span> <span class="hljs-string">"meanstack"</span>
  <span class="hljs-string">org_id</span> <span class="hljs-string">=</span> <span class="hljs-string">var.mongogb_atlas_org_id</span>

  <span class="hljs-string">is_collect_database_specifics_statistics_enabled</span> <span class="hljs-string">=</span> <span class="hljs-literal">true</span>
  <span class="hljs-string">is_data_explorer_enabled</span>                         <span class="hljs-string">=</span> <span class="hljs-literal">true</span>
  <span class="hljs-string">is_performance_advisor_enabled</span>                   <span class="hljs-string">=</span> <span class="hljs-literal">true</span>
  <span class="hljs-string">is_realtime_performance_panel_enabled</span>            <span class="hljs-string">=</span> <span class="hljs-literal">true</span>
  <span class="hljs-string">is_schema_advisor_enabled</span>                        <span class="hljs-string">=</span> <span class="hljs-literal">true</span>
}
</code></pre>
<ol>
<li><p>Run <code>terraform plan</code> to preview the changes Terraform will make</p>
<p> This command analyzes your Terraform configuration files and compares them to the existing infrastructure (if any) managed by Terraform. It then generates a detailed plan outlining the actions Terraform will take to achieve the desired state defined in your configuration. The plan will typically show:</p>
<ul>
<li><p>Resources to be created</p>
</li>
<li><p>Resources to be modified (attributes changing)</p>
</li>
<li><p>Resources to be destroyed (if present)</p>
</li>
</ul>
</li>
<li><p>Review the plan output and ensure everything looks correct</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715528250973/8f834eaa-058c-47b6-b67c-71c6da38fcc5.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Run <code>terraform apply</code> to create the MongoDB Atlas resources</p>
<p> Once you've reviewed and approved the plan generated by <code>terraform plan</code>, you can use the <code>apply</code> command to execute the planned actions. This will create, modify, or destroy resources as outlined in the plan.</p>
<p> <strong>Important points about </strong><code>apply</code>:</p>
<ul>
<li><p><strong>Irreversible changes:</strong> Applying the plan makes permanent changes to your infrastructure. Make sure you understand the plan and have backups before proceeding.</p>
</li>
<li><p><strong>Confirmation prompt:</strong> By default, <code>apply</code> will prompt you for confirmation before making any changes.</p>
</li>
</ul>
</li>
</ol>
<p>    <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715530347715/fc580c69-23ae-46b1-9cfd-2241232314ff.png" alt class="image--center mx-auto" /></p>
<p>    If you get an error like <code>Error: error creating Project: POST</code> <a target="_blank" href="https://cloud.mongodb.com/api/atlas/v1.0/groups"><code>https://cloud.mongodb.com/api/atlas/v1.0/groups</code></a><code>: 403 (request "IP_ADDRESS_NOT_ON_ACCESS_LIST") IP address your_ip is not allowed to access this resource</code> while applying, make sure your IP address is added to the API key's access list in MongoDB Atlas.</p>
<ol start="4">
<li><p>Verify that your resource is available on MongoDB Atlas</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715530418256/d88eea6e-ec2e-434c-937f-2b28005a761d.png" alt class="image--center mx-auto" /></p>
</li>
</ol>
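<p>After a successful apply, it can be convenient to expose the created project's ID for other configurations or scripts; a minimal sketch (the output name is arbitrary, not part of the original configuration):</p>
<pre><code class="lang-hcl"># Optional: surface the Atlas project ID via `terraform output`
output "atlas_project_id" {
  description = "ID of the MongoDB Atlas project managed by this configuration"
  value       = mongodbatlas_project.project.id
}
</code></pre>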
<h3 id="heading-step-5-destroy-mongodb-atlas-resources"><strong>Step 7: Destroy MongoDB Atlas Resources</strong></h3>
<p>After you've finished working with the MongoDB Atlas resources created using Terraform, you can destroy them to avoid incurring unnecessary costs. Here's how:</p>
<ol>
<li><p>Run <code>terraform plan -destroy</code> to see which resources will be destroyed</p>
</li>
<li><p>Review the plan output to ensure you're destroying the correct resources</p>
</li>
<li><p>Run <code>terraform destroy</code> to destroy all the MongoDB Atlas resources you created</p>
</li>
<li><p>Terraform will prompt you to confirm the destruction of resources</p>
</li>
<li><p>Type "yes" and press Enter to confirm</p>
</li>
</ol>
<p>It's important to note that the <code>terraform destroy</code> command will permanently delete all the resources defined in your Terraform configuration. Make sure to back up any important data before running this command.</p>
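<p>If a resource should survive an accidental <code>terraform destroy</code>, Terraform's <code>lifecycle</code> meta-argument can act as a guard; a sketch applied to the project resource from this walkthrough:</p>
<pre><code class="lang-hcl">resource "mongodbatlas_project" "project" {
  name   = "meanstack"
  org_id = var.mongogb_atlas_org_id

  # Terraform will refuse any plan that would delete this resource
  lifecycle {
    prevent_destroy = true
  }
}
</code></pre>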
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715530849837/4ad6ba50-f410-41f2-b400-5e333d1237c8.png" alt class="image--center mx-auto" /></p>
<p>You can also trigger a "Destroy" run from the user interface to destroy the resources managed by your workspace.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715530667966/41d964c8-907f-4ffe-b1bc-d8eb2c95d2f3.png" alt class="image--center mx-auto" /></p>
<p>Congratulations!</p>
<p>Using Terraform and Terraform Cloud for managing MongoDB Atlas resources offers several key benefits and advantages:</p>
<ol>
<li><p><strong>Infrastructure as Code</strong>: Terraform allows you to define your MongoDB Atlas resources as code, making it easier to manage, version, and collaborate on your infrastructure configurations.</p>
</li>
<li><p><strong>Version Control</strong>: By storing your Terraform configurations in a version control system like Git, you can track changes, revert to previous states, and collaborate with team members more effectively.</p>
</li>
<li><p><strong>Automated Workflows</strong>: Terraform Cloud enables automated workflows for provisioning and managing your MongoDB Atlas resources. You can trigger runs based on various events, such as code changes or scheduled intervals, streamlining the deployment process.</p>
</li>
<li><p><strong>Collaboration and Governance</strong>: Terraform Cloud provides features for team collaboration, including access controls, policy enforcement, and centralized management of Terraform configurations and state files.</p>
</li>
<li><p><strong>Consistency and Reproducibility</strong>: With Terraform, you can ensure consistent and reproducible deployments of your MongoDB Atlas resources across different environments (development, staging, production), reducing the risk of configuration drift.</p>
</li>
<li><p><strong>Resource Lifecycle Management</strong>: Terraform not only provisions resources but also manages their lifecycle. You can easily update or destroy resources as needed, ensuring efficient resource management and cost optimization.</p>
</li>
<li><p><strong>Multi-Cloud and Multi-Provider Support</strong>: While this blog post focuses on MongoDB Atlas, Terraform supports a wide range of cloud providers and services, making it a versatile tool for managing your entire infrastructure.</p>
</li>
</ol>
<p>By leveraging Terraform and Terraform Cloud for managing MongoDB Atlas resources, you can benefit from a streamlined, version-controlled, and collaborative approach to infrastructure provisioning and management. This not only improves efficiency and consistency but also enables better governance, compliance, and cost optimization for your MongoDB Atlas deployments.</p>
]]></content:encoded></item><item><title><![CDATA[Building a Secure CI/CD Pipeline on Oracle Cloud with DevSecOps Tools]]></title><description><![CDATA[Automated Secure CI/CD Pipeline for Oracle Cloud Infrastructure with DevSecOps Practices (Jenkins, OWASP Dependency Check, Trivy, SonarQube, VCN, Compartment, Security Group, Maven, GitHub, Docker, Docker Hub, ArgoCD, Kubernetes) Using Terraform, Ans...]]></description><link>https://merlin.microworka.com/building-a-secure-cicd-pipeline-on-oracle-cloud-with-devsecops-tools-for-free</link><guid isPermaLink="true">https://merlin.microworka.com/building-a-secure-cicd-pipeline-on-oracle-cloud-with-devsecops-tools-for-free</guid><category><![CDATA[Secure CI/CD Pipeline]]></category><category><![CDATA[Oracle Autonomous Database]]></category><category><![CDATA[DevSecOps]]></category><category><![CDATA[Oracle Cloud]]></category><category><![CDATA[Terraform]]></category><category><![CDATA[terraform-cloud]]></category><category><![CDATA[Jenkins]]></category><category><![CDATA[owasp]]></category><category><![CDATA[trivy]]></category><category><![CDATA[sonarqube]]></category><category><![CDATA[maven]]></category><category><![CDATA[GitHub]]></category><category><![CDATA[Docker]]></category><category><![CDATA[dockerhub]]></category><category><![CDATA[ArgoCD]]></category><dc:creator><![CDATA[Merlin Saha]]></dc:creator><pubDate>Sat, 18 May 2024 04:06:46 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1715975654933/b1f906ef-1706-48d2-9ea5-5c6c3e5e9e98.gif" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>Automated Secure CI/CD Pipeline for Oracle Cloud Infrastructure with DevSecOps Practices (Jenkins, OWASP Dependency Check, Trivy, SonarQube, VCN, Compartment, Security Group, Maven, GitHub, Docker, Docker Hub, ArgoCD, Kubernetes) Using Terraform, Ansible, and Bash Scripts</strong></p>
<h2 id="heading-introduction">Introduction</h2>
<p>We'll walk through setting up an automated, secure CI/CD pipeline for Oracle Cloud Infrastructure (OCI) using popular DevSecOps tools and techniques. We'll explore two deployment choices: one on Oracle Cloud Virtual Machines and the other on Kubernetes installed on Oracle Cloud Virtual Machines. Additionally, we'll use Terraform, Ansible, and Bash scripts to automate the provisioning and configuration of the required infrastructure and tools, and Argo CD for deployment.</p>
<h2 id="heading-technologies-used"><strong>Technologies Used:</strong></h2>
<ol>
<li><p><strong>Jenkins</strong>: An open-source automation server that facilitates Continuous Integration and Continuous Deployment (CI/CD) processes.</p>
</li>
<li><p><strong>OWASP Dependency-Check</strong>: A utility that identifies project dependencies and checks for known, publicly disclosed vulnerabilities.</p>
</li>
<li><p><strong>Trivy</strong>: A simple and comprehensive vulnerability scanner for containers and other artifacts, suitable for CI/CD pipelines.</p>
</li>
<li><p><strong>SonarQube</strong>: A platform for continuous code quality analysis, providing insights into code quality, security vulnerabilities, and technical debt.</p>
</li>
<li><p><strong>Oracle Autonomous Database</strong>: A fully managed, preconfigured database environment that is designed to provide high performance, high availability, and automated patching and upgrades.</p>
</li>
<li><p><strong>Virtual Cloud Network (VCN)</strong>: OCI's software-defined networking service that provides connectivity between resources in the cloud.</p>
</li>
<li><p><strong>Compartment</strong>: A logical container in OCI for isolating and controlling access to resources.</p>
</li>
<li><p><strong>Security Group</strong>: A virtual firewall that controls inbound and outbound traffic at the instance level in OCI.</p>
</li>
<li><p><strong>Maven</strong>: A build automation tool used primarily for Java projects, managing dependencies and building and packaging applications.</p>
</li>
<li><p><strong>GitHub</strong>: A web-based version control and collaboration platform for software development.</p>
</li>
<li><p><strong>Docker</strong>: An open-source platform for building, shipping, and running applications in containers.</p>
</li>
<li><p><strong>Docker Hub</strong>: A cloud-based registry service for storing, distributing, and managing Docker container images.</p>
</li>
<li><p><strong>ArgoCD</strong>: A declarative continuous delivery tool for Kubernetes applications.</p>
</li>
<li><p><strong>Kubernetes</strong>: An open-source container orchestration system for automating deployment, scaling, and management of containerized applications.</p>
</li>
<li><p><strong>Terraform</strong>: An infrastructure-as-code (IaC) tool that allows you to provision and manage cloud resources using a declarative configuration language.</p>
</li>
<li><p><strong>Ansible</strong>: An open-source IT automation tool that automates software provisioning, configuration management, and application deployment.</p>
</li>
<li><p><strong>Bash Scripts</strong>: Shell scripts written in the Bash scripting language for automating various tasks.</p>
</li>
</ol>
<h3 id="heading-check-poc-video-here-in-french"><strong>Check PoC video here in French</strong></h3>
<iframe width="100%" height="400" src="https://www.youtube.com/embed/mvBNh6scVHk?si=pYrDWw3QiaMcehR7"></iframe>

<h2 id="heading-step-1-provision-infrastructure-using-terraform"><strong>Step 1: Provision Infrastructure using Terraform</strong></h2>
<ol>
<li><p>Use Terraform to provision the required infrastructure resources in OCI, including:</p>
<ul>
<li><p>Virtual Cloud Network (VCN)</p>
</li>
<li><p>Subnets</p>
</li>
<li><p>Compute instances (for Jenkins, SonarQube, etc.)</p>
</li>
<li><p>Oracle Autonomous Database instance</p>
</li>
<li><p>Security groups</p>
</li>
<li><p>Compartments</p>
</li>
<li><p>Etc.</p>
</li>
</ul>
</li>
</ol>
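<p>The resources listed above can be sketched in Terraform with the OCI provider; the compartment OCID variable and CIDR blocks below are placeholders for illustration, not values from the project:</p>
<pre><code class="lang-hcl"># Minimal sketch of the networking layer; values are illustrative
resource "oci_core_vcn" "devsecops" {
  compartment_id = var.compartment_ocid
  cidr_block     = "10.0.0.0/16"
  display_name   = "devsecops-vcn"
}

resource "oci_core_subnet" "public" {
  compartment_id = var.compartment_ocid
  vcn_id         = oci_core_vcn.devsecops.id
  cidr_block     = "10.0.1.0/24"
  display_name   = "public-subnet"
}
</code></pre>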
<h2 id="heading-step-2-configure-jenkinsagent-and-master-on-oci-automically-with-ansible-playbook-and-bash-script"><strong>Step 2: Configure Jenkins (Agent and Master) on OCI automatically with an Ansible Playbook and Bash Script</strong></h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1716004136152/5c8bf4d8-dca3-4e4b-8d90-92b863ac0956.png" alt class="image--center mx-auto" /></p>
<ol>
<li><p>Install Java and Jenkins on the designated Compute instance.</p>
</li>
<li><p>Install Kubernetes</p>
</li>
<li><p>Install Trivy on the Jenkins Compute instance.</p>
</li>
<li><p>Install Docker</p>
</li>
<li><p>Configure the Jenkins master and agent.</p>
</li>
<li><p>Deploy ArgoCD</p>
</li>
</ol>
<h2 id="heading-step-3-integrate-owasp-dependency-check"><strong>Step 3: Integrate OWASP Dependency-Check</strong></h2>
<ol>
<li><p>Install OWASP Dependency-Check on the Jenkins Compute instance.</p>
</li>
<li><p>Create a Jenkins pipeline job and add a stage to run OWASP Dependency-Check.</p>
</li>
<li><p>Configure OWASP Dependency-Check to scan your application's dependencies for known vulnerabilities.</p>
</li>
</ol>
<h2 id="heading-step-4-integrate-trivy-for-container-image-scanning"><strong>Step 4: Integrate Trivy for Container Image Scanning</strong></h2>
<ol>
<li><p>Add a stage in your Jenkins pipeline to scan the built container image using Trivy.</p>
</li>
<li><p>Configure Trivy to scan for vulnerabilities in the container image and its dependencies.</p>
</li>
</ol>
<h2 id="heading-step-5-integrate-sonarqube-for-code-quality-analysis"><strong>Step 5: Integrate SonarQube for Code Quality Analysis</strong></h2>
<ol>
<li><p>Use Terraform to provision a SonarQube server on OCI.</p>
</li>
<li><p>Add a stage in your Jenkins pipeline to run SonarQube analysis on your code.</p>
</li>
<li><p>Configure SonarQube to analyze your code for bugs, code smells, and security vulnerabilities.</p>
</li>
</ol>
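<p>Provisioning the SonarQube server with Terraform could look roughly like this; the shape, OCID variables, and SSH key variable are assumptions for illustration:</p>
<pre><code class="lang-hcl"># Hypothetical compute instance to host SonarQube
resource "oci_core_instance" "sonarqube" {
  compartment_id      = var.compartment_ocid
  availability_domain = var.availability_domain
  shape               = "VM.Standard.E4.Flex"

  shape_config {
    ocpus         = 1
    memory_in_gbs = 8
  }

  source_details {
    source_type = "image"
    source_id   = var.ubuntu_image_ocid
  }

  create_vnic_details {
    subnet_id        = var.subnet_ocid
    assign_public_ip = true
  }

  metadata = {
    ssh_authorized_keys = var.ssh_public_key
  }
}
</code></pre>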
<h2 id="heading-step-6-configure-the-cicd-pipeline"><strong>Step 6: Configure the CI/CD Pipeline</strong></h2>
<ol>
<li><p>Define the stages in your Jenkins pipeline:</p>
<ul>
<li><p>Checkout code from GitHub</p>
</li>
<li><p>Build the application using Maven</p>
</li>
<li><p>Run OWASP Dependency-Check</p>
</li>
<li><p>Build and scan the container image with Trivy</p>
</li>
<li><p>Run SonarQube analysis</p>
</li>
<li><p>Include a stage for building and pushing the container image to Docker Hub.</p>
</li>
<li><p>Add a stage in the pipeline to trigger ArgoCD deployment after successful pipeline execution.</p>
</li>
<li><p>Etc.</p>
</li>
</ul>
</li>
<li><p>Configure Jenkins to trigger the pipeline automatically on code commits or manually as needed.</p>
</li>
</ol>
<h2 id="heading-step-6-deployment-choice-1-oracle-cloud-virtual-machines"><strong>Step 7: Deployment Choice 1, Oracle Cloud Virtual Machines</strong></h2>
<ol>
<li>Deploy the application on Oracle Cloud Virtual Machines</li>
</ol>
<h2 id="heading-step-7-deployment-choice-2-kubernetes-on-oracle-cloud-virtual-machines"><strong>Step 8: Deployment Choice 2, Kubernetes on Oracle Cloud Virtual Machines</strong></h2>
<ol>
<li>Configure ArgoCD to automatically deploy your application to the Kubernetes cluster based on the pipeline output.</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1716003933730/c99c9c14-64ee-484f-8b99-6eee59cda39f.png" alt class="image--center mx-auto" /></p>
<p><strong>Congratulations on reading this far!</strong></p>
<h2 id="heading-step-8-source-code"><strong>Step 9: Source Code</strong></h2>
<p>Source code here with a README file: <a target="_blank" href="https://github.com/devsahamerlin/iac-spring-boot-atp-jenkins-oci-devsecops">https://github.com/devsahamerlin/iac-spring-boot-atp-jenkins-oci-devsecops</a></p>
<p>Video Link here in French: <a target="_blank" href="https://www.youtube.com/watch?v=mvBNh6scVHk">https://www.youtube.com/watch?v=mvBNh6scVHk</a></p>
<p>By following the steps in the video, you will use Terraform, Ansible, and Bash scripts to automate the provisioning and configuration of an end-to-end secure CI/CD pipeline for Oracle Cloud Infrastructure. This pipeline incorporates DevSecOps practices using popular tools like Jenkins, OWASP Dependency-Check, Trivy, and SonarQube, ensuring that security is addressed throughout the entire software development lifecycle.</p>
<p>The deployment choices provided cater to different scenarios: the first choice deploys the application on Oracle Cloud Virtual Machines, while the second choice leverages Kubernetes for container orchestration and deployment on Oracle Cloud Virtual Machines. In both choices, all required resources, including Oracle Autonomous Database, are provisioned using the provided Terraform code.</p>
]]></content:encoded></item></channel></rss>