
Commit 505a49b

[APT-2073] Enable searching KB discussions (#555)
* [APT-2073] Enable searching KB discussions. This PR adds a set of "discussions" pages that are generated by the docs-sourcer and mirror the official Gruntwork Knowledge Base discussions. We also update our search configuration to index these new pages, making it possible to search the KB discussions from the docs site.
1 parent b03a991 commit 505a49b

File tree

399 files changed (+9167 −3 lines)


Diff for: docs/discussions/knowledge-base/10.mdx (+23 lines)

@@ -0,0 +1,23 @@
+---
+hide_table_of_contents: true
+hide_title: true
+custom_edit_url: null
+---
+
+import CenterLayout from "/src/components/CenterLayout"
+import GitHub from "/src/components/GitHub"
+
+<CenterLayout>
+<span className="searchCategory">Knowledge Base</span>
+<h1>The requested &#x7B;CPU|MEMORY&#x7D; configuration is above your limit</h1>
+<GitHub discussion={{"id":"MDEwOkRpc2N1c3Npb24zNTM5Mzg0","number":10,"author":{"login":"zackproser"},"title":"The requested {CPU|MEMORY} configuration is above your limit","body":"_A few users with EKS Fargate reference architectures began encountering the same or similar error messages the week of August 9th 2021:_\r\n\r\n`The requested CPU configuration is above your limit` or `The requested MEMORY configuration is above your limit`. ","bodyHTML":"<p dir=\"auto\"><em>A few users with EKS Fargate reference architectures began encountering the same or similar error messages the week of August 9th 2021:</em></p>\n<p dir=\"auto\"><code class=\"notranslate\">The requested CPU configuration is above your limit</code> or <code class=\"notranslate\">The requested MEMORY configuration is above your limit</code>.</p>","answer":{"body":"We've determined that AWS is applying an artificial ECS resource limit to newer accounts - and it appears to be set to 2 vCPU and 4GB of RAM. There are further clues pointing to this limit in [this thread](https://forum.numer.ai/t/some-troubleshooting-for-setting-up-numerai-compute/3623/8).\r\n\r\nThis is lower than what we tend to specify for the Elastic Deploy Runner - especially so that it can handle larger module deployments and the more resource intensive run-all command, which explains the error message. \r\n\r\nThis means that if you recently had a ref arch deployed that uses Fargate for the ECS Deploy Runner and are encountering this error, you have two main workarounds available right now:\r\n\r\n- Change your memory requests in your ECS Deploy Runner terragrunt.hcl configs to 4096 down from any value that is currently higher (container_memory = 4096). Ensure you are not requesting any more than 2 vCPU. Plan and apply your changes in each account to rebuild the EDR with lower memory requests and then resume building / planning from there.\r\n- Go through AWS Support. If you want to go through AWS Support - follow these instructions:\r\n\r\n1. Go to support center\r\n2. Create a new case in category “Service limit increase”\r\n3. Set “Limit type” to “Fargate”\r\n4. Set Request fields by choosing the region you operate in, the “Limit” field to “Concurrent Task Limit”, and set “New limit value” to 500\r\n5. In the use case description, say: \"NOTE: We actually want to increase the Fargate resource limits to allow tasks using 4 vCPUs and 8GB of RAM, but there is no option to do this. Right now, anything above 2 vCPUs and 4GB of RAM actually results in the error message “The requested MEMORY configuration is above your limit” even though the AWS documentation at https://docs.aws.amazon.com/AmazonECS/latest/developerguide/AWS_Fargate.html mentions that Fargate supports tasks up to 4vCPUs and 30GB of RAM. Please increase our limits to allow the full range of Fargate task resources.\"\r\n","bodyHTML":"<p dir=\"auto\">We've determined that AWS is applying an artificial ECS resource limit to newer accounts - and it appears to be set to 2 vCPU and 4GB of RAM. There are further clues pointing to this limit in <a href=\"https://forum.numer.ai/t/some-troubleshooting-for-setting-up-numerai-compute/3623/8\" rel=\"nofollow\">this thread</a>.</p>\n<p dir=\"auto\">This is lower than what we tend to specify for the Elastic Deploy Runner - especially so that it can handle larger module deployments and the more resource intensive run-all command, which explains the error message.</p>\n<p dir=\"auto\">This means that if you recently had a ref arch deployed that uses Fargate for the ECS Deploy Runner and are encountering this error, you have two main workarounds available right now:</p>\n<ul dir=\"auto\">\n<li>Change your memory requests in your ECS Deploy Runner terragrunt.hcl configs to 4096 down from any value that is currently higher (container_memory = 4096). Ensure you are not requesting any more than 2 vCPU. Plan and apply your changes in each account to rebuild the EDR with lower memory requests and then resume building / planning from there.</li>\n<li>Go through AWS Support. If you want to go through AWS Support - follow these instructions:</li>\n</ul>\n<ol dir=\"auto\">\n<li>Go to support center</li>\n<li>Create a new case in category “Service limit increase”</li>\n<li>Set “Limit type” to “Fargate”</li>\n<li>Set Request fields by choosing the region you operate in, the “Limit” field to “Concurrent Task Limit”, and set “New limit value” to 500</li>\n<li>In the use case description, say: \"NOTE: We actually want to increase the Fargate resource limits to allow tasks using 4 vCPUs and 8GB of RAM, but there is no option to do this. Right now, anything above 2 vCPUs and 4GB of RAM actually results in the error message “The requested MEMORY configuration is above your limit” even though the AWS documentation at <a href=\"https://docs.aws.amazon.com/AmazonECS/latest/developerguide/AWS_Fargate.html\" rel=\"nofollow\">https://docs.aws.amazon.com/AmazonECS/latest/developerguide/AWS_Fargate.html</a> mentions that Fargate supports tasks up to 4vCPUs and 30GB of RAM. Please increase our limits to allow the full range of Fargate task resources.\"</li>\n</ol>"}}} />
+
+</CenterLayout>
+
+
+<!-- ##DOCS-SOURCER-START
+{
+"sourcePlugin": "github-discussions",
+"hash": "e9a49fc43a5536f6dde48c2eee5fbc19"
+}
+##DOCS-SOURCER-END -->
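The first workaround in the discussion above (lowering the ECS Deploy Runner's resource requests to fit under the 2 vCPU / 4 GB account limit) could look roughly like the following in a terragrunt.hcl. This is a minimal sketch, not the full ECS Deploy Runner configuration: the `container_memory` input name comes from the discussion itself, while the surrounding structure and the `container_cpu` input name are assumptions for illustration.

```hcl
# terragrunt.hcl (sketch) -- ECS Deploy Runner resource requests lowered
# to fit the artificial 2 vCPU / 4 GB limit on newer AWS accounts.
inputs = {
  # Named in the discussion: drop memory to 4096 MB from any higher value.
  container_memory = 4096

  # Assumed input name: request no more than 2 vCPU (2048 CPU units).
  container_cpu = 2048
}
```

After editing, plan and apply in each account to rebuild the deploy runner with the lower requests, then resume building and planning from there.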

Diff for: docs/discussions/knowledge-base/101.mdx (+23 lines)

@@ -0,0 +1,23 @@
+---
+hide_table_of_contents: true
+hide_title: true
+custom_edit_url: null
+---
+
+import CenterLayout from "/src/components/CenterLayout"
+import GitHub from "/src/components/GitHub"
+
+<CenterLayout>
+<span className="searchCategory">Knowledge Base</span>
+<h1>How can I migrate from a single-account to a multi-account Ref Arch in the future?</h1>
+<GitHub discussion={{"id":"D_kwDOF8slf84AORie","number":101,"author":{"login":"rhoboat"},"title":"How can I migrate from a single-account to a multi-account Ref Arch in the future?","body":"Gruntwork used to offer the single-account Ref Arch. If I have a setup like this today, how do I migrate to the multi-account set up?","bodyHTML":"<p dir=\"auto\">Gruntwork used to offer the single-account Ref Arch. If I have a setup like this today, how do I migrate to the multi-account set up?</p>","answer":{"body":"Unfortunately we don’t have a dedicated guide to switch a single account deployment to multi-account, nor is it something we officially support. In general, we recommend purchasing a multi-account Reference Architecture and migrating your existing workloads there, as the amount of work to migrate the existing infrastructure is fairly significant. It is likely going to be cheaper and faster to migrate your workloads to a new fresh Reference Architecture than it is to adapt your existing one.\r\n\r\nWith that in mind, my recommendation for adopting multi-account infrastructure is to start by creating the 3 supporting accounts (`security`, `logs`, and `shared`) using [the production example code](https://github.com/gruntwork-io/terraform-aws-service-catalog/tree/master/examples/for-production/infrastructure-live) as an example.\r\n\r\nOnce those accounts are setup, then you can create a new application account to act as your demo account that is hooked to those (e.g., having the `security` account as the gateway for IAM; having the `shared` account host docker images and AMIs; etc), and then migrate your demo workloads to there.\r\n\r\nOnce you have a successful demo workload, then you can repeat the step for `stage` and `prod` until you have everything migrated over.\r\n\r\nHowever, be apprised that this DIY approach is basically deploying the Gruntwork multi-account Ref Arch from scratch, and it is pretty involved with a lot of nuance, such as the deployment order, cross-account permissions, and resource-sharing configs. If you're going this route, you might as well purchase a Reference Architecture from us and migrate to it.\r\n\r\ncredit to @yorinasub17","bodyHTML":"<p dir=\"auto\">Unfortunately we don’t have a dedicated guide to switch a single account deployment to multi-account, nor is it something we officially support. In general, we recommend purchasing a multi-account Reference Architecture and migrating your existing workloads there, as the amount of work to migrate the existing infrastructure is fairly significant. It is likely going to be cheaper and faster to migrate your workloads to a new fresh Reference Architecture than it is to adapt your existing one.</p>\n<p dir=\"auto\">With that in mind, my recommendation for adopting multi-account infrastructure is to start by creating the 3 supporting accounts (<code class=\"notranslate\">security</code>, <code class=\"notranslate\">logs</code>, and <code class=\"notranslate\">shared</code>) using <a href=\"https://github.com/gruntwork-io/terraform-aws-service-catalog/tree/master/examples/for-production/infrastructure-live\">the production example code</a> as an example.</p>\n<p dir=\"auto\">Once those accounts are setup, then you can create a new application account to act as your demo account that is hooked to those (e.g., having the <code class=\"notranslate\">security</code> account as the gateway for IAM; having the <code class=\"notranslate\">shared</code> account host docker images and AMIs; etc), and then migrate your demo workloads to there.</p>\n<p dir=\"auto\">Once you have a successful demo workload, then you can repeat the step for <code class=\"notranslate\">stage</code> and <code class=\"notranslate\">prod</code> until you have everything migrated over.</p>\n<p dir=\"auto\">However, be apprised that this DIY approach is basically deploying the Gruntwork multi-account Ref Arch from scratch, and it is pretty involved with a lot of nuance, such as the deployment order, cross-account permissions, and resource-sharing configs. If you're going this route, you might as well purchase a Reference Architecture from us and migrate to it.</p>\n<p dir=\"auto\">credit to <a class=\"user-mention notranslate\" data-hovercard-type=\"user\" data-hovercard-url=\"/users/yorinasub17/hovercard\" data-octo-click=\"hovercard-link-click\" data-octo-dimensions=\"link_type:self\" href=\"https://github.com/yorinasub17\">@yorinasub17</a></p>"}}} />
+
+</CenterLayout>
+
+
+<!-- ##DOCS-SOURCER-START
+{
+"sourcePlugin": "github-discussions",
+"hash": "fb3b28a22bd84a624e6f54ecafd96d8e"
+}
+##DOCS-SOURCER-END -->
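The migration path described in the discussion above (stand up `security`, `logs`, and `shared` first, then hook up one application account at a time) implies an infrastructure-live repository layout roughly like the following. This is a hedged sketch: the account names come from the discussion, but the exact folder structure is an assumption modeled loosely on the linked production example, not a copy of it.

```
infrastructure-live/
├── security/   # gateway account for IAM users, groups, and cross-account roles
├── logs/       # centralized log aggregation account
├── shared/     # hosts shared Docker images and AMIs
├── demo/       # first application account, hooked to the three accounts above
├── stage/      # migrated once the demo workload is running successfully
└── prod/       # migrated last
```

Deployment order matters here, per the answer: the three supporting accounts must exist before any application account is wired to them.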

Diff for: docs/discussions/knowledge-base/103.mdx (+23 lines)

@@ -0,0 +1,23 @@
+---
+hide_table_of_contents: true
+hide_title: true
+custom_edit_url: null
+---
+
+import CenterLayout from "/src/components/CenterLayout"
+import GitHub from "/src/components/GitHub"
+
+<CenterLayout>
+<span className="searchCategory">Knowledge Base</span>
+<h1>Are CICD pipelines for applications supported in Gruntwork Pipelines?</h1>
+<GitHub discussion={{"id":"D_kwDOF8slf84AOSoj","number":103,"author":{"login":"marijakstrazdas"},"title":"Are CICD pipelines for applications supported in Gruntwork Pipelines?","body":"Are CICD pipelines for applications supported?","bodyHTML":"<p dir=\"auto\">Are CICD pipelines for applications supported?</p>","answer":{"body":"Gruntwork Pipelines only supports the infrastructure component of application CI/CD. At its core, Gruntwork Pipelines is a framework for securely deploying infrastructure to AWS without having to grant access to CI servers direct wide ranging permissions to your AWS account.\r\n\r\nTo elaborate further, Gruntwork Pipelines provides scripts and solutions that allow you to take a built artifact from traditional CI pipelines, and deploy it using Terragrunt or Terraform. The bulk of the pipeline is driven by CI servers (e.g. CircleCI, GitLab, etc), which allows you to use many of the off the shelf pipeline code that is available in the community to cater to many CI use cases for your application.\r\n\r\nFor example, you could extend a standard CircleCI pipeline that implements the following workflow with Gruntwork Pipelines:\r\n\r\n1. Run precommit checks\r\n2. Run unit tests of application\r\n3. [GRUNTWORK PIPELINES] Build docker image, tag with commit SHA, and push to ECR using AWS credentials in GW Pipelines, not in CircleCI.\r\n4. [GRUNTWORK PIPELINES] Checkout infrastructure code and update the docker image tag for an ECS service in the `terragrunt.hcl`\r\n5. [GRUNTWORK PIPELINES] Commit the updated `terragrunt.hcl`, push to `main`, and run `apply`.\r\n\r\nNote that Gruntwork Pipelines does not contain off the shelf workflows for you to use, as many workflows are highly dependent and tightly coupled with how you organize your infrastructure code.\r\n\r\nHowever, the Reference Architecture includes an off the shelf workflow that is compatible with the Reference Architecture, including template workflow configurations for the chosen CI server that can be installed on any application repo to be used to setup the above reference pipeline.","bodyHTML":"<p dir=\"auto\">Gruntwork Pipelines only supports the infrastructure component of application CI/CD. At its core, Gruntwork Pipelines is a framework for securely deploying infrastructure to AWS without having to grant access to CI servers direct wide ranging permissions to your AWS account.</p>\n<p dir=\"auto\">To elaborate further, Gruntwork Pipelines provides scripts and solutions that allow you to take a built artifact from traditional CI pipelines, and deploy it using Terragrunt or Terraform. The bulk of the pipeline is driven by CI servers (e.g. CircleCI, GitLab, etc), which allows you to use many of the off the shelf pipeline code that is available in the community to cater to many CI use cases for your application.</p>\n<p dir=\"auto\">For example, you could extend a standard CircleCI pipeline that implements the following workflow with Gruntwork Pipelines:</p>\n<ol dir=\"auto\">\n<li>Run precommit checks</li>\n<li>Run unit tests of application</li>\n<li>[GRUNTWORK PIPELINES] Build docker image, tag with commit SHA, and push to ECR using AWS credentials in GW Pipelines, not in CircleCI.</li>\n<li>[GRUNTWORK PIPELINES] Checkout infrastructure code and update the docker image tag for an ECS service in the <code class=\"notranslate\">terragrunt.hcl</code></li>\n<li>[GRUNTWORK PIPELINES] Commit the updated <code class=\"notranslate\">terragrunt.hcl</code>, push to <code class=\"notranslate\">main</code>, and run <code class=\"notranslate\">apply</code>.</li>\n</ol>\n<p dir=\"auto\">Note that Gruntwork Pipelines does not contain off the shelf workflows for you to use, as many workflows are highly dependent and tightly coupled with how you organize your infrastructure code.</p>\n<p dir=\"auto\">However, the Reference Architecture includes an off the shelf workflow that is compatible with the Reference Architecture, including template workflow configurations for the chosen CI server that can be installed on any application repo to be used to setup the above reference pipeline.</p>"}}} />
+
+</CenterLayout>
+
+
+<!-- ##DOCS-SOURCER-START
+{
+"sourcePlugin": "github-discussions",
+"hash": "e253d35abb7a97f0cfd0ee13206902eb"
+}
+##DOCS-SOURCER-END -->
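The five-step example workflow enumerated in the answer above could be wired into a CircleCI config along these lines. This is a minimal sketch under assumptions: the job names, the `cimg/base` image, and the script paths are illustrative placeholders, not part of Gruntwork Pipelines itself, which (per the answer) supplies the deploy steps rather than an off-the-shelf workflow.

```yaml
# .circleci/config.yml (sketch) -- job names and script paths are assumed.
version: 2.1
jobs:
  precommit:
    docker:
      - image: cimg/base:stable
    steps:
      - checkout
      - run: pre-commit run --all-files          # 1. precommit checks
  test:
    docker:
      - image: cimg/base:stable
    steps:
      - checkout
      - run: ./scripts/run-unit-tests.sh         # 2. application unit tests
  deploy:
    docker:
      - image: cimg/base:stable
    steps:
      - checkout
      # 3. Build the Docker image, tag it with the commit SHA, and push to
      #    ECR; AWS credentials live in GW Pipelines, not in CircleCI.
      - run: ./scripts/build-and-push-image.sh "$CIRCLE_SHA1"
      # 4.-5. Update the image tag in terragrunt.hcl, commit and push the
      #       change to main, and run apply.
      - run: ./scripts/update-tag-and-apply.sh "$CIRCLE_SHA1"
workflows:
  app-cicd:
    jobs:
      - precommit
      - test:
          requires: [precommit]
      - deploy:
          requires: [test]
```

The design point from the answer carries over: steps 3 to 5 delegate to Gruntwork Pipelines scripts so the CI server never holds wide-ranging AWS credentials directly.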

0 commit comments
