Developer Preview Only

GitLab on the AWS Cloud

Quick Start Reference Deployment


January 2021
Darwin Sanoy, GitLab Inc.
Dmitry Kolomiets, Andy Wong, Andrew Gargan AWS Quick Start team

Visit our GitHub repository for source files and to post feedback, report bugs, or submit feature ideas for this Quick Start.

This Quick Start was created by GitLab Inc. in collaboration with Amazon Web Services (AWS). Quick Starts are automated reference deployments that use AWS CloudFormation templates to deploy key technologies on AWS, following AWS best practices.

Overview

This guide provides instructions for deploying the GitLab Hybrid Reference Architecture for EKS Quick Start on the AWS Cloud.

This Quick Start is for users who want a production-grade GitLab instance deployed on AWS infrastructure.

GitLab is a single, open solution for managing DevOps and the software development life cycle.

Built-in Value of AWS Quick Starts

In addition to reference-architecture compliance, this Quick Start provides the following value-added features:

Market Reach and Reuse

  • As an AWS Quick Start (primary use case, initiated by AWS), it makes production-grade, HA-required third-party software deployable to AWS in the fewest possible clicks.

  • As an AWS Service Catalog item, it is an AWS-native alternative to ServiceNow self-service automation and could be used by large customers or partners to create parameter-customized versions.

  • As an AWS Control Tower Partner Solution - allowing vending of GitLab instances into AWS accounts managed by AWS Control Tower.

Quick Start Active Open Source Community

  • Quick Starts are managed as open source, with special attention given by the AWS Quick Start team and the vendor (GitLab in this case) whose software is the focus of the Quick Start.

  • All Quick Starts are quality tested using CI in many regions whenever they change or adopt new versions of dependent Quick Starts.

  • It relies on included Quick Start dependencies for EKS, PostgreSQL, and others. These shared dependencies are used, debugged, and improved as community-maintained components that underlie many AWS Quick Starts.

  • The value of the dependency framework is that any EKS Quick Start functionality can easily be passed through to the GitLab EKS Quick Start. For instance, support for EKS clusters on AWS Spot compute can be passed through for non-production setups like training and demos. As an example, any of [these EKS Quick Start settings](https://aws-quickstart.github.io/quickstart-amazon-eks/#_parameter_reference) could be surfaced. The GitLab EKS Quick Start depends on a few other Quick Starts that would likewise pass through any improvements.

Amazon may share user-deployment information with the AWS Partner that collaborated with AWS on the Quick Start.

GitLab on AWS

GitLab can be deployed to conform to GitLab’s Cloud Native Hybrid Architecture (at any size), which puts all container-compatible elements of GitLab into an Amazon EKS cluster and implements everything else that can be using AWS services (PaaS).

GitLab can be configured to utilize many AWS services directly and has been tested with the ones used in this Quick Start Guide. GitLab is built using SOA Service Endpoints - this means that as long as an AWS service endpoint conforms to the relevant standards, you can generally wire GitLab up to it.

This Quick Start integrates with the following AWS services:

* May not be available in all regions. Check AWS Regional Services List

Cost

You are responsible for the cost of the AWS services used while running this Quick Start. There is no additional cost for using the Quick Start.

The AWS CloudFormation templates for Quick Starts include configuration parameters that you can customize. Some of the settings, such as the instance type, affect the cost of deployment. For cost estimates, see the pricing pages for each AWS service you use. Prices are subject to change.

After you deploy the Quick Start, create AWS Cost and Usage Reports to deliver billing metrics to an Amazon Simple Storage Service (Amazon S3) bucket in your account. These reports provide cost estimates based on usage throughout each month and aggregate the data at the end of the month. For more information, see What are AWS Cost and Usage Reports?

Software licenses

This Quick Start creates an instance of GitLab Enterprise Edition with a free license (unlicensed). Additional features available in Premium or Ultimate can be enabled by purchasing a subscription, following the information in the Licensing and Subscription FAQ, or by contacting GitLab Sales for a trial license. Once your instance is running, you can use the admin section to add a license file as described in the Licensing and Subscription FAQ.

Architecture

Deploying this Quick Start for a new virtual private cloud (VPC) with default parameters builds the following GitLab environment in the AWS Cloud.

Architecture Diagram

Architecture
Figure 1. Quick Start architecture for GitLab on AWS

Architecture Details

Table 1. Key to architecture and best-practice compliance tags

Tag | Meaning
[AWS-WA] | Compliant with the AWS Well-Architected Framework, including favoring PaaS to externalize scaling and availability management, and HA / self-healing for non-PaaS resources.
[GL-RAR] | Compliant with a GitLab Reference Architecture requirement or recommendation. Applied only to items where the Reference Architectures actually express an opinion.

The result of this automation is a fully operational GitLab instance.

As shown in Figure 1, the Quick Start sets up the following:

  • A highly available architecture that spans two or three Availability Zones. [AWS-WA]

  • A VPC configured with public and private subnets, according to AWS best practices, to provide you with your own virtual network on AWS.1 [AWS-WA]

  • An Amazon EKS cluster, which creates the Kubernetes control plane.2 [AWS-WA]

  • S3 buckets for GitLab object storage, including Git LFS. [GL-RAR][AWS-WA]

  • In the public subnets:

    • Managed network address translation (NAT) gateways to allow outbound internet access for resources in the private subnets.1

  • In the private subnets:

    • An Auto Scaling group of Kubernetes EC2 nodes.2 [AWS-WA]

    • An Amazon RDS cluster for the GitLab and Praefect databases. [GL-RAR][AWS-WA]

    • Amazon ElastiCache for Redis (PaaS). [GL-RAR][AWS-WA]

    • S3 object storage (PaaS) for all non-Git file data and Git LFS. [GL-RAR][AWS-WA]

    • An AWS Elastic Load Balancer (load-balancing PaaS) for internal traffic. [GL-RAR][AWS-WA]

    • An AWS Elastic Load Balancer (load-balancing PaaS) for external access. [AWS-WA]

    • An ASG with Gitaly nodes spread across Availability Zones. GitLab’s Cloud Native Hybrid Reference Architecture calls for Gitaly to be implemented on EC2 Instance compute and to utilize Non-NFS storage for the file system containing Git repositories. [GL-RAR][AWS-WA]

    • An ASG with Praefect nodes spread across Availability Zones. GitLab’s Cloud Native Hybrid Reference Architecture calls for Praefect to be implemented on EC2 Instance compute. [GL-RAR][AWS-WA]

    • Amazon CloudWatch Container Insights integration. [AWS-WA]

    • Amazon CloudWatch Agent for Instance compute. [AWS-WA]

  • Optionally (on by default):

    • In one public subnet, a Linux bastion host in an Auto Scaling group to allow inbound Secure Shell (SSH) access to Amazon Elastic Compute Cloud (Amazon EC2) instances in private subnets. The bastion host is also configured with the Kubernetes kubectl command line interface (CLI) and the helm v3 CLI for managing the Kubernetes cluster as well as the AWS CLI V2 for general AWS management. [AWS-WA]

    • Amazon Route53 Hosted zone for DNS configuration. [AWS-WA]

    • AWS Certificate Manager TLS certificate. [AWS-WA]

    • Amazon Simple Email Service domain for outgoing email messages. [AWS-WA]

    • IaC parameters and outputs stored in AWS Parameter Store. [AWS-WA]

1The template that deploys the Quick Start into an existing VPC skips those components and prompts you for your existing VPC configuration.

2The template that deploys the Quick Start into an existing EKS cluster skips those components and prompts you for your existing EKS configuration.

Bundled Provisioning of Container Registry, Backup, Log Aggregation and Monitoring

  • Uses Amazon Linux 2 throughout, which benefits from Amazon-specific optimizations for performance and compute density on both EKS and EC2 hosts. [AWS-WA]

  • All GitLab logs are automatically aggregated into CloudWatch logs via CloudWatch Instance Agent and CloudWatch Container Insights. [AWS-WA]

  • EKS workload monitoring via CloudWatch Container Insights. [AWS-WA]

  • GitLab backup is preconfigured.

  • Prometheus is always deployed with the GitLab instance.

  • Setting ConfigureGrafana to Yes also deploys and configures Grafana. [AWS-WA]

  • The container registry is automatically set up at registry.[subdomain]. [AWS-WA]

Security

Application Secrets

  • Stores generated secrets in AWS Secrets Manager (e.g. generates a proper initial root password) [AWS-WA]

EKS Cluster Administration

The GitLab Kubernetes endpoint can be private; in that case a bastion host must be deployed in order to do direct cluster management (it is preloaded with AWS, EKS, and Kubernetes management utilities and IAM permissions for cluster management). The bastion host is set up with SSH keys, but can also be accessed more securely using the preconfigured SSM Session Manager. Additionally, the bastion host has command audit logging enabled, and the log is uploaded to CloudWatch.

Data Encryption

  • Data in Transit:

    • SSL for front end http. [AWS-WA]

    • SSL for GitLab to S3. [AWS-WA]

    • SSL for GitLab to PostgreSQL RDS. [AWS-WA]

    • SSL for GitLab to Redis ElastiCache. [AWS-WA]

    • SSL for GitLab to Runner. [AWS-WA]

    • SSL is not yet available for GitLab to Praefect to Gitaly. [AWS-WA]

  • Data at Rest (AWS Managed Keys):

    • S3 Encryption [AWS-WA]

    • PostgreSQL RDS Encryption [AWS-WA]

    • Redis ElastiCache encryption [AWS-WA]

    • Gitaly EBS Encryption (Git File System) [AWS-WA]

    • Praefect Data (PostgreSQL RDS). [AWS-WA]

Database

The GitLab Quick Start deploys a highly available PostgreSQL database cluster using the Amazon Aurora PostgreSQL Quick Start.

Depending on the projected size of your GitLab deployment, you may want to adjust the database instance size using the DBInstanceClass parameter.

There are two databases deployed to the same cluster:

  • GitLab database

  • Praefect tracking database

Praefect requires a separate tracking database, as described in the Gitaly Cluster documentation.

For more about external database configuration, see the GitLab documentation.

GitLab Storage

Git repository storage

  • EBS volumes on Gitaly Cluster instances. [GL-RAR]

Object storage for GitLab storage types

The GitLab Quick Start creates the following S3 buckets:

  • ArtifactsBucket

  • LfsBucket

  • UploadsBucket

  • PackagesBucket

  • TerraformBucket

  • PseudonymizerBucket

  • RegistryBucket

  • BackupBucket

  • BackupTempBucket

S3 lifecycle policies can be applied to these buckets for retention, storage-tier management, cleanup, and out-of-region replication.

Bucket contents are encrypted by default with SSE-S3. The bucket names are generated by CloudFormation and exported as SSM parameters (see the Exports section).

For more about external object storage, see the GitLab documentation.

GitLab Backups

Scheduling Backups

The backup schedule is controlled by a cron expression; the default value is 0 1 * * * (daily at 1:00 AM). You can set a different schedule using the BackupSchedule parameter.
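Before setting BackupSchedule, it can help to sanity-check the expression locally. The following is a minimal illustrative sketch (not a full cron parser) that accepts the standard 5-field form used for schedules like the daily-at-1-AM default:

```python
import re

# Rough shape of one cron field: "*", a number, a range, an optional "/step",
# and comma-separated lists of those. Illustrative only; not a full validator.
FIELD = r"(\*|\d+(-\d+)?)(/\d+)?(,(\*|\d+(-\d+)?)(/\d+)?)*"
CRON_RE = re.compile(r"^\s*" + r"\s+".join([FIELD] * 5) + r"\s*$")

def is_valid_backup_schedule(expr: str) -> bool:
    """Return True if expr looks like a 5-field cron expression."""
    return CRON_RE.match(expr) is not None

print(is_valid_backup_schedule("0 1 * * *"))     # the default daily schedule
print(is_valid_backup_schedule("0 1 * * * *"))   # six fields: rejected
```

A check like this catches the common mistake of supplying a 6-field expression where a 5-field one is expected.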

Content of the backups

Backups include a GitLab database snapshot and the contents of GitLab projects (Git repositories, wiki pages). Backups do not include the contents of the S3 buckets (see Object storage for the list of buckets). The main reasons behind this decision:

  • The contents of these buckets may be very large (pipeline artifacts, Docker images, etc.), which may affect the stability and performance of the backup jobs.

  • S3 is durable storage.

  • S3 storage policies also enable out-of-region replication and storage-class migration to control costs for older data.

If needed, a complete backup may be created using backup-utility, as described in the GitLab documentation.

Backup/Restore resources

The disk volume required for backups is about twice as large as the backup tarball itself, because all resources must be downloaded first and then packaged into a tarball that is also stored locally. Consider the size of your GitLab database and projects (mainly Git repositories) and set the size of the underlying EBS volumes appropriately using the BackupVolumeSize parameter.

The default Quick Start configuration was tested with backups averaging 20 GB, which took about 30 minutes to create and upload to the S3 bucket.

For large GitLab deployments, you can also adjust the CPU and memory allocations for the backup and restore pods using the BackupCpu and BackupMemory parameters.
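The 2x working-space rule above lends itself to simple arithmetic when choosing BackupVolumeSize. The sketch below applies that rule; the extra 20% safety margin is our assumption, not a figure from this guide:

```python
import math

def backup_volume_size_gb(db_gb: float, repos_gb: float, margin: float = 1.2) -> int:
    """Suggest a BackupVolumeSize (GB): ~2x the tarball size, plus a margin.

    The tarball size is approximated as database size plus repository size;
    the 1.2 margin is an assumed safety factor.
    """
    tarball_gb = db_gb + repos_gb
    return math.ceil(tarball_gb * 2 * margin)

# Example: a 5 GB database plus 15 GB of repositories (~20 GB tarball)
print(backup_volume_size_gb(5, 15))
```

Treat the result as a starting point and revisit it as your database and repositories grow.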

For more about backups, see the GitLab documentation.

GitLab Telemetry and Monitoring

CloudWatch Container Insights

The GitLab Quick Start integrates the EKS cluster with CloudWatch Container Insights to collect, aggregate, and summarize metrics and logs when the ConfigureContainerInsights parameter is set to Yes.

Logs and metrics can be accessed from the CloudWatch console:

CloudWatch Container Insights
Figure 2. CloudWatch Container Insights

Prometheus metrics

GitLab exposes Prometheus metrics under /-/metrics of the GitLab ingress. Optional Grafana integration can be enabled by setting the ConfigureGrafana parameter to Yes.
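The /-/metrics endpoint serves the Prometheus text exposition format, where each sample is a `metric_name{labels} value` line. The sketch below parses one such line; the URL and metric name used are hypothetical placeholders, not values from this Quick Start:

```python
from urllib.request import urlopen  # used only by the uncalled fetch helper

def parse_sample(line: str):
    """Parse a 'metric_name{labels} value' line into (name, value).

    Labels are ignored for brevity; real Prometheus clients parse them too.
    """
    name_part, _, value = line.rpartition(" ")
    name = name_part.split("{", 1)[0]
    return name, float(value)

def fetch_metrics(url: str) -> str:
    """Fetch the raw metrics page (needs network access and, usually, auth)."""
    return urlopen(url).read().decode()

# Hypothetical sample line, for illustration:
print(parse_sample('some_gitlab_counter_total{controller="projects"} 42'))
# To run against a real instance: fetch_metrics("https://gitlab.example.com/-/metrics")
```

This is only a sketch; in practice you would point Prometheus or Grafana at the endpoint rather than scraping it by hand.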

Grafana
Figure 3. Grafana

For more about Grafana integration, see the GitLab documentation.

Amazon EKS Console

The Amazon EKS console gives you a single place to see the status of your Kubernetes clusters, applications, and associated cloud resources.

See the prerequisites for Amazon EKS console access configuration in the AWS documentation.

AWS EKS Console
Figure 4. AWS EKS Console

Exports

Upon successful GitLab deployment, the following SSM parameters and Secrets Manager secrets are exposed:

Table 2. SSM parameters

Name | Type | Description
/quickstart/gitlab/{env-name}/infra/domain-name | SSM | GitLab domain name
/quickstart/gitlab/{env-name}/infra/hosted-zone-id | SSM | GitLab Route53 hosted zone ID
/quickstart/gitlab/{env-name}/infra/hosted-zone-name | SSM | GitLab Route53 hosted zone name
/quickstart/gitlab/{env-name}/cluster/name | SSM | EKS cluster name
/quickstart/gitlab/{env-name}/storage/buckets/artifacts | SSM | S3 Artifacts bucket name
/quickstart/gitlab/{env-name}/storage/buckets/backup | SSM | S3 Backup bucket name
/quickstart/gitlab/{env-name}/storage/buckets/backup-tmp | SSM | S3 Backup Temp bucket name
/quickstart/gitlab/{env-name}/storage/buckets/lfs | SSM | S3 LFS bucket name
/quickstart/gitlab/{env-name}/storage/buckets/packages | SSM | S3 Packages bucket name
/quickstart/gitlab/{env-name}/storage/buckets/pseudonymizer | SSM | S3 Pseudonymizer bucket name
/quickstart/gitlab/{env-name}/storage/buckets/registry | SSM | S3 Registry bucket name
/quickstart/gitlab/{env-name}/storage/buckets/terraform | SSM | S3 Terraform bucket name
/quickstart/gitlab/{env-name}/storage/buckets/uploads | SSM | S3 Uploads bucket name

Table 3. Secrets Manager secrets

Name | Type | Description
quickstart/gitlab/{env-name}/infra/smtp-credentials | Secret | SMTP server credentials
/quickstart/gitlab/{env-name}/storage/credentials | Secret | S3 object storage access credentials
/quickstart/gitlab/{env-name}/secrets/rails | Secret | GitLab Rails secret
/quickstart/gitlab/{env-name}/secrets/initial-root-password | Secret | GitLab initial root password
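These exports can be consumed programmatically. The sketch below builds the parameter and secret names from the patterns in Table 2 and Table 3, then reads them with boto3; "mygitlab" is a hypothetical environment name, and the AWS calls are isolated in a function that requires credentials and a deployed stack:

```python
def ssm_bucket_param(env: str, bucket: str) -> str:
    """SSM parameter name for one of the exported S3 bucket names."""
    return f"/quickstart/gitlab/{env}/storage/buckets/{bucket}"

def root_password_secret(env: str) -> str:
    """Secrets Manager secret ID for the GitLab initial root password."""
    return f"/quickstart/gitlab/{env}/secrets/initial-root-password"

def read_exports(env: str, region: str):
    """Fetch the artifacts bucket name and root password (needs AWS credentials)."""
    import boto3  # AWS SDK; only needed when actually calling AWS
    ssm = boto3.client("ssm", region_name=region)
    sm = boto3.client("secretsmanager", region_name=region)
    bucket = ssm.get_parameter(Name=ssm_bucket_param(env, "artifacts"))["Parameter"]["Value"]
    password = sm.get_secret_value(SecretId=root_password_secret(env))["SecretString"]
    return bucket, password

print(ssm_bucket_param("mygitlab", "artifacts"))
```

The same pattern applies to any of the other parameters in Table 2; only the path segment changes.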

Planning the deployment

Specialized knowledge

This deployment guide requires a moderate level of familiarity with AWS services. If you’re new to AWS, visit the Getting Started Resource Center and the AWS Training and Certification website. These sites provide materials for learning how to design, deploy, and operate your infrastructure and applications on the AWS Cloud.

GitLab Instance Version Selection

The version of GitLab that is deployed depends on the GitLab Helm chart version you select when launching this Quick Start. See the documentation for Mapping GitLab Helm Chart Versions to GitLab Product Versions.

GitLab Migrations and Versions

If you are migrating an existing GitLab Instance to one built by this Quick Start, it is critical to have the new instance version match the old instance version during migration. GitLab version upgrades can be done on the old instance before migration or on the new instance after migration - but not during a migration using mismatched versions. This includes the versions of all underlying components such as PostgreSQL and Redis.

GitLab Internet Ingress

The GitLab instance will be configured with an Internet Ingress by default.

Planning Instance Types, Sizes, and Numbers

This Quick Start is capable of creating GitLab Cloud Native Reference Architecture compliant GitLab instances. GitLab Reference Architectures have a "Cloud Native Hybrid" section that gives a manifest of required resources that can be used while deploying the template. For instance, here is the list

Planning Post-Installation Considerations

The following items need to be deployed after the GitLab instance is operational. They are noted here so that plans can be made around deploying them. Further instructions are given in Post-deployment steps.

Deploying GitLab Runners in EKS or EC2

For a production deployment, the Kubernetes runner chart integrated into the Quick Start should not be used, because it deploys runners to the same cluster as GitLab. This can create workload-scaling challenges, and it should never be done with privileged mode enabled, because that grants very high cluster privileges to any runner job. However, without privileged mode, Auto DevOps, Review Apps, and DinD-based container builds will not work (Kaniko-based container builds can work), so a separate GitLab Runner design and deployment is advised.

Alternative GitLab Runner deployment options include:

  1. For groups and projects that will be integrated into a Kubernetes cluster dedicated to development and/or production, the GitLab Runner can be deployed as a Managed App or using the Runner Helm Chart after the clusters are integrated.

  2. For non-Kubernetes integrated groups, projects or for shared runners for the entire instance, the GitLab HA Scaling Runner Vending Machine for AWS can be used to deploy runners of all OS types and Runner Executor types using an AWS ASG.

  3. A dedicated EKS cluster for a fleet of shared runners could also be implemented.

AWS account

If you don’t already have an AWS account, create one at https://aws.amazon.com by following the on-screen instructions. Part of the sign-up process involves receiving a phone call and entering a PIN using the phone keypad.

Your AWS account is automatically signed up for all AWS services. You are charged only for the services you use.

Technical requirements

Before you launch the Quick Start, your account must be configured as specified in the following table. Otherwise, deployment might fail.

Resource quotas

If necessary, request service quota increases for the following resources. You might need to request increases if your existing deployment currently uses these resources and if this Quick Start deployment could result in exceeding the default quotas. The Service Quotas console displays your usage and quotas for some aspects of some services. For more information, see What is Service Quotas? and AWS service quotas.

Resource | This deployment uses
VPCs | 1
Elastic IP addresses | Matches number of AZs
Security groups | 8
AWS Identity and Access Management (IAM) roles | 21
Auto Scaling groups | 3
EC2 instances (Gitaly, Praefect, EKS nodes) | 3 x number of AZs
RDS instances | Matches number of AZs
ElastiCache instances | Matches number of AZs
Route53 hosted zones | 1

Supported Regions

This Quick Start supports the following Regions (regions where Amazon EKS is supported):

  • US East

    • us-east-1, N. Virginia

    • us-east-2, Ohio

  • US West

    • us-west-1, N. California

    • us-west-2, Oregon

  • Africa

    • af-south-1, Cape Town

  • Asia Pacific

    • ap-east-1, Hong Kong

    • ap-south-1, Mumbai

    • ap-northeast-2, Seoul

    • ap-southeast-1, Singapore

    • ap-southeast-2, Sydney

    • ap-northeast-1, Tokyo

  • Canada

    • ca-central-1, Central

  • China

    • cn-north-1, Beijing

    • cn-northwest-1, Ningxia

  • Europe

    • eu-central-1, Frankfurt

    • eu-west-1, Ireland

    • eu-west-2, London

    • eu-south-1, Milan

    • eu-west-3, Paris

    • eu-north-1, Stockholm

  • Middle East

    • me-south-1, Bahrain

  • South America

    • sa-east-1, São Paulo

Certain Regions are available on an opt-in basis. See Managing AWS Regions.

IAM permissions

Before launching the Quick Start, you must sign in to the AWS Management Console with IAM permissions for the resources that the templates deploy. The AdministratorAccess managed policy within IAM provides sufficient permissions, although your organization may choose to use a custom policy with more restrictions. For more information, see AWS managed policies for job functions.

Prepare your GitLab Inc. account

A GitLab.com account is not required to deploy this Quick Start. A GitLab license is only required to enable "licensed tier only" features.

Prepare your AWS account

  1. Create an EC2 Keypair in the region you will deploy. You will need to provide the keypair name as a parameter.

  2. If deploying into an existing VPC, create two or more dedicated subnets for this Quick Start. You will need to provide the subnet IDs as parameters.

Ensure that the target AWS account has sufficient remaining service quota for the items listed in Resource quotas. If limits need increasing, make requests using the AWS Service Quotas console and wait until they are all granted before deploying.
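The quota check above can be scripted. The sketch below simply encodes the per-deployment counts from the Resource quotas table as a function of the number of Availability Zones you plan to select, so you can compare them against your remaining quotas:

```python
def required_resources(num_azs: int) -> dict:
    """Resource counts this deployment uses, per the Resource quotas table."""
    return {
        "VPCs": 1,
        "Elastic IP addresses": num_azs,
        "Security groups": 8,
        "IAM roles": 21,
        "Auto Scaling groups": 3,
        "EC2 instances (Gitaly, Praefect, EKS nodes)": 3 * num_azs,
        "RDS instances": num_azs,
        "ElastiCache instances": num_azs,
        "Route 53 hosted zones": 1,
    }

for resource, count in required_resources(3).items():
    print(f"{resource}: {count}")
```

Remember these are the counts consumed by this deployment alone; add them to your existing usage before comparing against the account's quotas.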

Deployment options

This Quick Start provides two deployment options:

  • Deploy GitLab into a new VPC. This option builds a new AWS environment consisting of the VPC, subnets, NAT gateways, security groups, bastion hosts, and other infrastructure components. It then deploys GitLab into this new VPC.

  • Deploy GitLab into an existing VPC. This option provisions GitLab in your existing AWS infrastructure.

The Quick Start provides separate templates for these options. It also lets you configure Classless Inter-Domain Routing (CIDR) blocks, instance types, Amazon EKS settings, and GitLab settings, as discussed later in this guide.

Deployment steps

Reference Architecture Compliance

This default configuration is reference-architecture compliant in the number and configuration of instances and the setup of SSL and DNS, but it deploys the smallest instances possible to control costs for a functional POC. This means it will only be capable of matching published GitLab Reference Architecture benchmarks if you scale the instance resources to match the chosen benchmark. If you are ready for production scale now (including scaled POCs), follow the guides indicated below to scale to your desired initial number of users from the start.

Sign in to your AWS account

Deployment takes about 1 to 2 hours to complete.

  1. Sign in to your AWS account at https://aws.amazon.com with an IAM user role that has the necessary permissions. For details, see Planning the deployment earlier in this guide.

  2. Make sure that your AWS account is configured correctly, as discussed in the Technical requirements section.

  3. Ensure you have, or know how to find, the resource names, identifiers, or ARNs discussed in the Technical requirements section.

Launch the Quick Start

You are responsible for the cost of the AWS services used while running this Quick Start reference deployment. There is no additional cost for using this Quick Start. For full details, see the pricing pages for each AWS service used by this Quick Start. Prices are subject to change.
  1. Choose one of the following options to launch the AWS CloudFormation template. For help with choosing an option, see Deployment options earlier in this guide.

Deploy GitLab into a new VPC on AWS

View template

Deploy GitLab into an existing VPC on AWS

View template

If you’re deploying GitLab into an existing VPC, make sure that your VPC has two private subnets in different Availability Zones for the workload instances and that the subnets aren’t shared. This Quick Start doesn’t support shared subnets.

These private subnets require NAT gateways in their route tables to allow the instances to download packages and software without exposing them to the internet. Also make sure that the domain name option in the DHCP options is configured as explained in DHCP options sets.

The Quick Start uses Kubernetes integration with Elastic Load Balancing which requires each private subnet to be tagged with kubernetes.io/role/internal-elb=true and each public subnet with kubernetes.io/role/elb=true.
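For existing-VPC deployments, the subnet tags above can be applied with the AWS SDK. This is a hedged sketch; the subnet IDs passed in would be your own, and the uncalled `tag_subnets` function requires AWS credentials:

```python
def elb_role_tags(public: bool) -> list:
    """Tag list Kubernetes expects on a subnet for ELB placement."""
    key = "kubernetes.io/role/elb" if public else "kubernetes.io/role/internal-elb"
    return [{"Key": key, "Value": "true"}]

def tag_subnets(subnet_ids: list, public: bool, region: str):
    """Apply the ELB role tag to existing subnets (requires AWS credentials)."""
    import boto3  # AWS SDK; only needed when actually calling AWS
    ec2 = boto3.client("ec2", region_name=region)
    ec2.create_tags(Resources=subnet_ids, Tags=elb_role_tags(public))

print(elb_role_tags(public=False))
```

Tag private subnets with `public=False` and public subnets with `public=True` before launching the template.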

You provide your VPC settings when you launch the Quick Start.

  1. Check the AWS Region that’s displayed in the upper-right corner of the navigation bar, and change it if necessary. This Region is where the network infrastructure for GitLab is built (or expected to be if deploying into an existing VPC). The template is launched in the us-east-1 Region by default.

Changing the region later will clear all form values.

For details on each parameter, see the Parameter reference section of this guide.

Any parameters in the form that are not mentioned below can be left at their default setting.

  1. On the Quick create stack page:

    • For Stack name, set a unique name that is not too long. Note: the names of all resources created are prefixed with this name.

    • For Number of Availability Zones pick the number that matches your HA requirements and is not more than the selected region has available.

    • For Availability Zones select at least 2 for non-production and 3 for production.

    • For SSH key pair name select an existing keypair in the region. If none exist, you must create one now and specify it here (refresh the browser for the newly created key pair to appear on the list).

    • For Provision a bastion host select Enabled.

    • For Allowed bastion external access CIDR, look up your public IP address and enter it with a /32 CIDR (e.g. 123.231.123.231/32).

    • EKS public access endpoint can stay at the default of Disabled if you will use the Bastion host (recommended) for administering the GitLab cluster.

    • EKS node instance type can remain at the default for non-production and non-performance testing setups. For production or testing setups, select an instance type that meets the vCPU and Memory requirements of your chosen reference architecture user level - spread across all EKS nodes you select.

    • EKS node volume capacity can be left at the default of 100; no GitLab data files of any type are ever stored on these nodes.

    • For Number of EKS nodes pick a value that when multiplied by the resources of the EKS Node Instance type meets the vCPU and Memory requirements of your chosen reference architecture user level. It should be at least the same as the number of Availability Zones being configured.

    • For Maximum number of EKS nodes select a number equal to or greater than Number of EKS nodes. This limit should reflect what you feel would be excessive scaling that might be caused by factors other than proper load based scaling. 20% higher than Number of EKS nodes is a reasonable starting point.

    • If you will be using the cluster integrated Grafana for GitLab, for Configure Grafana select Yes.

    • For GitLab database instance class select an instance type that fulfills the vCPUs and Memory requirements of PostgreSQL across 3 instances. These must be of the special "db" type and must be specifically available for PostgreSQL as documented in DB Instances Engine Support.

    • For Database admin password provide a password that meets the complexity requirements on the Cloud Formation form.

    • For Cache mode, production and performance-testing setups must select External to ensure Redis is configured for ElastiCache rather than placed in the EKS cluster. Redis in the EKS cluster is not currently supported by the GitLab Reference Architectures.

    • For Number of cache replicas enter the number that matches the availability zones you are configuring for.

    • Cache node type can remain at the default for non-production and non-performance testing setups. For production or testing setups, specify the Cache instance node type. These must be of the special instance type "cache". Cache Instance Type List.

    • For GitLab DNS Name, specify the subdomain under which the gitlab host will be created; do not include the gitlab host name itself. For example, devopstools.ourcompany.com creates a GitLab instance at gitlab.devopstools.ourcompany.com.

    • For Create Route53 hosted zone select Yes.

    • For Request AWS Certificate Manager SSL certificate select Yes.

    • For Outgoing SMTP domain select CreateNew.

    • For GitLab Helm chart version pick a version from GitLab version mappings.

    • For GitLab application version select the corresponding GitLab version for the chart version that was picked from GitLab version mappings.

    • For Number of Gitaly replicas set it the same as the number of Availability Zones being configured.

    • For Gitaly instance type, select an instance type that meets the vCPU and Memory requirements of your chosen reference architecture user level.

    • Gitaly volume capacity is used for Git repository storage as well as Gitaly's working and cache storage. Remember that overprovisioning storage size yields more IOPS on AWS; you may also elect to pay for higher IOPS levels (see the EBS docs). Because Gitaly uses a lot of working and cache storage, provision at least double your 1-2 year Git repository storage projection.

    • For Number of Praefect replicas set it the same as the number of Availability Zones being configured.

    • For Praefect instance type, select an instance type that meets the vCPU and Memory requirements of your chosen reference architecture user level.

    • For Quick Start S3 bucket region, select us-east-1 (this does not need to match the region you are deploying to).

    • Check I acknowledge that AWS CloudFormation might create IAM resources with custom names.

    • Check I acknowledge that AWS CloudFormation might require the following capability: CAPABILITY_AUTO_EXPAND
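Two of the parameter choices above reduce to simple checks: forming the /32 CIDR for bastion access, and verifying that the EKS node count and instance size cover your reference-architecture target while spanning all AZs. The sketch below illustrates both; the vCPU/memory figures in the example are illustrative, not official reference-architecture numbers:

```python
import ipaddress

def bastion_cidr(public_ip: str) -> str:
    """Validate the IP address and return it as a /32 CIDR."""
    return f"{ipaddress.ip_address(public_ip)}/32"

def nodes_cover_target(node_count: int, vcpu_per_node: int, mem_gb_per_node: int,
                       target_vcpu: int, target_mem_gb: int, num_azs: int) -> bool:
    """True if the node group meets the vCPU/memory target and spans all AZs."""
    return (node_count >= num_azs
            and node_count * vcpu_per_node >= target_vcpu
            and node_count * mem_gb_per_node >= target_mem_gb)

print(bastion_cidr("123.231.123.231"))
# e.g. 3 nodes of 8 vCPU / 16 GB each against an assumed 24 vCPU / 48 GB target
print(nodes_cover_target(3, 8, 16, 24, 48, num_azs=3))
```

`ipaddress.ip_address` raises `ValueError` for malformed input, which makes it a convenient guard before pasting the CIDR into the form.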

Unless you are customizing this Quick Start’s templates for your own deployment projects, we recommend that you keep the default settings for the parameters labeled Quick Start S3 bucket name, Quick Start S3 bucket Region, and Quick Start S3 key prefix. Changing these parameter settings automatically updates code references to point to a new Quick Start location. For more information, see the AWS Quick Start Contributor’s Guide.
  1. Choose Create stack to deploy the stack.

If you did not set CreateSslCertificate to Yes, skip this section.

If you chose Yes for both Create Route53 hosted zone and Request AWS Certificate Manager SSL certificate, you will need to create a delegated DNS subdomain while the stack is running. Follow these steps to be sure your stack does not fail. To ensure the stack completes successfully, do this within 1 hour of the ACM process entering a wait state.

The waiting period starts when the child stack containing "…​GitLabStack…​Infrastructure…​" is waiting for creation of a resource called "SslCertificate"

These steps can be completed as soon as the subdomain Hosted Zone is created in Route53 - this happens well ahead of the ACM certificate wait state (which will not occur if you do these steps as soon as the hosted Zone is created).

There will be both a Public and a Private hosted zone created for your subdomain - it is important to obtain the nameserver records from the Public hosted zone.
  1. Monitor Route53 for the creation of a Hosted Zone with the domain you specified for GitLab DNS Name (For this example we will use devopstools.ourcompany.com).

  2. In the [AWS Route53 console](https://console.aws.amazon.com/route53/v2/hostedzones#), find the new hosted zone’s Public recordset for the subdomain and copy its nameserver list. In the screenshot, "qsg.devops4the.win" is the hosted zone created by the Quick Start - copy the values under "Value/Route traffic to".

newhostedzone
  1. Edit the DNS records of the primary domain to add an NS record for the subdomain pointing to those DNS servers. This is done in whatever system hosts the root domain’s primary nameserver records. (For this example, that would be ourcompany.com.)

This article discusses how to do it when the root domain DNS is also in Route53: [Creating a subdomain that uses Amazon Route 53 as the DNS service without migrating the parent domain](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/CreatingNewSubdomain.html)

In the screenshot "devops4the.win" is the root DNS domain and is hosted in Route53.

adddnsdelegation
Not all domain registrars have the ability to create NS records for hosted zones ("subdomain DNS delegation"). If this is the case for your registrar, then you have the option to redirect the root domain to AWS for DNS and then use Route53 to create the subdomain DNS delegation.
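When the root domain is also hosted in Route 53, the NS delegation described above can be scripted with the AWS CLI. The following is a sketch only: the subdomain, nameserver values, and parent zone ID are placeholders that you must replace with the values from your own hosted zones.

```shell
# Build a Route 53 change batch that delegates the subdomain to the
# nameservers of the Quick Start's public hosted zone.
# All values below are placeholders for illustration only.
SUBDOMAIN="devopstools.ourcompany.com"
NS1="ns-111.awsdns-11.com"
NS2="ns-222.awsdns-22.net"

CHANGE_BATCH='{"Changes":[{"Action":"CREATE","ResourceRecordSet":{"Name":"'"${SUBDOMAIN}"'","Type":"NS","TTL":300,"ResourceRecords":[{"Value":"'"${NS1}"'"},{"Value":"'"${NS2}"'"}]}}]}'
echo "${CHANGE_BATCH}"

# Apply the change to the PARENT domain's hosted zone (placeholder zone ID):
# aws route53 change-resource-record-sets \
#   --hosted-zone-id Z_PARENT_ZONE_ID \
#   --change-batch "${CHANGE_BATCH}"
```

The actual `aws route53 change-resource-record-sets` call is left commented out because it modifies live DNS; run it only after verifying the change batch against your own zones.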
  1. If your stack is waiting on this change, be very patient while DNS propagates and AWS ACM attempts to validate the domain again. With DNS propagation and CloudFormation status update delays, this could take over an hour.

[An issue has been created](https://github.com/aws-quickstart/quickstart-eks-gitlab/issues/37) to improve this experience by allowing a host to be inserted into an existing AWS hosted zone.

Setting Up Client-Based Name Resolution for Non-Custom Domain Setups

When you do not specify DomainName, the Quick Start creates a random subdomain and hosted zone that can be used in your hosts file to access your instance.

  1. In the AWS Systems Manager console, click Parameter Store.

  2. In the search field, type /infra/domain-name to locate the parameter and copy the value to a temporary location.

  3. Return to Parameter Store in the AWS Systems Manager console.

  4. In the search field, type /loadbalancer to locate the parameter and copy the value to a temporary location.

  5. Use nslookup in a console to get any one of the load balancer’s IP addresses and copy the value to a temporary location.

  6. Edit your local hosts file to add an entry pointing the host name at the IP address (swap out 111.111.111.111 with the IP address from above). Note the "gitlab" prefix added to the beginning. Replace full.subdomainname.from.parameterstore with your value.

111.111.111.111 gitlab.full.subdomainname.from.parameterstore

Over time, the load balancer may retire this IP address. If this happens, repeat these steps to get an active load balancer IP and update the /etc/hosts file.
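The hosts-file entry from the steps above can be assembled as follows. This is a sketch using the placeholder values from this section; substitute the Parameter Store domain value and the nslookup IP address you copied.

```shell
# Placeholder values from the steps above - substitute your own.
DOMAIN="full.subdomainname.from.parameterstore"   # value of the /infra/domain-name parameter
LB_IP="111.111.111.111"                           # one of the load balancer's IP addresses

# Note the "gitlab." prefix on the host name.
HOSTS_LINE="${LB_IP} gitlab.${DOMAIN}"
echo "${HOSTS_LINE}"

# To append it on Linux/macOS (on Windows, edit
# C:\Windows\System32\drivers\etc\hosts instead):
# echo "${HOSTS_LINE}" | sudo tee -a /etc/hosts
```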

  1. On the Configure stack options page, you can specify tags (key-value pairs) for resources in your stack and set advanced options. When you’re finished, choose Next.

  2. On the Review page, review and confirm the template settings. Under Capabilities, select the two check boxes to acknowledge that the template creates IAM resources and might require the ability to automatically expand macros.

  3. Choose Create stack to deploy the stack.

  4. Monitor the status of the stack. When the status is CREATE_COMPLETE, the GitLab deployment is ready.

  5. Use the values displayed in the Outputs tab for the stack, as shown in Figure 5, to view the created resources.

cfn_outputs
Figure 5. GitLab outputs after successful deployment

Retrieving Credentials and Logging In to GitLab

  1. Go to the AWS Secrets Manager console in the region where GitLab was deployed and click the secret ending in initial-root-password.

  2. Click the Retrieve secret value button.

  3. Decode the base64-encoded password with one of the following commands (the password will be placed in your copy/paste buffer):

Bash command
echo {base64-encoded-password} | base64 -d | pbcopy
PowerShell command
[Text.Encoding]::Utf8.GetString([Convert]::FromBase64String('{base64-encoded-password}')) | clip
  1. Copy the emitted password (or pipe the command output directly to the clipboard as shown above; Bash: | pbcopy, PowerShell: | clip).

  2. Use an internet browser to access the GitLab instance URL.

  3. Login with the username 'root' and paste the password in.

  • If you get the error "Invalid Login or password." you probably did not use 'root' as the username.

  • If you get the error "This site can’t be reached" you are probably not using the actual host name you put in your hosts file.

  • If you get the message "default backend - 404" you probably forgot to prefix the domain with "gitlab." in your hosts file and in your browser.
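The secret retrieval and decoding steps above can also be combined into a single Bash pipeline. This is a sketch: the secret ID shown is a placeholder for your stack's actual secret name, and a sample encoded value stands in for the real password.

```shell
# With AWS access configured, the encoded password could be fetched directly.
# The secret ID below is a placeholder - use the name ending in
# "initial-root-password" from the Secrets Manager console:
# ENCODED=$(aws secretsmanager get-secret-value \
#             --secret-id "<environment>-initial-root-password" \
#             --query SecretString --output text)

# Demonstration with a sample value ("s3cr3t", base64-encoded):
ENCODED="czNjcjN0"
DECODED=$(echo "${ENCODED}" | base64 -d)
echo "${DECODED}"   # prints: s3cr3t
```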

Accessing Kubernetes

Using the bastion host for cluster administration has the following advantages: all the Kubernetes utilities are preinstalled, it has permissions to the cluster, and it is the only way to access the cluster if you set up the Quick Start not to expose a public endpoint for the cluster. If you are using the bastion host, complete these steps on that machine.

Getting a Console on the Bastion Host

  1. Open the EC2 instance console in the region that GitLab was deployed to.

  2. Locate the instance whose name is EKSBastion

  3. Right-click it and select Connect.

  4. On the Connect to instance…​ page select the Session Manager tab.

  5. Click the orange Connect button.

  6. Wait for several seconds to be placed in a shell console.

  7. Switch to the root user where kubectl is preconfigured.

  sudo su
  1. Test with this command:

  /usr/local/bin/kubectl get pods --all-namespaces

Enabling AWS Simple Email Service (SES) Outbound Email

AWS Simple Email Service (SES) accounts are purposely created in "sandbox" mode to prevent spam abuse of this massively scaled email service. You must take a manual action to enable production sending.

  1. While logged into AWS, go to the Simple Email Services (SES) console.

  2. On the left navigation, under Email Sending, click Sending Statistics

  3. On the right pane, next to Production Access, visually validate that it says Sandbox.

  4. Click Edit your account details (button).

  5. Next to Enable Production Access, click Yes.

  6. For Mail type leave the default of Transactional.

  7. For Website URL enter your GitLab instance url.

  8. Under Use case description, type Testing GitLab instance outbound email.

  9. Under Additional contact addresses type your real email address.

  10. Find I agree to the AWS Service Terms and AUP, and check the box to the right.

  11. Click Submit for review.

Notice the new banner near the top containing Your account details are currently under review.

Accessing Grafana

  1. Go to the AWS Secrets Manager console in the region where GitLab was deployed and click the secret ending in initial-grafana-password.

  2. Click the Retrieve secret value button.

  3. Decode the base64-encoded password with one of the following commands (the password will be placed in your copy/paste buffer):

Bash command
echo {base64-encoded-password} | base64 -d | pbcopy
PowerShell command
[Text.Encoding]::Utf8.GetString([Convert]::FromBase64String('{base64-encoded-password}')) | clip
  1. Visit YourGitLabURL/-/grafana

  2. For user specify 'root'

  3. Use the password retrieved above.

Test the deployment

Post-deployment steps

When deployed for production-grade implementations, this Quick Start does not set up GitLab Runners for you. To set up runners, consult the GitLab Runner documentation or leverage the GitLab HA Scaling Runner Vending Machine for AWS.

Configuring GitLab for Kubernetes Cluster Provisioning on EKS

GitLab can be configured to provision EKS clusters into AWS accounts. This requires configuring an AWS IAM role (and possibly an IAM user) for GitLab authentication in an AWS account. Each account into which clusters will be provisioned also requires at least one EKS provisioning IAM role. Follow the GitLab documentation on integration configuration for EKS provisioning.

Configuring GitLab to integrate with Existing EKS Clusters

A GitLab instance of any type (it does not have to be running on Kubernetes) can integrate with a Kubernetes cluster for Review Apps and Auto DevOps deployments to preproduction and production environments. For production deployments, the cluster containing your GitLab instance should not be used for this purpose due to the level of privileges required to deploy Review Apps and Auto DevOps workloads to the cluster.

SRE practices for using GitLab on AWS

There is nothing AWS-specific to account for in operating GitLab. Where GitLab relies on AWS services such as CloudWatch and S3, the AWS-specific practices for those services apply - but as long as these services are correctly integrated, they are abstracted within GitLab. Service configuration may also provide benefits that GitLab does not anticipate. For instance, S3 storage policies can replicate backups to another region.

GitLab has distinctive SRE management concerns that need to be monitored and adjusted. Aspects of GitLab operations can be impacted by instance size choices, provisioned IOPS, and other cloud-level implementation decisions.

GitLab provides the GitLab Performance Tool (gpt) and Reference Architecture performance benchmarks created with that tool for the reference of GitLab instance SREs. If your instance will be highly scaled, run gpt against it to establish a performance baseline. This will help with scaling plans.

Please consult the GitLab Documentation for general operations and usage information.

Log Monitoring Using CloudWatch Logs

CloudWatch log groups can be tailed from the AWS CLI. Replace /aws/log/path with the log group you want to follow:

aws logs tail --since 1d --follow /aws/log/path

Performance monitoring

Using CloudWatch Metrics

CloudWatch metrics are collected for instances and containers. These metrics can be used for performance analysis, graphing, alarms and events in AWS CloudWatch. As per standard CloudWatch capabilities alarms and events can interact with many other AWS services for notifications or automated actions.

Using Integrated Prometheus

The Quick Start wires up GitLab to a Prometheus deployment in the cluster to expose all GitLab application metrics. The Grafana deployment option enables "in-instance" Grafana capabilities with these metrics.

Security

The infrastructure that GitLab is deployed on must be secured according to that infrastructure’s security best practices. GitLab has reasonable security out of the box, but as with all complex products it can be configured with tighter security. Some practices are outlined in GitLab instance: security best practices

Public Internet Access

If the GitLab instance will be on the public internet, the industry-advised security precautions and due diligence of public internet services should be applied to it, including, but not limited to, GitLab updates and patching and infrastructure updates and patching. Leveraging AWS hardened services for the front end can help improve the security posture (for example, AWS load balancers, DNS and edge network services, and SES for SMTP).

The Quick Start does enable a more secure administrative mode by allowing the EKS cluster to be configured without a public endpoint and then configuring the bastion host. The bastion host automatically contains all the cluster administration tools, such as kubectl, eksctl, the AWS CLI, and Helm. The SSM agent is also preinstalled and SSM Session Manager permissions are configured. This means that not even the bastion host needs a publicly exposed port in order to get a console session - either through the AWS web console or a workstation installation of the AWS CLI with the SSM extensions.

For production setups, GitLab Runner should not be deployed to the cluster that runs GitLab, especially in Privileged mode.

GitLab publishes new releases - including security hotfixes - on the "Releases" category of the general blog.

FAQ

Q. I encountered a CREATE_FAILED error when I launched the Quick Start.

A. If AWS CloudFormation fails to create the stack, relaunch the template with Rollback on failure set to Disabled. This setting is under Advanced in the AWS CloudFormation console on the Configure stack options page. With this setting, the stack’s state is retained and the instance is left running, so you can troubleshoot the issue. (For Windows, look at the log files in %ProgramFiles%\Amazon\EC2ConfigService and C:\cfn\log.)

When you set Rollback on failure to Disabled, you continue to incur AWS charges for this stack. Delete the stack when you finish troubleshooting.

For additional information, see Troubleshooting AWS CloudFormation on the AWS website.

Q. I encountered a size limitation error when I deployed the AWS CloudFormation templates.

A. Launch the Quick Start templates from the links in this guide or from another S3 bucket. If you deploy the templates from a local copy on your computer or from a location other than an S3 bucket, you might encounter template size limitations. For more information, see AWS CloudFormation quotas on the AWS website.

Troubleshooting

<Steps for troubleshooting the deployment go here.>

Parameter reference

Unless you are customizing the Quick Start templates for your own deployment projects, we recommend that you keep the default settings for the parameters labeled Quick Start S3 bucket name, Quick Start S3 bucket Region, and Quick Start S3 key prefix. Changing these parameter settings automatically updates code references to point to a new Quick Start location. For more information, see the AWS Quick Start Contributor’s Guide.

Launch into a new VPC

Table 4. Basic configuration
Parameter label (name) Default value Description

Environment name (EnvironmentName)

Blank string

Name of the GitLab environment (leave empty to use dynamically generated name). Environment name is used to generate unique resource names. Multiple GitLab environments with different names can be deployed in the same region.

Availability Zones (AvailabilityZones)

Requires input

List of Availability Zones to use for the subnets in the VPC. Pick only as many as match what was selected for Number of Availability Zones (NumberofAZs).

SSH key pair name (KeyPairName)

Requires input

Name of an existing public/private key pair, which allows you to securely connect to your instance after it launches. If the selection list is blank, you must create a keypair in the target region, refresh this form and select it.

Table 5. VPC network configuration
Parameter label (name) Default value Description

Number of Availability Zones (NumberOfAZs)

2

Number of Availability Zones to use in the VPC. This must match your selections in the list of Availability Zones parameter.

VPC CIDR (VPCCIDR)

10.0.0.0/16

CIDR block for the VPC.

Private subnet 1 CIDR (PrivateSubnet1CIDR)

10.0.0.0/19

CIDR block for private subnet 1 located in Availability Zone 1.

Private subnet 2 CIDR (PrivateSubnet2CIDR)

10.0.32.0/19

CIDR block for private subnet 2 located in Availability Zone 2.

Private subnet 3 CIDR (PrivateSubnet3CIDR)

10.0.64.0/19

CIDR block for private subnet 3, located in Availability Zone 3.

Public subnet 1 CIDR (PublicSubnet1CIDR)

10.0.128.0/20

CIDR block for the public (DMZ) subnet 1 located in Availability Zone 1.

Public subnet 2 CIDR (PublicSubnet2CIDR)

10.0.144.0/20

CIDR block for the public (DMZ) subnet 2 located in Availability Zone 2.

Public subnet 3 CIDR (PublicSubnet3CIDR)

10.0.160.0/20

CIDR block for the public (DMZ) subnet 3, located in Availability Zone 3.

Table 6. Bastion host configuration
Parameter label (name) Default value Description

Provision a bastion host (ProvisionBastionHost)

Disabled

Choose "Enabled" to provision a bastion host to access EKS cluster. Bastion host includes kubectl and helm tools already configured for EKS cluster access. Bastion host is recommended for EKS cluster management in production environments as a more secure option than a public EKS API endpoint. Select a larger instance size to do performance testing from the bastion host using GitLab Performance Tool (gpt).

Bastion host instance type (BastionInstanceType)

t3.micro

Instance type for the bastion host.

Allowed bastion external access CIDR (RemoteAccessCIDR)

Requires input

CIDR IP range that is permitted to access the bastions. We recommend that you set this value to a trusted IP range. If the bastion host is enabled, this parameter is required.

Table 7. Amazon EKS cluster configuration
Parameter label (name) Default value Description

EKS public access endpoint (EKSPublicAccessEndpoint)

Disabled

Configure access to the Kubernetes API server endpoint from outside of your VPC. Public API endpoint is not recommended for production environments. Use bastion host for cluster management instead.

Additional EKS admin ARN (IAM user) (AdditionalEKSAdminUserArn)

Blank string

(Optional) IAM user Amazon Resource Name (ARN) to be granted administrative access to the EKS cluster.

Additional EKS admin ARN (IAM role) (AdditionalEKSAdminRoleArn)

Blank string

(Optional) IAM role Amazon Resource Name (ARN) to be granted administrative access to the EKS cluster.

EKS node instance type (NodeInstanceType)

t3.large

EC2 instance type of EKS nodes.

EKS node volume capacity (NodeVolumeSize)

100

Capacity of the EBS volume used by EKS nodes, in GB. Note that in the default configuration no GitLab data is stored on EBS volumes.

Number of EKS nodes (NumberOfNodes)

3

Number of Amazon EKS node instances. Should have a minimum value matching the number of Availability Zones configured for GitLab deployment.

Maximum number of EKS nodes (MaxNumberOfNodes)

6

Maximum number of Amazon EKS node instances. The default is six.

Configure Container Insights (ConfigureContainerInsights)

Yes

Choose "No" to disable integration with CloudWatch Container Insights. Container Insights ensures pod performance metrics and logs are collected to CloudWatch - including GitLab application logs.

Provision and integrate Grafana (ConfigureGrafana)

No

Choose "Yes" to provision Grafana to EKS and enable GitLab integration with it.

Table 8. Amazon Aurora Postgres configuration
Parameter label (name) Default value Description

GitLab PostgreSQL version (DBEngineVersion)

12.4

Select the database engine version. The version must be supported by your target version of GitLab (https://docs.gitlab.com/omnibus/settings/database.html#gitlab-140-and-later) and by the Aurora RDS instance class you choose (https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Concepts.DBInstanceClass.html).

GitLab database instance class (DBInstanceClass)

db.r5.large

The name of the compute and memory capacity class of the database instance. This Quickstart deploys Multi-AZ Postgres replicas in different Availability Zones. The instance class is applied to both primary and standby instances. The instance class you choose must support the needed PostgreSQL version: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Concepts.DBInstanceClass.html

GitLab database name (DBName)

gitlab

Name of the GitLab Postgres database.

Praefect database name (DBPraefectName)

praefect

Name of the Praefect Postgres database.

Database admin username (DBUserName)

gitlab

The database admin account username.

Database port (DBPort)

5432

The port the instance will listen for connections on.

Table 9. GitLab cache configuration
Parameter label (name) Default value Description

Where to provision Redis (CacheMode)

External

'BuiltIn' will install Redis in the EKS cluster while 'External' provisions Amazon ElastiCache Redis. GitLab Redis in Kubernetes clusters is not GitLab reference architecture compliant and only for training or testing setups.

Number of cache replicas (CacheNodes)

2

Provide the number of cache replicas (applicable for both BuiltIn and External cache modes). For BuiltIn cache mode this is the number of replica Redis pods (in addition to primary pod). For External mode this is the total number of nodes used for Redis. This should match the number of Availability Zones you are configuring for.

Cache node type (CacheNodeType)

cache.t3.medium

If you chose to use External cache above, provide cache node type. List of acceptable instance types: https://aws.amazon.com/elasticache/pricing/

Table 10. GitLab infrastructure configuration
Parameter label (name) Default value Description

GitLab DNS name (DomainName)

Requires input

The domain name for the GitLab server.

Create Route 53 hosted zone (CreateHostedZone)

No

Choose "Yes" if you want to create an Amazon Route 53 hosted zone to manage DNS for the GitLab domain.

Request AWS Certificate Manager SSL certificate (CreateSslCertificate)

No

Choose "Yes" if you want to request AWS Certificate Manager SSL certificate for GitLab domain.

Table 11. GitLab SMTP configuration
Parameter label (name) Default value Description

Outgoing SMTP domain (SMTPDomain)

Disabled

Choose "CreateNew" if you want to create Amazon Simple Email Service domain to send out GitLab notification email messages.

SMTP server host name (SMTPHostName)

Blank string

If you chose to use existing SMTP domain above, provide SMTP server host name.

SMTP server port (SMTPPort)

587

If you chose to use existing SMTP domain above, provide SMTP server port.

SMTP server user name (SMTPUsername)

Blank string

If you chose to use existing SMTP domain above, provide SMTP server username.

SMTP server password (SMTPPassword)

Blank string

If you chose to use existing SMTP domain above, provide SMTP server password.

Table 12. GitLab Helm chart configuration
Parameter label (name) Default value Description

Kubernetes namespace creation mode (HelmChartNamespaceCreate)

CreateNew

Create new or use existing Kubernetes namespace for GitLab chart deployment.

Kubernetes namespace for GitLab Helm chart (HelmChartNamespace)

gitlab

Kubernetes namespace to deploy GitLab chart to.

GitLab Helm chart name (HelmChartName)

gitlab

Name of Helm GitLab deployment.

GitLab Helm chart version (HelmChartVersion)

4.12.3

Version of the GitLab Helm chart for deployment. See https://docs.gitlab.com/charts/installation/version_mappings.html.

GitLab application version (GitLabVersion)

13.12.3

Version of GitLab application - must correspond to helm chart version above. See https://docs.gitlab.com/charts/installation/version_mappings.html.

Table 13. GitLab Git repository storage configuration
Parameter label (name) Default value Description

Number of Gitaly replicas (NumberOfGitalyReplicas)

3

Number of Gitaly replicas to deploy in GitLab cluster. The replicas will be distributed across Availability Zones selected.

Gitaly instance type (GitalyInstanceType)

t3.medium

Gitaly EC2 instance type. Select an instance from this list: https://aws.amazon.com/ec2/instance-types/

Gitaly volume capacity (GitalyVolumeSize)

50

Capacity of the EBS volume used by Gitaly replicas (Git repository storage), in GB. Note that this storage is used for Git repositories only. All other GitLab storage types are stored in S3 buckets.

Number of Praefect replicas (NumberOfPraefectReplicas)

3

Praefect coordinates Gitaly cluster replication. Number of Praefect replicas to deploy in the GitLab cluster. The replicas will be distributed across the selected Availability Zones. Three are required because Praefect replicas vote for data consistency, so an odd number is needed.

Praefect instance type (PraefectInstanceType)

t3.medium

Praefect EC2 instance type. Select an instance from this list: https://aws.amazon.com/ec2/instance-types/

Table 14. GitLab object storage configuration
Parameter label (name) Default value Description

Object storage encryption algorithm (ObjectStorageSSEAlgorithm)

AES256

GitLab will be configured to use object storage for everything that is capable of using it (artifacts, packages, lfs, etc.). Encryption algorithm for GitLab object storage artifacts.

KMS key ID (ObjectStorageKMSKeyID)

none

Provide KMS key ID to be used for encryption if KMS encryption is selected.

Object storage backup schedule (BackupSchedule)

0 1 * * *

Cron expression used to run GitLab backup jobs (default is daily at 1 AM).

Object storage backup volume capacity (BackupVolumeSize)

10

Capacity of the EBS volume used for GitLab backups, in GB.

Table 15. GitLab Runner configuration
Parameter label (name) Default value Description

Configure GitLab Runner (ConfigureRunner)

No

Choose "Yes" to enable deployment of test GitLab Runner inside EKS cluster.

GitLab Runner Helm chart name (RunnerChartName)

runner

Name of Helm GitLab Runner deployment.

GitLab Runner chart version (RunnerChartVersion)

0.27.0

Version of GitLab Runner Helm chart for deployment.

Default runner image (RunnerImage)

ubuntu:20.04

Default GitLab Runner image.

Max number of concurrent jobs (MaximumConcurrentJobs)

10

The maximum number of concurrent jobs.

Use privileged mode (PrivilegedMode)

No

Choose "Yes" to run all containers with the privileged flag enabled. This will allow the docker:dind image to run if you need to run Docker. For test purposes only. Not recommended for production environments.

Table 16. AWS Quick Start configuration
Parameter label (name) Default value Description

Quick Start S3 bucket name (QSS3BucketName)

aws-quickstart

S3 bucket name for the Quick Start assets. This string can include numbers, lowercase letters, uppercase letters, and hyphens (-). It cannot start or end with a hyphen (-).

Quick Start S3 key prefix (QSS3KeyPrefix)

quickstart-eks-gitlab/

S3 key prefix for the Quick Start assets. Quick Start key prefix can include numbers, lowercase letters, uppercase letters, hyphens (-), and forward slash (/).

Quick Start S3 bucket region (QSS3BucketRegion)

us-east-1

The AWS Region where the Quick Start S3 bucket (QSS3BucketName) is hosted. When using your own bucket, you must specify this value.

Per-account shared resources (PerAccountSharedResources)

AutoDetect

Choose "No" if you already deployed another EKS Quick Start stack in your AWS account.

Per-Region shared resources (PerRegionSharedResources)

AutoDetect

Choose "No" if you already deployed another EKS Quick Start stack in your Region.

Launch into an existing VPC

Table 17. Basic configuration
Parameter label (name) Default value Description

Environment name (EnvironmentName)

Blank string

Name of the GitLab environment (leave empty to use dynamically generated name). Environment name is used to generate unique resource names. Multiple GitLab environments with different names can be deployed in the same region.

SSH key pair name (KeyPairName)

Requires input

Name of an existing public/private key pair, which allows you to securely connect to your instance after it launches. If the selection list is blank, you must create a keypair in the target region, refresh this form and select it.

Table 18. VPC network configuration
Parameter label (name) Default value Description

VPC ID (VPCID)

Requires input

ID of your existing VPC (e.g., vpc-0343606e).

VPC CIDR (VPCCIDR)

Requires input

CIDR block for the VPC.

Private subnet 1 ID (PrivateSubnet1ID)

Requires input

ID of the private subnet in Availability Zone 1 of your existing VPC (e.g., subnet-fe9a8b32).

Private subnet 2 ID (PrivateSubnet2ID)

Requires input

ID of the private subnet in Availability Zone 2 of your existing VPC (e.g., subnet-be8b01ea).

Private subnet 3 ID (PrivateSubnet3ID)

Requires input

ID of the private subnet in Availability Zone 3 of your existing VPC (e.g., subnet-abd39039).

Public subnet 1 ID (PublicSubnet1ID)

Requires input

ID of the public subnet in Availability Zone 1 of your existing VPC (e.g., subnet-a0246ccd).

Public subnet 2 ID (PublicSubnet2ID)

Requires input

ID of the public subnet in Availability Zone 2 of your existing VPC (e.g., subnet-b1236eea).

Public subnet 3 ID (PublicSubnet3ID)

Requires input

ID of the public subnet in Availability Zone 3 of your existing VPC (e.g., subnet-c3456aba).

Table 19. Bastion host configuration
Parameter label (name) Default value Description

Provision a bastion host (ProvisionBastionHost)

Disabled

Choose "Enabled" to provision a bastion host to access EKS cluster. Bastion host includes kubectl and helm tools already configured for EKS cluster access. Bastion host is recommended for EKS cluster management in production environments as a more secure option than a public EKS API endpoint. Select a larger instance size to do performance testing from the bastion host using GitLab Performance Tool (gpt).

Bastion host instance type (BastionInstanceType)

t3.micro

Instance type for the bastion host.

Allowed bastion external access CIDR (RemoteAccessCIDR)

Requires input

CIDR IP range that is permitted to access the bastions. We recommend that you set this value to a trusted IP range.

Table 20. Amazon EKS cluster configuration
Parameter label (name) Default value Description

EKS public access endpoint (EKSPublicAccessEndpoint)

Disabled

Configure access to the Kubernetes API server endpoint from outside of your VPC. Public API endpoint is not recommended for production environments. Use bastion host for cluster management instead.

Additional EKS admin ARN (IAM user) (AdditionalEKSAdminUserArn)

Blank string

(Optional) IAM user Amazon Resource Name (ARN) to be granted administrative access to the EKS cluster.

Additional EKS admin ARN (IAM role) (AdditionalEKSAdminRoleArn)

Blank string

(Optional) IAM role Amazon Resource Name (ARN) to be granted administrative access to the EKS cluster.

EKS node instance type (NodeInstanceType)

t3.large

EC2 instance type of EKS nodes.

EKS node volume capacity (NodeVolumeSize)

100

Capacity of the EBS volume used by EKS nodes, in GB. Note that in the default configuration no GitLab data is stored on EBS volumes.

Number of EKS nodes (NumberOfNodes)

3

Number of Amazon EKS node instances. Should have a minimum value matching the number of Availability Zones configured for GitLab deployment.

Maximum number of EKS nodes (MaxNumberOfNodes)

6

Maximum number of Amazon EKS node instances. The default is six.

Configure Container Insights (ConfigureContainerInsights)

Yes

Choose "No" to disable integration with CloudWatch Container Insights. Container Insights ensures pod performance metrics and logs are collected to CloudWatch - including GitLab application logs.

Provision and integrate Grafana (ConfigureGrafana)

No

Choose "Yes" to provision Grafana to EKS and enable GitLab integration with it.

Table 21. Amazon Aurora Postgres configuration
Parameter label (name) Default value Description

GitLab PostgreSQL version (DBEngineVersion)

12.4

Select the database engine version. The version must be supported by your target version of GitLab (https://docs.gitlab.com/omnibus/settings/database.html#gitlab-140-and-later) and by the Aurora RDS instance class you choose (https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Concepts.DBInstanceClass.html).

GitLab database instance class (DBInstanceClass)

db.r5.large

The name of the compute and memory capacity class of the database instance. This Quickstart deploys Multi-AZ Postgres replicas in different Availability Zones. The instance class is applied to both primary and standby instances. The instance class you choose must support the needed PostgreSQL version: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Concepts.DBInstanceClass.html

GitLab database name (DBName)

gitlab

Name of the GitLab Postgres database.

Praefect database name (DBPraefectName)

praefect

Name of the Praefect Postgres database.

Database admin username (DBUserName)

gitlab

The database admin account username.

Database port (DBPort)

5432

The port on which the instance listens for connections.

Table 22. GitLab cache configuration
Parameter label (name) Default value Description

Where to provision Redis (CacheMode)

External

'BuiltIn' installs Redis in the EKS cluster; 'External' provisions Amazon ElastiCache for Redis. Running Redis inside the Kubernetes cluster is not compliant with the GitLab reference architecture and is intended only for training or testing setups.

Number of cache replicas (CacheNodes)

2

Provide the number of cache replicas (applies to both BuiltIn and External cache modes). For BuiltIn mode, this is the number of replica Redis pods (in addition to the primary pod). For External mode, this is the total number of nodes used for Redis. This should match the number of Availability Zones you are configuring.

Cache node type (CacheNodeType)

cache.t3.medium

If you chose External cache mode above, provide the cache node type. For acceptable instance types, see https://aws.amazon.com/elasticache/pricing/.

Table 23. GitLab infrastructure configuration
Parameter label (name) Default value Description

GitLab DNS name (DomainName)

Requires input

The domain name for the GitLab server.

Create Route 53 hosted zone (CreateHostedZone)

No

Choose "Yes" to create an Amazon Route 53 hosted zone to manage DNS for the GitLab domain.

Request AWS Certificate Manager SSL certificate (CreateSslCertificate)

No

Choose "Yes" to request an AWS Certificate Manager SSL certificate for the GitLab domain.

Table 24. GitLab SMTP configuration
Parameter label (name) Default value Description

Outgoing SMTP domain (SMTPDomain)

Disabled

Choose "CreateNew" to create an Amazon Simple Email Service (Amazon SES) domain for sending GitLab notification email messages.

SMTP server host name (SMTPHostName)

Blank string

If you chose to use an existing SMTP domain above, provide the SMTP server host name.

SMTP server port (SMTPPort)

587

If you chose to use an existing SMTP domain above, provide the SMTP server port.

SMTP server user name (SMTPUsername)

Blank string

If you chose to use an existing SMTP domain above, provide the SMTP server username.

SMTP server password (SMTPPassword)

Blank string

If you chose to use an existing SMTP domain above, provide the SMTP server password.
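
As an illustration of how the SMTP parameters above fit together, here is a hedged Python sketch that assembles a notification message from hypothetical values. The actual SMTP wiring is performed by the GitLab Helm chart; this sketch only demonstrates the settings and sends no mail.

```python
# Illustrative only: how SMTPHostName/SMTPPort/SMTPUsername might be
# consumed by a mail client. All concrete values here are hypothetical.
from email.message import EmailMessage

smtp_settings = {
    "host": "email-smtp.us-east-1.amazonaws.com",  # hypothetical SES endpoint
    "port": 587,                                   # matches the SMTPPort default above
    "username": "gitlab-notifications",            # hypothetical username
}

msg = EmailMessage()
msg["From"] = "gitlab@example.com"
msg["To"] = "dev@example.com"
msg["Subject"] = "GitLab notification"
msg.set_content("Pipeline finished.")

# With real credentials you would connect with STARTTLS on port 587:
# import smtplib
# with smtplib.SMTP(smtp_settings["host"], smtp_settings["port"]) as s:
#     s.starttls()
#     s.login(smtp_settings["username"], password)
#     s.send_message(msg)
print(msg["Subject"])
```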

Table 25. GitLab Helm chart configuration
Parameter label (name) Default value Description

Kubernetes namespace creation mode (HelmChartNamespaceCreate)

CreateNew

Create new or use existing Kubernetes namespace for GitLab chart deployment.

Kubernetes namespace for GitLab Helm chart (HelmChartNamespace)

gitlab

Kubernetes namespace to deploy GitLab chart to.

GitLab Helm chart name (HelmChartName)

gitlab

Name of the GitLab Helm deployment.

GitLab Helm chart version (HelmChartVersion)

4.12.3

Version of the GitLab Helm chart to deploy. See https://docs.gitlab.com/charts/installation/version_mappings.html.

GitLab application version (GitLabVersion)

13.12.3

Version of the GitLab application; must correspond to the Helm chart version above. See https://docs.gitlab.com/charts/installation/version_mappings.html.

Table 26. GitLab Git repository storage configuration
Parameter label (name) Default value Description

Number of Gitaly replicas (NumberOfGitalyReplicas)

3

Number of Gitaly replicas to deploy in the GitLab cluster. The replicas are distributed across the selected Availability Zones.

Gitaly instance type (GitalyInstanceType)

t3.medium

Gitaly EC2 instance type. Select an instance from this list: https://aws.amazon.com/ec2/instance-types/

Gitaly volume capacity (GitalyVolumeSize)

50

Capacity of the EBS volume used by each Gitaly replica (Git repository storage), in GB. Note that this storage is used for Git repositories only; all other GitLab storage types use S3 buckets.

Number of Praefect replicas (NumberOfPraefectReplicas)

3

Praefect coordinates Gitaly cluster replication. This is the number of Praefect replicas to deploy in the GitLab cluster; the replicas are distributed across the selected Availability Zones. Three replicas are required because Praefect replicas vote for data consistency, so an odd number is needed.
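
The voting requirement can be illustrated with simple quorum arithmetic. The sketch below is not Praefect's implementation, just the majority math that motivates an odd replica count:

```python
def quorum(replicas: int) -> int:
    """Smallest majority of replicas that must agree on a write."""
    return replicas // 2 + 1

def tolerated_failures(replicas: int) -> int:
    """Replicas that can fail while a majority remains available."""
    return replicas - quorum(replicas)

for n in (2, 3, 4, 5):
    print(n, quorum(n), tolerated_failures(n))
# With 3 replicas the majority is 2, so one failure is tolerated; a 4th
# replica raises the majority to 3 but still tolerates only one failure,
# which is why odd counts are the efficient choice.
```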

Praefect instance type (PraefectInstanceType)

t3.medium

Praefect EC2 instance type. Select an instance from this list: https://aws.amazon.com/ec2/instance-types/

Table 27. GitLab object storage configuration
Parameter label (name) Default value Description

Object storage encryption algorithm (ObjectStorageSSEAlgorithm)

AES256

GitLab is configured to use object storage for everything capable of using it (artifacts, packages, LFS objects, and so on). This is the encryption algorithm for GitLab object storage artifacts.

KMS key ID (ObjectStorageKMSKeyID)

none

Provide the KMS key ID to be used for encryption if KMS encryption is selected above.
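
To illustrate how the two parameters above map onto S3 requests, here is a hypothetical Python helper that selects the server-side-encryption request headers. The header names come from the Amazon S3 API; the helper itself is not part of the Quick Start templates.

```python
# Sketch of the S3 server-side-encryption request headers implied by
# ObjectStorageSSEAlgorithm and ObjectStorageKMSKeyID. Illustrative only.

def sse_headers(algorithm, kms_key_id=None):
    """Build S3 SSE request headers for 'AES256' or 'aws:kms'."""
    headers = {"x-amz-server-side-encryption": algorithm}
    if algorithm == "aws:kms" and kms_key_id:
        headers["x-amz-server-side-encryption-aws-kms-key-id"] = kms_key_id
    return headers

print(sse_headers("AES256"))
print(sse_headers("aws:kms", "arn:aws:kms:us-east-1:111122223333:key/example"))
```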

Object storage backup schedule (BackupSchedule)

0 1 * * *

Cron expression used to run GitLab backup jobs. The default runs daily at 1:00 AM.
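
For readers unfamiliar with cron syntax, the sketch below breaks the default expression into its five standard fields (minute, hour, day of month, month, day of week). The helper is purely illustrative:

```python
# Split a standard five-field cron expression into named fields.
# '0 1 * * *' means minute 0 of hour 1, every day -> daily at 01:00.

def describe_cron(expr: str) -> dict:
    fields = ("minute", "hour", "day_of_month", "month", "day_of_week")
    return dict(zip(fields, expr.split()))

schedule = describe_cron("0 1 * * *")
print(schedule)
```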

Object storage backup volume capacity (BackupVolumeSize)

10

Capacity of the EBS volume used for GitLab backups, in GB.

Table 28. GitLab Runner configuration
Parameter label (name) Default value Description

Configure GitLab Runner (ConfigureRunner)

No

Choose "Yes" to deploy a test GitLab Runner inside the EKS cluster.

GitLab Runner Helm chart name (RunnerChartName)

runner

Name of the GitLab Runner Helm deployment.

GitLab Runner chart version (RunnerChartVersion)

0.27.0

Version of GitLab Runner Helm chart for deployment.

Default runner image (RunnerImage)

ubuntu:20.04

Default GitLab Runner image.

Max number of concurrent jobs (MaximumConcurrentJobs)

10

The maximum number of concurrent jobs.

Use privileged mode (PrivilegedMode)

No

Choose "Yes" to run all containers with the privileged flag enabled. This allows the docker:dind image to run if you need Docker. For test purposes only; not recommended for production environments.

Table 29. AWS Quick Start configuration
Parameter label (name) Default value Description

Quick Start S3 bucket name (QSS3BucketName)

aws-quickstart

S3 bucket name for the Quick Start assets. This string can include numbers, lowercase letters, uppercase letters, and hyphens (-). It cannot start or end with a hyphen (-).

Quick Start S3 key prefix (QSS3KeyPrefix)

quickstart-eks-gitlab/

S3 key prefix for the Quick Start assets. The key prefix can include numbers, lowercase letters, uppercase letters, hyphens (-), and forward slashes (/).
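
The naming constraints for these two parameters can be expressed as simple patterns. The validators below are illustrative only; they mirror the constraints stated above, not the exact CloudFormation AllowedPattern values used by the templates.

```python
import re

# Hypothetical validators for QSS3BucketName and QSS3KeyPrefix,
# following the constraints described in this table.
BUCKET_RE = re.compile(r"^[0-9a-zA-Z]+([0-9a-zA-Z-]*[0-9a-zA-Z])*$")
PREFIX_RE = re.compile(r"^[0-9a-zA-Z-/]*$")

def valid_bucket_name(name):
    """Numbers, letters, and hyphens; must not start or end with a hyphen."""
    return bool(BUCKET_RE.match(name))

def valid_key_prefix(prefix):
    """Numbers, letters, hyphens, and forward slashes."""
    return bool(PREFIX_RE.match(prefix))

print(valid_bucket_name("aws-quickstart"))
print(valid_bucket_name("-bad-start"))
print(valid_key_prefix("quickstart-eks-gitlab/"))
```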

Quick Start S3 bucket region (QSS3BucketRegion)

us-east-1

The AWS Region where the Quick Start S3 bucket (QSS3BucketName) is hosted. When using your own bucket, you must specify this value.

Per-account shared resources (PerAccountSharedResources)

AutoDetect

Choose "No" if you already deployed another EKS Quick Start stack in your AWS account.

Per-Region shared resources (PerRegionSharedResources)

AutoDetect

Choose "No" if you already deployed another EKS Quick Start stack in your Region.

Send us feedback

To post feedback, submit feature ideas, or report bugs, use the Issues section of the GitHub repository for this Quick Start. To submit code, see the Quick Start Contributor’s Guide.

Quick Start reference deployments

GitHub repository

Visit our GitHub repository to download the templates and scripts for this Quick Start, to post your comments, and to share your customizations with others.


Notices

This document is provided for informational purposes only. It represents AWS’s current product offerings and practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services, each of which is provided “as is” without warranty of any kind, whether expressed or implied. This document does not create any warranties, representations, contractual commitments, conditions, or assurances from AWS, its affiliates, suppliers, or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

The software included with this paper is licensed under the Apache License, version 2.0 (the "License"). You may not use this file except in compliance with the License. A copy of the License is located at http://aws.amazon.com/apache2.0/ or in the accompanying "license" file. This code is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either expressed or implied. See the License for specific language governing permissions and limitations.