# Truefoundry Google Cloud Cluster Classic Module
## Requirements

| Name | Version |
|---|---|
| terraform | ~> 1.4 |
| google | 6.47 |
| google-beta | 6.47 |
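The version constraints above can be pinned in a root `terraform` block. A minimal sketch follows; the registry `source` addresses are the standard HashiCorp ones and are an assumption, since the module does not list them:

```hcl
terraform {
  required_version = "~> 1.4"

  required_providers {
    # Source addresses assumed to be the standard HashiCorp registry paths.
    google = {
      source  = "hashicorp/google"
      version = "6.47"
    }
    google-beta = {
      source  = "hashicorp/google-beta"
      version = "6.47"
    }
  }
}
```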
## Providers

| Name | Version |
|---|---|
| google | 6.47 |
| google-beta | 6.47 |
## Modules

No modules.
## Resources

| Name | Type |
|---|---|
| google-beta_google_container_cluster.cluster | resource |
| google_compute_firewall.fix_webhooks | resource |
| google_container_node_pool.control_plane_pool | resource |
| google_container_node_pool.critical_pool | resource |
| google_container_cluster.existing_cluster | data source |
## Inputs

| Name | Description | Type | Default | Required |
|---|---|---|---|:---:|
| allowed_ip_ranges | Allowed IP ranges to connect to the master endpoint | `list(string)` | `[` | no |
| cluster_endpoint_public_access | Enable public access to the cluster endpoint. Set to true to allow access from public CIDRs, which is required for TrueFoundry modules running locally. | `bool` | `true` | no |
| cluster_master_ipv4_cidr_block | IPv4 CIDR block for the master nodes | `string` | n/a | yes |
| cluster_name | Name of the cluster. If use_existing_cluster is enabled, cluster_name is used to fetch details of the existing cluster | `string` | n/a | yes |
| cluster_nap_node_config | Configuration for the NAP (Node Auto Provisioning) node pool. This includes:<br>- disk_size_gb: Size of the disk attached to each node (default: "300")<br>- disk_type: Type of disk attached to each node (pd-standard, pd-balanced, pd-ssd) (default: "pd-balanced")<br>- enable_secure_boot: Secure Boot helps ensure that the system only runs authentic software (default: true)<br>- enable_integrity_monitoring: Enables monitoring and attestation of the boot integrity (default: true)<br>- autoscaling_profile: Profile for autoscaling optimization (default: "OPTIMIZE_UTILIZATION")<br>- max_cpu: Maximum CPU cores allowed per node (default: 1024)<br>- max_memory: Maximum memory in MB allowed per node (default: 8172)<br>- auto_repair: Flag to enable auto repair for the nodes (default: true)<br>- auto_upgrade: Flag to enable auto upgrade for the nodes (default: true)<br>- max_surge: Maximum number of nodes that can be created beyond the current size during updates (default: 1)<br>- max_unavailable: Maximum number of nodes that can be unavailable during updates (default: 0)<br>See the GKE docs for more information: https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-provisioning | `object({` | `{}` | no |
| cluster_network_id | Existing VPC network ID for the cluster | `string` | n/a | yes |
| cluster_networking_mode | Networking mode for the cluster. Values can be VPC_NATIVE (recommended) or ROUTES. VPC_NATIVE is the default after google-beta 5.0.0 | `string` | `"VPC_NATIVE"` | no |
| cluster_node_locations | Availability zones for nodes - should match the region | `list(string)` | n/a | yes |
| cluster_secondary_range_name | VPC secondary range name for pods | `string` | `""` | no |
| cluster_subnet_id | Existing subnet ID for the cluster | `string` | n/a | yes |
| control_plane_enabled | Enable dedicated control plane nodes for the cluster | `bool` | `false` | no |
| control_plane_pool_config | Configuration for the control plane node pool | `object({` | `{}` | no |
| critical_pool_config | Configuration for the critical workloads node pool | `object({` | `{}` | no |
| default_node_pool_config | Configuration for the default node pool | `object({` | `{}` | no |
| deletion_protection | Enable deletion protection for the cluster | `bool` | `false` | no |
| enable_container_image_streaming | Enable/disable container image streaming | `bool` | `true` | no |
| enable_eol_maintenance_exclusion | Enable automatic End-of-Life (EOL) maintenance exclusion for the GKE cluster. When set to true (default), this automatically adds maintenance exclusions that prevent automatic minor version upgrades and node upgrades during the end-of-life period for the specified Kubernetes version. This helps maintain cluster stability by preventing automatic upgrades that could potentially cause issues during EOL periods. This exclusion is scoped to NO_MINOR_UPGRADES, which prevents control plane upgrades but allows node patch-level upgrades. The EOL maintenance exclusions are version-specific and include:<br>- Kubernetes 1.32: EOL from 2024-06-01 to 2026-04-11<br>- Kubernetes 1.33: EOL from 2024-06-01 to 2026-08-03<br>- Kubernetes 1.34: EOL from 2024-06-01 to 2026-10-01<br>When disabled (false), only user-defined maintenance exclusions from the maintenance_policy variable will be applied. This gives you full control over maintenance scheduling. For more information on GKE release schedules and EOL dates, see: https://cloud.google.com/kubernetes-engine/docs/release-schedule | `bool` | `true` | no |
| enable_gce_persistent_disk_csi_driver | Enable/disable GCE Persistent Disk CSI driver | `bool` | `true` | no |
| enable_gcp_filestore_csi_driver | Enable/disable GCP Filestore CSI driver | `bool` | `false` | no |
| enable_gcs_fuse_csi_driver | Enable/disable GCS Fuse CSI driver | `bool` | `false` | no |
| kubernetes_version | Kubernetes version for the GKE cluster | `string` | `"1.33"` | no |
| logging_config | Configuration for cluster logging components | `object({` | `{` | no |
| maintenance_recurring_window_policy | Recurring maintenance window for the GKE cluster. When set to true (default), this automatically adds a recurring maintenance window for the GKE cluster. This helps maintain cluster stability by preventing automatic upgrades that could potentially cause issues during EOL periods. The recurring maintenance window default is set to every Saturday and Sunday from 9:00 AM to 9:00 AM. When enable_eol_maintenance_exclusion is set to true, this window is used for patch upgrades. GKE may apply critical upgrades outside of this window (https://cloud.google.com/kubernetes-engine/docs/concepts/maintenance-windows-and-exclusions#security-patching). See https://cloud.google.com/kubernetes-engine/docs/how-to/maintenance-windows-and-exclusions#maintenance-window-existing_cluster for more information. | `object({` | `{` | no |
| max_pods_per_node | Maximum number of pods per node in this cluster | `string` | `"32"` | no |
| network_tags | A list of network tags to add to all instances | `list(string)` | `[]` | no |
| oauth_scopes | OAuth scopes to attach to the cluster | `list(string)` | `[` | no |
| project | GCP project ID | `string` | n/a | yes |
| region | GCP region for the cluster | `string` | n/a | yes |
| services_secondary_range_name | VPC secondary range name for services | `string` | `""` | no |
| shared_vpc | Flag to enable shared VPC for the cluster | `bool` | `false` | no |
| tags | A map of tags to add to all resources. Tags are key-value pairs used for grouping and filtering | `map(string)` | `{}` | no |
| use_existing_cluster | Flag to use an existing GKE cluster instead of creating a new one | `bool` | `false` | no |
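Putting the required inputs together (the rows marked "yes" above), a minimal invocation might look like the following sketch. All values are placeholders, and the `source` path is an assumption; adjust both to your environment:

```hcl
module "gke_cluster" {
  # Hypothetical source path; point this at wherever the module lives.
  source = "./truefoundry-gcp-cluster-classic"

  # Required inputs
  project                        = "my-gcp-project"      # placeholder project ID
  region                         = "us-central1"         # placeholder region
  cluster_name                   = "truefoundry-cluster" # placeholder name
  cluster_master_ipv4_cidr_block = "172.16.0.0/28"       # placeholder /28 CIDR
  cluster_network_id             = "projects/my-gcp-project/global/networks/my-vpc"                        # placeholder
  cluster_subnet_id              = "projects/my-gcp-project/regions/us-central1/subnetworks/my-subnet"     # placeholder
  cluster_node_locations         = ["us-central1-a", "us-central1-b"] # should match the region

  # Optional overrides (defaults shown in the table above)
  kubernetes_version  = "1.33"
  deletion_protection = false
}
```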
## Outputs

| Name | Description |
|---|---|
| cluster_endpoint | Endpoint for your Kubernetes API server |
| cluster_id | The ID of the GKE cluster |
| cluster_master_version | Master version for the cluster |
| cluster_name | The name of the GKE cluster |
| cluster_secondary_range_name | Cluster secondary range name for pod IPs |
| services_secondary_range_name | Cluster secondary range name for service IPs |