From bc36eb69d0fefa0648710e9f96b5f8763594f366 Mon Sep 17 00:00:00 2001 From: Ahmet Soormally Date: Sun, 30 Nov 2025 09:20:27 +0000 Subject: [PATCH 1/8] docs(connect): add client documentation and operations mode --- docs/cli/grpc-service/generate.mdx | 26 +++- docs/connect/client.mdx | 210 +++++++++++++++++++++++++++++ docs/connect/overview.mdx | 21 ++- docs/docs.json | 1 + 4 files changed, 249 insertions(+), 9 deletions(-) create mode 100644 docs/connect/client.mdx diff --git a/docs/cli/grpc-service/generate.mdx b/docs/cli/grpc-service/generate.mdx index 9a782a0c..07f45b62 100644 --- a/docs/cli/grpc-service/generate.mdx +++ b/docs/cli/grpc-service/generate.mdx @@ -10,6 +10,10 @@ description: "Generate a protobuf definition for a gRPC service from a GraphQL s The `generate` command generates a protobuf definition and mapping file for a gRPC service from a GraphQL schema, which can be used to implement a gRPC service and can be used for the composition. +It supports two modes: +1. **Schema Mode**: Generates a protobuf definition that mirrors the GraphQL schema. Used for implementing Subgraphs in gRPC (Connect Backend). +2. **Operations Mode**: Generates a protobuf definition based on a set of GraphQL operations. Used for generating client SDKs (Connect Client). + ## Usage ```bash @@ -30,6 +34,11 @@ wgc grpc-service generate [options] [service-name] | `-o, --output ` | The output directory for the protobuf schema | `.` | | `-p, --package-name ` | The name of the proto package | `service.v1` | | `-g, --go-package ` | Adds an `option go_package` to the proto file | None | +| `-w, --with-operations ` | Path to directory containing GraphQL operation files. Enables **Operations Mode**. | None | +| `-l, --proto-lock ` | Path to existing proto lock file. | `/service.proto.lock.json` | +| `--custom-scalar-mapping ` | Custom scalar type mappings as JSON string. 
Example: `{"DateTime":"google.protobuf.Timestamp"}` | None | +| `--custom-scalar-mapping-file ` | Path to JSON file containing custom scalar type mappings. | None | +| `--max-depth ` | Maximum recursion depth for processing nested selections. | `50` | ## Description @@ -37,12 +46,23 @@ This command generates a protobuf definition for a gRPC service from a GraphQL s ## Examples -### Generate a protobuf definition for a gRPC service from a GraphQL schema +### Generate a protobuf definition for a gRPC service from a GraphQL schema (Schema Mode) ```bash wgc grpc-service generate -i ./schema.graphql -o ./service MyService ``` +### Generate a protobuf definition from operations (Operations Mode) + +```bash +wgc grpc-service generate \ + -i ./schema.graphql \ + -o ./gen \ + --with-operations ./operations \ + --package-name my.service.v1 \ + MyService +``` + ### Define a custom package name ```bash @@ -60,10 +80,10 @@ wgc grpc-service generate -i ./schema.graphql -o ./service MyService --go-packag The command generates multiple files in the output directory: - `service.proto`: The protobuf definition for the gRPC service -- `service.mapping.json`: The mapping file for the gRPC service +- `service.mapping.json`: The mapping file for the gRPC service (Schema Mode only) - `service.proto.lock.json`: The lock file for the protobuf definition -The generated protobuf definition can be used to implement a gRPC service in any language that supports protobuf. +The generated protobuf definition can be used to implement a gRPC service in any language that supports protobuf, or to generate client SDKs. The mapping and the protobuf definition is needed for the composition part. 
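The two custom scalar options above accept the same mapping shape, inline or from a file. As a sketch (the file name `scalar-mappings.json` and the `JSON` scalar entry are illustrative; the `DateTime` mapping is the one from the option table above):

```bash
# Illustrative: map custom GraphQL scalars to protobuf types via a mapping file.
cat > scalar-mappings.json << 'EOF'
{
  "DateTime": "google.protobuf.Timestamp",
  "JSON": "google.protobuf.Struct"
}
EOF

wgc grpc-service generate \
  -i ./schema.graphql \
  -o ./service \
  --custom-scalar-mapping-file ./scalar-mappings.json \
  MyService
```

For a one-off mapping, passing the JSON string directly via `--custom-scalar-mapping` avoids the extra file.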
diff --git a/docs/connect/client.mdx b/docs/connect/client.mdx new file mode 100644 index 00000000..144b284f --- /dev/null +++ b/docs/connect/client.mdx @@ -0,0 +1,210 @@ +--- +title: "Connect Client" +description: "Generate type-safe clients and OpenAPI specs from GraphQL operations" +icon: "code" +--- + +# Connect Client + + + **Alpha Feature**: The Connect Client capability is currently in alpha. APIs and functionality may change as we gather feedback. + + +Connect Client enables you to generate type-safe gRPC/Connect clients and OpenAPI specifications directly from your GraphQL operations. This allows you to consume your Federated (or monolithic) Graph using standard gRPC tooling in any language, or expose REST APIs via OpenAPI without writing manual adapters. + +## Overview + +While **Connect Backend** (gRPC Subgraphs) focuses on implementing subgraphs using gRPC, **Connect Client** focuses on the consumer side. It allows you to define GraphQL operations (Queries, Mutations and soon Subscriptions) and compile them into a Protobuf service definition. + +The Cosmo Router acts as a bridge. It serves your generated operations via the Connect protocol, executes them against your Federated Graph, and maps the GraphQL response back to typed Protobuf messages. + +## Workflow + +The typical workflow involves defining your operations, configuring the Router to serve them, and then distributing the Protobuf or OpenAPI definition to consumers who generate their client SDKs. 
+ +```mermaid +sequenceDiagram + autonumber + participant Provider as API Provider + participant CLI as Cosmo CLI (wgc) + participant Consumer as API Consumer + participant Client as Client SDK + participant Router as Cosmo Router + participant Graph as Federated Graph + + Note over Provider, Router: Setup & Configuration + Provider->>Provider: Define GraphQL Operations (.graphql) + Provider->>CLI: Generate Protobuf (wgc grpc-service generate) + CLI->>Provider: service.proto + lock file + Provider->>Router: Configure & Start Router with operations + + Note over Provider, Consumer: Distribution + Provider->>Consumer: Distribute service.proto / OpenAPI spec + + Note over Consumer, Client: Client Development + Consumer->>Consumer: Generate Client SDK (buf/protoc) + Consumer->>Client: Integrate SDK into App + + Note over Client, Graph: Runtime + Client->>Router: Send RPC Request (Connect/gRPC) + Router->>Graph: Execute GraphQL Operation + Graph-->>Router: GraphQL Response + Router-->>Client: Protobuf Response + + Note over Provider, Router: Observe + Router->>Provider: OTEL Metrics / Traces (GraphQL & RPC) +``` + +## Usage Example + +### 1. Define GraphQL Operations + +Create a directory for your operations, e.g., `operations/`: + +```graphql operations/GetEmployee.graphql +query GetEmployee($id: ID!) { + employee(id: $id) { + id + details { + forename + surname + } + } +} +``` + +### 2. Generate Protobuf + +Run the `wgc grpc-service generate` command with the `--with-operations` flag. You must also provide the schema SDL to validate the operations. + + + Each collection of operations represents a distinct Protobuf service. You can organize your operations into different directories (packages) to create multiple services, giving you the flexibility to expose specific subsets of your graph to different consumers or applications. + + + + It is recommended to output the generated proto file to the same directory as your operations to keep them together. 
+ + +```bash +wgc grpc-service generate \ + --input schema.graphql \ + --output ./operations \ + --with-operations ./operations \ + --package-name "myorg.employee.v1" \ + MyService +``` + +This command generates a `service.proto` file and a `service.proto.lock.json` file in the `./operations` directory. + +### 3. Configure and Start Router + +Enable the ConnectRPC server in your `config.yaml` and point the `services` provider to the directory containing your generated `service.proto`. + +```yaml config.yaml +# ConnectRPC configuration +connect_rpc: + enabled: true + server: + listen_addr: "0.0.0.0:8081" + services_provider_id: "fs-services" + +# Storage providers for services directory +storage_providers: + file_system: + - id: "fs-services" + # Path to the directory containing your generated service.proto and operations + path: "./operations" +``` + +Start the router. It is now ready to accept requests for the operations defined in `service.proto`. + +### 4. Generate Client SDK + +Use [buf](https://buf.build/) or `protoc` to generate the client code for your application. + +Example `buf.gen.yaml` for Go: + +```yaml buf.gen.yaml +version: v1 +plugins: + - plugin: buf.build/connectrpc/go + out: gen/go + opt: paths=source_relative + - plugin: buf.build/protocolbuffers/go + out: gen/go + opt: paths=source_relative +``` + +Run the generation: + +```bash +buf generate operations/service.proto +``` + +### 5. Use the Client + +You can now use the generated client to call your GraphQL API via the Router. The Router acts as the server implementation for your generated service. 
+ +```go +package main + +import ( + "context" + "net/http" + "log" + + "connectrpc.com/connect" + employeev1 "example/gen/go/myorg/employee/v1" + "example/gen/go/myorg/employee/v1/employeev1connect" +) + +func main() { + // Point to your Cosmo Router's ConnectRPC listener + client := employeev1connect.NewMyServiceClient( + http.DefaultClient, + "http://localhost:8081", + ) + + req := connect.NewRequest(&employeev1.GetEmployeeRequest{ + Id: "1", + }) + + resp, err := client.GetEmployee(context.Background(), req) + if err != nil { + log.Fatal(err) + } + + log.Printf("Employee: %s %s", + resp.Msg.Employee.Details.Forename, + resp.Msg.Employee.Details.Surname, + ) +} +``` + +## Observability + +The Cosmo Router provides built-in [observability features](/router/metrics-and-monitoring) that work seamlessly with Connect Client. Because the Router translates RPC calls into GraphQL operations, you get detailed metrics and tracing for both layers. + +- **GraphQL Metrics**: Track the performance, error rates, and usage of your underlying GraphQL operations (`GetEmployee`, etc.). +- **Request Tracing**: Trace the entire flow from the incoming RPC request, through the GraphQL engine, to your subgraphs and back. +- **Standard Protocols**: Export data using OpenTelemetry (OTLP) or Prometheus to your existing monitoring stack (Grafana, Datadog, Cosmo Cloud, etc.). + +Since the Router is aware of the operation mapping, it can attribute metrics correctly to the specific GraphQL operation being executed, giving you full visibility into your client's usage patterns. + +## Forward Compatibility & Lock Files + +When you generate your Protobuf definition, the CLI creates a `service.proto.lock.json` file. **You should commit this file to your version control system.** + +This lock file maintains a history of your operations and their field assignments. 
When you modify your operations (e.g., add new fields, reorder fields, or add new operations), the CLI uses the lock file to ensure: + +1. **Stable Field Numbers**: Existing fields retain their unique Protobuf field numbers, even if you reorder them in the GraphQL query. +2. **Safe Evolution**: You can safely evolve your client requirements without breaking existing clients. + +This mechanism allows you to iterate on your GraphQL operations—adding data requirements or new features—while maintaining binary compatibility for deployed clients. + +## Roadmap + +The following features are planned for future releases of Connect Client: + +1. **OpenAPI Generation**: Enhanced support for generating OpenAPI specifications including descriptions, summaries, deprecated fields, and tags. +2. **Subscription Support**: Ability to consume GraphQL subscriptions as gRPC streams, enabling real-time data updates over the Connect protocol. diff --git a/docs/connect/overview.mdx b/docs/connect/overview.mdx index a4e0a954..3b340815 100644 --- a/docs/connect/overview.mdx +++ b/docs/connect/overview.mdx @@ -12,13 +12,21 @@ One of the biggest downsides of Apollo Federation is that backend developers mus How does this work? You define an Apollo-compatible Subgraph Schema, compile it into a protobuf definition, and implement it in your favorite gRPC stack, such as Go, Java, C#, or many others. No specific framework or GraphQL knowledge is required. It is really just gRPC! 
-## Key Benefits +## Key Capabilities -* **All Cosmo platform benefits** — including breaking change detection, centralized telemetry, and governance out of the box -* **Federation without GraphQL servers** — backend teams implement gRPC contracts instead of GraphQL resolvers -* **Language flexibility** — leverage gRPC code generation across nearly all ecosystems, including those with poor GraphQL server libraries -* **Reduced migration effort** — wrap existing APIs (like REST or SOAP) without writing full subgraphs, lowering the cost of moving from monoliths to federation -* **Developer experience** — straightforward request/response semantics, with the router handling GraphQL query planning and batching +### Connect Backend (gRPC Subgraphs) + +Implement Federated Subgraphs using gRPC instead of GraphQL resolvers. +- **No GraphQL servers required**: Backend teams implement standard gRPC services. +- **Language flexibility**: Use any language with gRPC support (Go, Java, Rust, C#, etc.). +- **Reduced complexity**: The Router handles query planning; your service handles simple RPCs. + +### Connect Client (Typed Clients) + +Generate type-safe clients from your GraphQL operations. +- **Type Safety**: Generate SDKs for iOS, Android, Web, and Backend services. +- **OpenAPI**: Generate OpenAPI specs from your GraphQL operations. +- **Performance**: Use the efficient Connect/gRPC protocol to talk to your GraphQL API. ## Deployment Models @@ -58,6 +66,7 @@ Both approaches remove the need to build GraphQL servers while maintaining the b The following documentation explains how to build and deploy services and plugins: +- **[Connect Client](/connect/client)** — Generate type-safe clients and OpenAPI specs from GraphQL operations. 
- **[Router Plugins](/router/gRPC/plugins)** — Documentation for developing, configuring, and deploying plugins that run inside the router - **[gRPC Services](/router/gRPC/grpc-services)** — Documentation for the complete lifecycle of building, deploying, and managing independent gRPC services diff --git a/docs/docs.json b/docs/docs.json index 2abfffc7..14ffb877 100644 --- a/docs/docs.json +++ b/docs/docs.json @@ -68,6 +68,7 @@ "group": "Cosmo Connect", "pages": [ "connect/overview", + "connect/client", "connect/plugins", "connect/grpc-services" ] From e37f2c9250bbb03f13321ff3a6c77293bf4709c8 Mon Sep 17 00:00:00 2001 From: Ahmet Soormally Date: Sun, 30 Nov 2025 09:31:54 +0000 Subject: [PATCH 2/8] grpc subgraphs --- docs/connect/client.mdx | 2 +- docs/connect/overview.mdx | 21 +++++++++++++-------- 2 files changed, 14 insertions(+), 9 deletions(-) diff --git a/docs/connect/client.mdx b/docs/connect/client.mdx index 144b284f..34476b2f 100644 --- a/docs/connect/client.mdx +++ b/docs/connect/client.mdx @@ -14,7 +14,7 @@ Connect Client enables you to generate type-safe gRPC/Connect clients and OpenAP ## Overview -While **Connect Backend** (gRPC Subgraphs) focuses on implementing subgraphs using gRPC, **Connect Client** focuses on the consumer side. It allows you to define GraphQL operations (Queries, Mutations and soon Subscriptions) and compile them into a Protobuf service definition. +While **Connect gRPC Services** (gRPC Subgraphs) focuses on implementing subgraphs using gRPC, **Connect Client** focuses on the consumer side. It allows you to define GraphQL operations (Queries, Mutations and soon Subscriptions) and compile them into a Protobuf service definition. The Cosmo Router acts as a bridge. It serves your generated operations via the Connect protocol, executes them against your Federated Graph, and maps the GraphQL response back to typed Protobuf messages. 
diff --git a/docs/connect/overview.mdx b/docs/connect/overview.mdx index 3b340815..292186cc 100644 --- a/docs/connect/overview.mdx +++ b/docs/connect/overview.mdx @@ -14,7 +14,7 @@ How does this work? You define an Apollo-compatible Subgraph Schema, compile it ## Key Capabilities -### Connect Backend (gRPC Subgraphs) +### Connect gRPC Services (gRPC Subgraphs) Implement Federated Subgraphs using gRPC instead of GraphQL resolvers. - **No GraphQL servers required**: Backend teams implement standard gRPC services. @@ -32,15 +32,16 @@ Generate type-safe clients from your GraphQL operations. ```mermaid graph LR - client["Clients (Web / Mobile / Server)"] --> routerCore["Router Core"] + client["GraphQL Clients
(Web / Mobile)"] --> routerCore["Router Core"]
+    connectClient["Connect Clients<br/>(Generated SDKs)"] --> routerCore
 
     subgraph routerBox["Cosmo Router"]
         routerCore
-        plugin["Router Plugin<br/>(Cosmo Connect)"]
+        plugin["Router Plugin<br/>(Connect gRPC Service)"]
     end
 
     routerCore --> subA["GraphQL Subgraph"]
-    routerCore --> grpcSvc["gRPC Service<br/>(Cosmo Connect)"]
+    routerCore --> grpcSvc["gRPC Service<br/>(Connect gRPC Service)"]
     grpcSvc --> restA["REST / HTTP APIs"]
     grpcSvc --> soapA["Databases"]
@@ -51,16 +52,20 @@ graph LR
     %% Styling
     classDef grpcFill fill:#ea4899,stroke:#ea4899,stroke-width:1.5px,color:#ffffff;
     classDef pluginFill fill:#ea4899,stroke:#ea4899,stroke-width:1.5px,color:#ffffff;
+    classDef clientFill fill:#3b82f6,stroke:#3b82f6,stroke-width:1.5px,color:#ffffff;
+
     class grpcSvc grpcFill;
     class plugin pluginFill;
+    class connectClient clientFill;
 ```
 
-Cosmo Connect supports two ways to integrate gRPC into your federated graph:
+Cosmo Connect supports three main integration patterns:
 
-- **[Router Plugins](/connect/plugins)** — run as local processes managed by the router. Ideal for simple deployments where you want the lowest latency and do not need separate CI/CD or scaling.
-- **[gRPC Services](/connect/grpc-services)** — independent deployments in any language. Suitable when you need full lifecycle control, team ownership boundaries, and independent scaling.
+1. **[Connect Client](/connect/client)** — Generated clients that speak the Connect protocol to the Router.
+2. **[Router Plugins](/connect/plugins)** — gRPC services running as local processes managed by the router.
+3. **[gRPC Services](/connect/grpc-services)** — Independent gRPC services implementing subgraphs.
 
-Both approaches remove the need to build GraphQL servers while maintaining the benefits of federation.
+Both plugin and service approaches remove the need to build GraphQL servers while maintaining the benefits of federation. Connect Client removes the need to manually write GraphQL queries in your application code.
## Implementation Docs From 1e50b68806b3fceca6f99431c904be67650452ba Mon Sep 17 00:00:00 2001 From: Ahmet Soormally Date: Mon, 1 Dec 2025 11:25:12 +0000 Subject: [PATCH 3/8] adding more use-cases to the roadmap --- docs/connect/client.mdx | 2 ++ 1 file changed, 2 insertions(+) diff --git a/docs/connect/client.mdx b/docs/connect/client.mdx index 34476b2f..d96dde1c 100644 --- a/docs/connect/client.mdx +++ b/docs/connect/client.mdx @@ -208,3 +208,5 @@ The following features are planned for future releases of Connect Client: 1. **OpenAPI Generation**: Enhanced support for generating OpenAPI specifications including descriptions, summaries, deprecated fields, and tags. 2. **Subscription Support**: Ability to consume GraphQL subscriptions as gRPC streams, enabling real-time data updates over the Connect protocol. +3. **Multiple Root Fields**: Support for executable operations containing multiple root selection set fields, allowing more complex queries in a single operation. +4. **Field Aliases**: Support for GraphQL aliases to control the shape of the API surface, enabling customized field names in the generated Protobuf definitions. From b1c8ab86f39f150a5af26d5f80bfaa32c703d4b4 Mon Sep 17 00:00:00 2001 From: Ahmet Soormally Date: Mon, 1 Dec 2025 11:35:59 +0000 Subject: [PATCH 4/8] directory structure and file organisation --- docs/connect/client.mdx | 160 ++++++++++++++++++++++++++++++++++++---- 1 file changed, 146 insertions(+), 14 deletions(-) diff --git a/docs/connect/client.mdx b/docs/connect/client.mdx index d96dde1c..9fe0d1c8 100644 --- a/docs/connect/client.mdx +++ b/docs/connect/client.mdx @@ -59,9 +59,9 @@ sequenceDiagram ### 1. Define GraphQL Operations -Create a directory for your operations, e.g., `operations/`: +Create a directory for your operations, e.g., `services/`: -```graphql operations/GetEmployee.graphql +```graphql services/GetEmployee.graphql query GetEmployee($id: ID!) 
{ employee(id: $id) { id @@ -88,13 +88,13 @@ Run the `wgc grpc-service generate` command with the `--with-operations` flag. Y ```bash wgc grpc-service generate \ --input schema.graphql \ - --output ./operations \ - --with-operations ./operations \ + --output ./services \ + --with-operations ./services \ --package-name "myorg.employee.v1" \ MyService ``` -This command generates a `service.proto` file and a `service.proto.lock.json` file in the `./operations` directory. +This command generates a `service.proto` file and a `service.proto.lock.json` file in the `./services` directory. ### 3. Configure and Start Router @@ -113,7 +113,7 @@ storage_providers: file_system: - id: "fs-services" # Path to the directory containing your generated service.proto and operations - path: "./operations" + path: "./services" ``` Start the router. It is now ready to accept requests for the operations defined in `service.proto`. @@ -125,20 +125,27 @@ Use [buf](https://buf.build/) or `protoc` to generate the client code for your a Example `buf.gen.yaml` for Go: ```yaml buf.gen.yaml -version: v1 +version: v2 +managed: + enabled: true + override: + - file_option: go_package_prefix + value: github.com/wundergraph/cosmo/router-tests/testdata/connectrpc/client plugins: - - plugin: buf.build/connectrpc/go - out: gen/go - opt: paths=source_relative - - plugin: buf.build/protocolbuffers/go - out: gen/go - opt: paths=source_relative + - remote: buf.build/protocolbuffers/go + out: client + opt: + - paths=source_relative + - remote: buf.build/connectrpc/go + out: client + opt: + - paths=source_relative ``` Run the generation: ```bash -buf generate operations/service.proto +buf generate services/service.proto ``` ### 5. Use the Client @@ -181,6 +188,131 @@ func main() { } ``` +## Directory Structure & Organization + +The Cosmo Router uses a convention-based directory structure to automatically discover and load Connect RPC services. 
This approach co-locates proto files with their GraphQL operations for easy management. + +### Configuration + +Configure the router to point to your root services directory using a storage provider: + +```yaml config.yaml +connect_rpc: + enabled: true + server: + listen_addr: "0.0.0.0:8081" + services_provider_id: "fs-services" + +storage_providers: + file_system: + - id: "fs-services" + path: "./services" # Root services directory +``` + +The router will recursively walk the `services` directory and automatically discover all proto files and their associated GraphQL operations. + +### Recommended Structure + +For organization purposes, we recommend keeping all services in a root `services` directory, with subdirectories for packages and individual services. + +#### Single Service per Package + +When you have one service per package, you can organize files directly in the package directory: + +``` +services/ +└── employee.v1/ # Package directory + ├── employee.proto # Proto definition + ├── employee.proto.lock.json + ├── GetEmployee.graphql # Operation files + ├── UpdateEmployee.graphql + └── DeleteEmployee.graphql +``` + +Or with operations in a subdirectory: + +``` +services/ +└── employee.v1/ + ├── employee.proto + ├── employee.proto.lock.json + └── operations/ + ├── GetEmployee.graphql + ├── UpdateEmployee.graphql + └── DeleteEmployee.graphql +``` + +#### Multiple Services per Package + +When multiple services share the same proto package, organize them in service subdirectories: + +``` +services/ +└── company.v1/ # Package directory + ├── EmployeeService/ # First service + │ ├── employee.proto # package company.v1; service EmployeeService + │ ├── employee.proto.lock.json + │ └── operations/ + │ ├── GetEmployee.graphql + │ └── UpdateEmployee.graphql + └── DepartmentService/ # Second service, same package + ├── department.proto # package company.v1; service DepartmentService + ├── department.proto.lock.json + └── operations/ + ├── GetDepartment.graphql + └── 
ListDepartments.graphql +``` + +### Flexible Organization + +The router determines service identity by proto package declarations, not directory names. This gives you flexibility in organizing your files: + +``` +services/ +├── hr-services/ +│ ├── employee.proto # package company.v1; service EmployeeService +│ └── GetEmployee.graphql +└── admin-services/ + ├── department.proto # package company.v1; service DepartmentService + └── GetDepartment.graphql +``` + + + **Service Uniqueness**: The combination of proto package name and service name must be unique. For example, you can have multiple services with the same package name (e.g., `company.v1`) as long as they have different service names (`EmployeeService`, `DepartmentService`). The router uses the package + service combination to identify and route requests. + + +### Discovery Rules + +The router follows these rules when discovering services: + +1. **Recursive Discovery**: The router recursively walks the services directory to find all `.proto` files +2. **Proto Association**: Each `.proto` file discovered becomes a service endpoint +3. **Operation Association**: All `.graphql` files in the same directory (or subdirectories) are associated with the nearest parent `.proto` file +4. **Nested Proto Limitation**: If a `.proto` file is found in a directory, any `.proto` files in subdirectories are **not** discovered (the parent proto takes precedence) + +#### Example: Nested Proto Files + +``` +services/ +└── employee.v1/ + ├── employee.proto # ✅ Discovered as a service + ├── GetEmployee.graphql # ✅ Associated with employee.proto + └── nested/ + ├── other.proto # ❌ NOT discovered (parent proto found first) + └── UpdateEmployee.graphql # ✅ Still associated with employee.proto +``` + + + **Avoid Nested Proto Files**: Do not place `.proto` files in subdirectories of a directory that already contains a `.proto` file. The nested proto files will not be discovered by the router. + + +### Best Practices + +1. 
**Use Semantic Versioning**: Include version numbers in package names (e.g., `employee.v1`, `employee.v2`) to support API evolution +2. **Co-locate Operations**: Keep GraphQL operations close to their proto definitions for easier maintenance +3. **Consistent Naming**: Use clear, descriptive names for packages and services that reflect their purpose +4. **Lock File Management**: Always commit `.proto.lock.json` files to version control to maintain field number stability + ## Observability The Cosmo Router provides built-in [observability features](/router/metrics-and-monitoring) that work seamlessly with Connect Client. Because the Router translates RPC calls into GraphQL operations, you get detailed metrics and tracing for both layers. From 55319df9aecc805388aad717926405f98d0d22f5 Mon Sep 17 00:00:00 2001 From: Ahmet Soormally Date: Wed, 17 Dec 2025 12:34:31 +0000 Subject: [PATCH 5/8] connect client and overview enhancements --- docs/connect/client.mdx | 363 +++++++++++++++++++++++++++++++++++--- docs/connect/overview.mdx | 77 ++++++-- 2 files changed, 407 insertions(+), 33 deletions(-) diff --git a/docs/connect/client.mdx b/docs/connect/client.mdx index 9fe0d1c8..7ad3b218 100644 --- a/docs/connect/client.mdx +++ b/docs/connect/client.mdx @@ -1,22 +1,40 @@ --- -title: "Connect Client" +title: "Connect RPC" description: "Generate type-safe clients and OpenAPI specs from GraphQL operations" icon: "code" --- -# Connect Client +# Connect RPC - **Alpha Feature**: The Connect Client capability is currently in alpha. APIs and functionality may change as we gather feedback. + **Alpha Feature**: The Connect RPC capability is currently in alpha. APIs and functionality may change as we gather feedback. -Connect Client enables you to generate type-safe gRPC/Connect clients and OpenAPI specifications directly from your GraphQL operations. 
This allows you to consume your Federated (or monolithic) Graph using standard gRPC tooling in any language, or expose REST APIs via OpenAPI without writing manual adapters. +Connect RPC enables you to generate type-safe gRPC/Connect clients and OpenAPI specifications directly from your +GraphQL operations. This allows you to consume your Federated (or monolithic) Graph using standard gRPC tooling in any +language, or expose REST APIs via OpenAPI without writing manual adapters. -## Overview +## What is ConnectRPC? -While **Connect gRPC Services** (gRPC Subgraphs) focuses on implementing subgraphs using gRPC, **Connect Client** focuses on the consumer side. It allows you to define GraphQL operations (Queries, Mutations and soon Subscriptions) and compile them into a Protobuf service definition. +ConnectRPC enables Platform Engineering (platform teams) to distribute GraphQL-backed APIs as governed, versioned API +products using protobuf. -The Cosmo Router acts as a bridge. It serves your generated operations via the Connect protocol, executes them against your Federated Graph, and maps the GraphQL response back to typed Protobuf messages. +## How does ConnectRPC work? + +ConnectRPC turns GraphQL APIs into distributable, protocol-agnostic API products by introducing a clear, governed workflow: + +1. **Define GraphQL Operations (Trusted Documents)** + +API Providers define a fixed set of GraphQL queries and mutations that represent the supported API surface. + +2. **Generate Protobuf Service Definitions** + +Using the wgc CLI, convert GraphQL operations into protobuf service definitions. + +3. **Distribute as Versioned API Products** + +Distribute OpenAPI and proto files via Git, your developer portal, or schema registry of your choice to enable API +consumers to generate type-safe clients in their favourite language. 
## Workflow @@ -96,6 +114,49 @@ wgc grpc-service generate \ This command generates a `service.proto` file and a `service.proto.lock.json` file in the `./services` directory. +#### Command Options + +| Option | Description | Default | +|--------|-------------|---------| +| `-i, --input ` | **(Required)** GraphQL schema file (SDL) | - | +| `-w, --with-operations ` | **(Required for ConnectRPC)** Directory containing GraphQL operation files (`.graphql`, `.gql`, `.graphqls`, `.gqls`). Each operation becomes an RPC method | - | +| `-o, --output ` | Output directory for generated files | `.` | +| `-p, --package-name ` | Proto package name | `service.v1` | +| `-g, --go-package ` | Adds `option go_package` to proto file | - | +| `-l, --proto-lock ` | Path to proto lock file for field number stability | `/service.proto.lock.json` | +| `--custom-scalar-mapping ` | Custom scalar mappings as JSON string | - | +| `--custom-scalar-mapping-file ` | Path to JSON file with custom scalar mappings | - | +| `--max-depth ` | Maximum recursion depth for nested selections/fragments | `50` | + +#### How It Works + +ConnectRPC uses **operations-based generation** (enabled with the `-w` flag): + +- Generates proto from GraphQL operation files (queries, mutations, subscriptions) +- Each operation becomes an RPC method in the service +- Operation files are processed recursively from the specified directory +- Files are sorted alphabetically for deterministic output +- **Query operations automatically marked with `NO_SIDE_EFFECTS` idempotency level** +- Subscriptions marked as server streaming RPCs +- Generates `service.proto` and `service.proto.lock.json` + +#### Important Requirements + + +**Operation Naming**: Operation names must be in PascalCase (e.g., `GetEmployeeById`, `UpdateEmployeeMood`). This ensures exact matching between GraphQL operation names and RPC method names. 
+ +- ❌ Invalid: `get_employee`, `GETEMPLOYEE`, `getEmployee` +- ✅ Valid: `GetEmployee`, `CreatePost`, `OnMessageAdded` + + + +**One Operation Per File**: Each `.graphql` file must contain exactly one named operation. This is required for deterministic proto schema generation. Multiple operations in a single file will cause an error. + + + +**No Root-Level Aliases**: Field aliases at the root level are not supported as they break proto schema generation consistency (each GraphQL field must map to exactly one proto field name). Aliases on nested fields are allowed. + + ### 3. Configure and Start Router Enable the ConnectRPC server in your `config.yaml` and point the `services` provider to the directory containing your generated `service.proto`. @@ -122,36 +183,194 @@ Start the router. It is now ready to accept requests for the operations defined Use [buf](https://buf.build/) or `protoc` to generate the client code for your application. -Example `buf.gen.yaml` for Go: +#### Understanding buf.gen.yaml + +The `buf.gen.yaml` file configures code generation from proto files. 
Here's what each section does: + +- **`version: v2`** - Uses Buf v2 configuration format +- **`managed.enabled: true`** - Enables managed mode for consistent code generation +- **`managed.override`** - Customizes generation options (e.g., Go module path prefix) +- **`plugins`** - Defines code generators to run: + - `local` - Uses locally installed plugin + - `remote` - Uses Buf Schema Registry plugins + - `out` - Output directory for generated code + - `opt` - Plugin-specific options + +#### Language-Specific Examples -```yaml buf.gen.yaml + + +```bash +# Install dependencies +npm install @bufbuild/protoc-gen-es @connectrpc/protoc-gen-connect-es + +# Create buf.gen.yaml +cat > buf.gen.yaml << EOF +version: v2 +plugins: + - local: ./node_modules/.bin/protoc-gen-es + out: gen/ts + opt: target=ts + - local: ./node_modules/.bin/protoc-gen-connect-es + out: gen/ts + opt: target=ts +EOF + +# Generate +buf generate services/service.proto +``` + + + +```bash +# Install dependencies +go install google.golang.org/protobuf/cmd/protoc-gen-go@latest +go install connectrpc.com/connect/cmd/protoc-gen-connect-go@latest + +# Create buf.gen.yaml +cat > buf.gen.yaml << EOF version: v2 managed: enabled: true override: - file_option: go_package_prefix - value: github.com/wundergraph/cosmo/router-tests/testdata/connectrpc/client + value: github.com/yourorg/yourproject/gen/go plugins: - - remote: buf.build/protocolbuffers/go - out: client - opt: - - paths=source_relative - - remote: buf.build/connectrpc/go - out: client - opt: - - paths=source_relative + - local: protoc-gen-go + out: gen/go + opt: paths=source_relative + - local: protoc-gen-connect-go + out: gen/go + opt: paths=source_relative +EOF + +# Generate +buf generate services/service.proto +``` + + + +```bash +# Add to Package.swift: +# .package(url: "https://github.com/apple/swift-protobuf.git", from: "1.25.0") +# .package(url: "https://github.com/connectrpc/connect-swift.git", from: "0.12.0") + +# Create buf.gen.yaml +cat > 
buf.gen.yaml << EOF
+version: v2
+plugins:
+  - remote: buf.build/apple/swift
+    out: gen/swift
+  - remote: buf.build/connectrpc/swift
+    out: gen/swift
+EOF
+
+# Generate
+buf generate services/service.proto
+```
+
+
+
+```bash
+# Create buf.gen.yaml
+cat > buf.gen.yaml << EOF
+version: v2
+plugins:
+  - remote: buf.build/grpc/kotlin
+    out: gen/kotlin
+  - remote: buf.build/connectrpc/kotlin
+    out: gen/kotlin
+EOF
-Run the generation:
+
+# Generate
+buf generate services/service.proto
+```
+
+
 ```bash
+# Install dependencies
+pip install grpcio-tools
+
+# Using grpcio-tools directly
+python -m grpc_tools.protoc -I . \
+    --python_out=gen/python \
+    --grpc_python_out=gen/python \
+    services/service.proto
+
+# Or using buf
+cat > buf.gen.yaml << EOF
+version: v2
+plugins:
+  - remote: buf.build/protocolbuffers/python
+    out: gen/python
+  - remote: buf.build/grpc/python
+    out: gen/python
+EOF
+
 buf generate services/service.proto
 ```
+
+
+#### Generate OpenAPI Specification
+
+Generate an OpenAPI 3.0 specification for REST-like documentation and tooling:
+
+```bash
+# Install the OpenAPI plugin
+go install github.com/sudorandom/protoc-gen-connect-openapi@latest
+
+# Create buf.gen.yaml for OpenAPI
+cat > buf.gen.yaml << EOF
+version: v2
+plugins:
+  - local: protoc-gen-connect-openapi
+    out: gen/openapi
+    opt:
+      - format=yaml
+EOF
+
+# Generate
+buf generate services/service.proto
+```
+
+**Use the generated OpenAPI specification with:**
+
+- **Swagger UI** - Interactive API documentation
+- **Postman** - API testing and collaboration
+- **Redoc** - Beautiful API documentation
+- **Speakeasy** - Advanced SDK generation with retry logic, pagination, etc.
+- **Stainless** - Production-grade SDK generation
 
 ### 5. Use the Client
 
 You can now use the generated client to call your GraphQL API via the Router. The Router acts as the server implementation for your generated service.
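For query operations you can also skip the generated SDK entirely and call the Router over plain HTTP using the Connect protocol's unary GET shape (described under Protocol Support below). A minimal TypeScript sketch of constructing such a URL; the base URL and the `employee.v1.EmployeeService/GetEmployee` procedure path are assumptions consistent with the examples in this guide:

```typescript
// Build a Connect-protocol unary GET URL for a query operation.
// The "encoding" and "message" query parameters carry the request,
// which lets CDNs cache the response like any other GET.
function buildConnectGetUrl(
  baseUrl: string,
  procedure: string, // e.g. "employee.v1.EmployeeService/GetEmployee"
  message: Record<string, unknown>,
): string {
  const params = new URLSearchParams({
    encoding: "json",
    message: JSON.stringify(message),
  });
  return `${baseUrl}/${procedure}?${params.toString()}`;
}

const url = buildConnectGetUrl(
  "http://localhost:8081",
  "employee.v1.EmployeeService/GetEmployee",
  { id: "1" },
);
console.log(url);
// http://localhost:8081/employee.v1.EmployeeService/GetEmployee?encoding=json&message=%7B%22id%22%3A%221%22%7D
```

This only works for query operations, because only queries are generated with the `NO_SIDE_EFFECTS` idempotency level; mutations must use POST.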
+ + +```typescript +import { createPromiseClient } from "@connectrpc/connect"; +import { createConnectTransport } from "@connectrpc/connect-web"; +import { EmployeeService } from "./gen/employee/v1/service_connect"; + +// Create transport +const transport = createConnectTransport({ + baseUrl: "http://localhost:8081", +}); + +// Create client +const client = createPromiseClient(EmployeeService, transport); + +// Make request +const response = await client.getEmployee({ + id: "1", +}); + +console.log(response.employee?.details?.forename); // "John" +``` + + + ```go package main @@ -181,12 +400,116 @@ func main() { log.Fatal(err) } - log.Printf("Employee: %s %s", + log.Printf("Employee: %s %s", resp.Msg.Employee.Details.Forename, resp.Msg.Employee.Details.Surname, ) } ``` + + + +```swift +import Connect + +let client = ProtocolClient( + httpClient: URLSessionHTTPClient(), + config: ProtocolClientConfig( + host: "http://localhost:8081", + networkProtocol: .connect + ) +) + +let request = Employee_V1_GetEmployeeRequest.with { + $0.id = "1" +} + +let response = try await client.getEmployee(request: request) +print(response.employee.details.forename) // "John" +``` + + + +```kotlin +import com.connectrpc.ProtocolClient +import employee.v1.EmployeeServiceClient + +val client = ProtocolClient( + httpClient = OkHttpClient(), + ProtocolClientConfig( + host = "http://localhost:8081", + serializationStrategy = ProtobufStrategy() + ) +) + +val employeeClient = EmployeeServiceClient(client) + +val request = GetEmployeeRequest.newBuilder() + .setId("1") + .build() + +val response = employeeClient.getEmployee(request) +println(response.employee.details.forename) // "John" +``` + + + +## Protocol Support + +ConnectRPC supports multiple protocols and formats with automatic transcoding, allowing clients to use their preferred +protocol while the Router handles translation to GraphQL operations. + +### Supported Protocols + +#### 1. 
Connect Protocol with JSON + +The Connect protocol is modern, efficient, and works over HTTP/1.1 or HTTP/2. The `Connect-Protocol-Version` header is optional. + +**POST Request:** +```bash +curl -X POST http://localhost:8081/employee.v1.EmployeeService/GetEmployee \ + -H "Content-Type: application/json" \ + -H "Connect-Protocol-Version: 1" \ + -d '{"id": "1"}' +``` + +**GET Request (for queries - enables CDN caching):** +```bash +curl --get \ + --data-urlencode 'encoding=json' \ + --data-urlencode 'message={"id":"1"}' \ + http://localhost:8081/employee.v1.EmployeeService/GetEmployee +``` + +#### 2. gRPC Protocol + +Binary protobuf over HTTP/2 for maximum efficiency. + +```bash +grpcurl -plaintext \ + -proto ./services/service.proto \ + -d '{"id": "1"}' \ + localhost:8081 \ + employee.v1.EmployeeService/GetEmployee +``` + +#### 3. gRPC-Web Protocol + +Browser-compatible gRPC with binary protobuf. + +```bash +buf curl --protocol grpcweb \ + --schema ./services/service.proto \ + --data '{"id": "1"}' \ + http://localhost:8081/employee.v1.EmployeeService/GetEmployee +``` + +### Key Features + +- **JSON Support**: All protocols support JSON encoding—no binary proto required +- **GET for Queries**: Query operations can use HTTP GET with `encoding` and `message` query parameters, enabling CDN caching for better performance +- **POST for Mutations**: Mutation operations must use HTTP POST +- **Flexible Headers**: The `Connect-Protocol-Version` header is optional for Connect protocol requests ## Directory Structure & Organization diff --git a/docs/connect/overview.mdx b/docs/connect/overview.mdx index 292186cc..7eec512c 100644 --- a/docs/connect/overview.mdx +++ b/docs/connect/overview.mdx @@ -6,27 +6,78 @@ icon: "circle-info" ## Cosmo Connect -Cosmo Connect allows you to use GraphQL Federation without requiring backend teams to run GraphQL servers or frameworks. 
+Cosmo Connect lets backend teams participate in a federated GraphQL architecture without learning, writing, or +operating GraphQL. Backend services integrate into the graph by implementing RPC stubs, while frontend teams and API +consumers interact with the graph through generated, type-safe, versioned SDKs or OpenAPI—without writing GraphQL +queries. -One of the biggest downsides of Apollo Federation is that backend developers must adopt GraphQL and migrate their existing REST or gRPC services to a Federation-compatible framework. Cosmo Connect solves this problem by compiling GraphQL into gRPC and moving the complexity of the query language into the Router (API Gateway). +### The Challenge -How does this work? You define an Apollo-compatible Subgraph Schema, compile it into a protobuf definition, and implement it in your favorite gRPC stack, such as Go, Java, C#, or many others. No specific framework or GraphQL knowledge is required. It is really just gRPC! +One of the biggest barriers to GraphQL adoption is the learning curve. While GraphQL offers tremendous benefits for API +composition and flexibility, requiring every developer to learn GraphQL concepts, schema design, and query optimization +can slow down adoption and create friction across an organization. + +At the same time, fully realizing the benefits of GraphQL often implies deep changes to existing systems. Rewriting +or restructuring an entire backend stack to adopt GraphQL-native patterns is rarely practical for mature architectures +built around existing services, protocols and team boundaries. + +### The Solution + +Cosmo Connect solves this problem by providing two complementary capabilities: + +1. **Connect gRPC Subgraphs** - Backend teams can implement federated subgraphs using standard gRPC instead of GraphQL +resolvers +2. 
**Connect RPC** - API consumers can use type-safe RPC clients instead of writing GraphQL queries + +The Router handles all GraphQL complexity, acting as a protocol mediation layer between familiar RPC interfaces and +your GraphQL supergraph. ## Key Capabilities ### Connect gRPC Services (gRPC Subgraphs) Implement Federated Subgraphs using gRPC instead of GraphQL resolvers. -- **No GraphQL servers required**: Backend teams implement standard gRPC services. -- **Language flexibility**: Use any language with gRPC support (Go, Java, Rust, C#, etc.). -- **Reduced complexity**: The Router handles query planning; your service handles simple RPCs. -### Connect Client (Typed Clients) +- **No GraphQL servers required**: Backend teams implement standard gRPC services +- **Language flexibility**: Use any language with gRPC support (Go, Java, Rust, C#, etc.) +- **Reduced complexity**: The Router handles query planning; your service handles simple RPCs +- **Familiar patterns**: Use protobuf definitions and standard gRPC tooling + +**How it works**: Define an Apollo-compatible Subgraph Schema, compile it into a protobuf definition using the `wgc` +CLI, and implement it in your favorite gRPC stack. No specific framework or GraphQL knowledge is required—it's really +just gRPC! + +### Connect RPC (Typed Clients) + +Generate OpenAPI Specifications and client SDKs from your GraphQL operations without writing GraphQL queries. 
+ +- **Type Safety**: Generate SDKs for iOS, Android, Web, and Backend services using standard protobuf tooling +- **OpenAPI Support**: Generate OpenAPI 3.0 specifications for REST-like documentation and tooling +- **Multi-Protocol**: Use gRPC, Connect, gRPC-Web, or plain HTTP/JSON +- **No GraphQL Required**: API consumers never need to learn GraphQL query syntax +- **Governed API Products**: Distribute versioned proto files as API products via Git, developer portals, or schema +registries + +**How it works**: Define GraphQL operations (Trusted Documents), generate protobuf service definitions using the `wgc` +CLI, and distribute proto files to consumers who generate type-safe clients in their preferred language. + +#### Architecture + +The ConnectRPC server in the Cosmo Router supports multiple client protocols, allowing you to consume your GraphQL API +using various RPC and HTTP protocols: + +```mermaid +flowchart LR + A1[gRPC Client
binary proto] --> B[ConnectRPC Server] + A2[Connect Client
JSON or binary] --> B + A3[gRPC-Web Client
binary proto] --> B + A4[HTTP Client
JSON GET/POST] --> B + + B <-->|GraphQL| C[GraphQL Router
Cosmo Supergraph] +``` -Generate type-safe clients from your GraphQL operations. -- **Type Safety**: Generate SDKs for iOS, Android, Web, and Backend services. -- **OpenAPI**: Generate OpenAPI specs from your GraphQL operations. -- **Performance**: Use the efficient Connect/gRPC protocol to talk to your GraphQL API. +This architecture enables clients to use their preferred protocol while the Router handles translation to GraphQL +operations. ## Deployment Models @@ -61,7 +112,7 @@ graph LR Cosmo Connect supports three main integration patterns: -1. **[Connect Client](/connect/client)** — Generated clients that speak the Connect protocol to the Router. +1. **[Connect RPC](/connect/client)** — Generated clients that speak the Connect protocol to the Router. 2. **[Router Plugins](/connect/plugins)** — gRPC services running as local processes managed by the router. 3. **[gRPC Services](/connect/grpc-services)** — Independent gRPC services implementing subgraphs. @@ -71,7 +122,7 @@ Both plugin and service approaches remove the need to build GraphQL servers whil The following documentation explains how to build and deploy services and plugins: -- **[Connect Client](/connect/client)** — Generate type-safe clients and OpenAPI specs from GraphQL operations. +- **[Connect RPC](/connect/client)** — Generate type-safe clients and OpenAPI specs from GraphQL operations. 
- **[Router Plugins](/router/gRPC/plugins)** — Documentation for developing, configuring, and deploying plugins that run inside the router - **[gRPC Services](/router/gRPC/grpc-services)** — Documentation for the complete lifecycle of building, deploying, and managing independent gRPC services From 5f064c267aef74e92c81d8c63b63daa9e3897c45 Mon Sep 17 00:00:00 2001 From: Ahmet Soormally Date: Wed, 17 Dec 2025 12:46:00 +0000 Subject: [PATCH 6/8] simplifying the overview page --- docs/connect/overview.mdx | 116 +++++++++----------------------------- 1 file changed, 27 insertions(+), 89 deletions(-) diff --git a/docs/connect/overview.mdx b/docs/connect/overview.mdx index 7eec512c..001e6621 100644 --- a/docs/connect/overview.mdx +++ b/docs/connect/overview.mdx @@ -6,80 +6,38 @@ icon: "circle-info" ## Cosmo Connect -Cosmo Connect lets backend teams participate in a federated GraphQL architecture without learning, writing, or -operating GraphQL. Backend services integrate into the graph by implementing RPC stubs, while frontend teams and API -consumers interact with the graph through generated, type-safe, versioned SDKs or OpenAPI—without writing GraphQL -queries. +Cosmo Connect lets you build and consume federated GraphQL without requiring everyone to learn GraphQL. Backend teams can integrate services using familiar RPC patterns, while frontend teams can use generated, type-safe SDKs instead of writing GraphQL queries. ### The Challenge -One of the biggest barriers to GraphQL adoption is the learning curve. While GraphQL offers tremendous benefits for API -composition and flexibility, requiring every developer to learn GraphQL concepts, schema design, and query optimization -can slow down adoption and create friction across an organization. +GraphQL adoption often faces two key barriers: -At the same time, fully realizing the benefits of GraphQL often implies deep changes to existing systems. 
Rewriting -or restructuring an entire backend stack to adopt GraphQL-native patterns is rarely practical for mature architectures -built around existing services, protocols and team boundaries. +1. **Learning curve** - Requiring every developer to learn GraphQL concepts, schema design, and query optimization slows adoption +2. **Migration complexity** - Rewriting existing backend services to adopt GraphQL-native patterns is rarely practical for mature architectures ### The Solution -Cosmo Connect solves this problem by providing two complementary capabilities: +Cosmo Connect removes these barriers by letting teams work with familiar tools: -1. **Connect gRPC Subgraphs** - Backend teams can implement federated subgraphs using standard gRPC instead of GraphQL -resolvers -2. **Connect RPC** - API consumers can use type-safe RPC clients instead of writing GraphQL queries +- **Backend teams** can implement federated subgraphs using standard gRPC instead of GraphQL resolvers +- **Frontend teams** can use type-safe RPC clients instead of writing GraphQL queries +- **The Router** handles all GraphQL complexity as a protocol mediation layer -The Router handles all GraphQL complexity, acting as a protocol mediation layer between familiar RPC interfaces and -your GraphQL supergraph. +## What You Can Build -## Key Capabilities - -### Connect gRPC Services (gRPC Subgraphs) - -Implement Federated Subgraphs using gRPC instead of GraphQL resolvers. - -- **No GraphQL servers required**: Backend teams implement standard gRPC services -- **Language flexibility**: Use any language with gRPC support (Go, Java, Rust, C#, etc.) -- **Reduced complexity**: The Router handles query planning; your service handles simple RPCs -- **Familiar patterns**: Use protobuf definitions and standard gRPC tooling - -**How it works**: Define an Apollo-compatible Subgraph Schema, compile it into a protobuf definition using the `wgc` -CLI, and implement it in your favorite gRPC stack. 
No specific framework or GraphQL knowledge is required—it's really -just gRPC! - -### Connect RPC (Typed Clients) - -Generate OpenAPI Specifications and client SDKs from your GraphQL operations without writing GraphQL queries. - -- **Type Safety**: Generate SDKs for iOS, Android, Web, and Backend services using standard protobuf tooling -- **OpenAPI Support**: Generate OpenAPI 3.0 specifications for REST-like documentation and tooling -- **Multi-Protocol**: Use gRPC, Connect, gRPC-Web, or plain HTTP/JSON -- **No GraphQL Required**: API consumers never need to learn GraphQL query syntax -- **Governed API Products**: Distribute versioned proto files as API products via Git, developer portals, or schema -registries - -**How it works**: Define GraphQL operations (Trusted Documents), generate protobuf service definitions using the `wgc` -CLI, and distribute proto files to consumers who generate type-safe clients in their preferred language. - -#### Architecture - -The ConnectRPC server in the Cosmo Router supports multiple client protocols, allowing you to consume your GraphQL API -using various RPC and HTTP protocols: - -```mermaid -flowchart LR - A1[gRPC Client
binary proto] --> B[ConnectRPC Server] - A2[Connect Client
JSON or binary] --> B - A3[gRPC-Web Client
binary proto] --> B - A4[HTTP Client
JSON GET/POST] --> B - - B <-->|GraphQL| C[GraphQL Router
Cosmo Supergraph] -``` - -This architecture enables clients to use their preferred protocol while the Router handles translation to GraphQL -operations. + + + Generate type-safe SDKs from GraphQL operations. + + + Build federated subgraphs using standard gRPC instead of GraphQL resolvers. + + + Extend the router with gRPC services that run as managed local processes. + + -## Deployment Models +## Architecture ```mermaid graph LR @@ -88,17 +46,14 @@ graph LR subgraph routerBox["Cosmo Router"] routerCore - plugin["Router Plugin
(Connect gRPC Service)"] + plugin["Router Plugin"] end routerCore --> subA["GraphQL Subgraph"] - routerCore --> grpcSvc["gRPC Service
(Connect gRPC Service)"] - - grpcSvc --> restA["REST / HTTP APIs"] - grpcSvc --> soapA["Databases"] + routerCore --> grpcSvc["gRPC Service"] - plugin --> restA - plugin --> soapA + grpcSvc --> backend["Backend Systems"] + plugin --> backend %% Styling classDef grpcFill fill:#ea4899,stroke:#ea4899,stroke-width:1.5px,color:#ffffff; @@ -110,28 +65,11 @@ graph LR class connectClient clientFill; ``` -Cosmo Connect supports three main integration patterns: - -1. **[Connect RPC](/connect/client)** — Generated clients that speak the Connect protocol to the Router. -2. **[Router Plugins](/connect/plugins)** — gRPC services running as local processes managed by the router. -3. **[gRPC Services](/connect/grpc-services)** — Independent gRPC services implementing subgraphs. - -Both plugin and service approaches remove the need to build GraphQL servers while maintaining the benefits of federation. Connect Client removes the need to manually write GraphQL queries in your application code. - -## Implementation Docs - -The following documentation explains how to build and deploy services and plugins: - -- **[Connect RPC](/connect/client)** — Generate type-safe clients and OpenAPI specs from GraphQL operations. -- **[Router Plugins](/router/gRPC/plugins)** — Documentation for developing, configuring, and deploying plugins that run inside the router -- **[gRPC Services](/router/gRPC/grpc-services)** — Documentation for the complete lifecycle of building, deploying, and managing independent gRPC services - -These docs assume you're familiar with the concepts above and are ready to implement your first service or plugin. +The diagram shows how Cosmo Connect integrates with your existing architecture. Traditional GraphQL clients and Connect-generated clients both communicate with the Router, which federates queries across GraphQL subgraphs, gRPC services, and router plugins. 
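To make the mediation concrete, the sketch below (TypeScript) shows the same logical call expressed as a raw GraphQL request and as the Connect-style RPC request a generated client would send. The ports, service path, and `GetEmployee` operation are illustrative assumptions, not values fixed by Cosmo Connect:

```typescript
// Both requests resolve to the same named GraphQL operation inside the Router.

// A traditional GraphQL client posts a query document plus variables.
const graphqlRequest = {
  url: "http://localhost:3002/graphql", // assumed Router GraphQL endpoint
  body: {
    query: "query GetEmployee($id: ID!) { employee(id: $id) { id } }",
    variables: { id: "1" },
  },
};

// A Connect-generated client posts only the typed request message to an
// RPC path derived from the proto service and method names.
const connectRequest = {
  url: "http://localhost:8081/employee.v1.EmployeeService/GetEmployee", // assumed RPC endpoint
  body: { id: "1" },
};

// The RPC request carries no query language at all: the operation text lives
// in the Router's configuration, keyed by the RPC method name.
console.log(Object.keys(graphqlRequest.body));
console.log(Object.keys(connectRequest.body));
```

This is why Connect clients need no GraphQL knowledge: the query document is an implementation detail owned by whoever configured the Router.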
## Getting Started -The following tutorials walk you through step-by-step examples of building your first integration. -Unlike the implementation docs, which cover the full technical reference, these focus on quick setup and hands-on learning: +Choose your path based on what you want to build: From 00ddc7b43bf97077626289518992e7c35dad5ce9 Mon Sep 17 00:00:00 2001 From: Ahmet Soormally Date: Wed, 17 Dec 2025 12:55:15 +0000 Subject: [PATCH 7/8] layered diagram --- docs/connect/client.mdx | 137 ++++++++++++++++++++++++++++++++-------- 1 file changed, 111 insertions(+), 26 deletions(-) diff --git a/docs/connect/client.mdx b/docs/connect/client.mdx index 7ad3b218..c6fb1bda 100644 --- a/docs/connect/client.mdx +++ b/docs/connect/client.mdx @@ -36,42 +36,127 @@ Using the wgc CLI, convert GraphQL operations into protobuf service definitions. Distribute OpenAPI and proto files via Git, your developer portal, or schema registry of your choice to enable API consumers to generate type-safe clients in their favourite language. -## Workflow +## Architecture & Workflow -The typical workflow involves defining your operations, configuring the Router to serve them, and then distributing the Protobuf or OpenAPI definition to consumers who generate their client SDKs. +### System Context + +ConnectRPC enables API providers to expose GraphQL operations as type-safe RPC services that consumers can integrate using standard gRPC/Connect tooling. + +```mermaid +graph TB + subgraph "API Provider Organization" + Provider[API Provider Team] + CLI[Cosmo CLI] + Router[Cosmo Router] + Graph[Federated GraphQL Graph] + end + + subgraph "API Consumer Organization" + Consumer[API Consumer Team] + Client[Client Application] + end + + Provider -->|1. Define Operations| CLI + CLI -->|2. Generate Proto| Provider + Provider -->|3. Configure| Router + Router -->|Executes| Graph + Provider -.->|4. Distribute Proto/OpenAPI| Consumer + Consumer -->|5. Generate SDK| Client + Client -->|6. 
RPC Requests| Router + Router -.->|7. Observability| Provider + + style Provider fill:#e1f5ff + style Consumer fill:#fff4e1 + style Router fill:#f0e1ff + style Graph fill:#e1ffe1 +``` + +### Container View + +This diagram shows how the Cosmo Router acts as a protocol bridge between RPC clients and your GraphQL federation. + +```mermaid +graph TB + subgraph "Development Time" + Ops[GraphQL Operations
.graphql files] + WGC[wgc CLI] + Proto[service.proto
+ lock file] + + Ops -->|Input| WGC + WGC -->|Generates| Proto + end + + subgraph "Distribution" + Proto -.->|Shared via Git/Portal| SDK[SDK Generator
buf/protoc] + Proto -.->|Alternative| OpenAPI[OpenAPI Spec
Swagger/Postman] + end + + subgraph "Runtime" + SDK -->|Generates| ClientCode[Type-Safe Client Code] + ClientCode -->|RPC Calls
Connect/gRPC/gRPC-Web| RouterRPC[Cosmo Router
:8081 RPC Server] + RouterRPC -->|Translates to| RouterGQL[GraphQL Engine] + RouterGQL -->|Executes| Subgraphs[Federated Subgraphs] + Subgraphs -->|Response| RouterGQL + RouterGQL -->|Protobuf Response| RouterRPC + RouterRPC -->|Returns| ClientCode + end + + subgraph "Observability" + RouterRPC -.->|OTEL Metrics & Traces| Monitoring[Monitoring Stack
Grafana/Datadog] + end + + style Ops fill:#e1f5ff + style Proto fill:#f0e1ff + style ClientCode fill:#fff4e1 + style RouterRPC fill:#ffe1e1 + style Monitoring fill:#e1ffe1 +``` + +### Component Interaction + +This detailed view shows the complete request flow from client to federated graph and back. ```mermaid sequenceDiagram autonumber - participant Provider as API Provider - participant CLI as Cosmo CLI (wgc) - participant Consumer as API Consumer - participant Client as Client SDK - participant Router as Cosmo Router + participant Client as Client App
(Generated SDK) + participant RPC as Router RPC Server
:8081 + participant Engine as GraphQL Engine participant Graph as Federated Graph - - Note over Provider, Router: Setup & Configuration - Provider->>Provider: Define GraphQL Operations (.graphql) - Provider->>CLI: Generate Protobuf (wgc grpc-service generate) - CLI->>Provider: service.proto + lock file - Provider->>Router: Configure & Start Router with operations + participant OTEL as Observability
(OTEL/Prometheus) + + Note over Client,Graph: Request Flow + Client->>RPC: RPC Request
(Connect/gRPC/gRPC-Web) + RPC->>RPC: Validate & Decode Protobuf + RPC->>Engine: Map to GraphQL Operation + Engine->>Graph: Execute Query/Mutation + Graph-->>Engine: GraphQL Response + Engine->>RPC: Transform to Protobuf + RPC-->>Client: RPC Response - Note over Provider, Consumer: Distribution - Provider->>Consumer: Distribute service.proto / OpenAPI spec + Note over RPC,OTEL: Observability + RPC->>OTEL: Emit Metrics & Traces
(GraphQL + RPC layers) +``` - Note over Consumer, Client: Client Development - Consumer->>Consumer: Generate Client SDK (buf/protoc) - Consumer->>Client: Integrate SDK into App +### Key Phases - Note over Client, Graph: Runtime - Client->>Router: Send RPC Request (Connect/gRPC) - Router->>Graph: Execute GraphQL Operation - Graph-->>Router: GraphQL Response - Router-->>Client: Protobuf Response +The ConnectRPC workflow consists of three main phases: - Note over Provider, Router: Observe - Router->>Provider: OTEL Metrics / Traces (GraphQL & RPC) -``` +1. **Setup Phase** (API Provider) + - Define GraphQL operations as `.graphql` files + - Generate Protobuf definitions using `wgc grpc-service generate` + - Configure Router with generated proto and operations + +2. **Distribution Phase** (Provider → Consumer) + - Share `service.proto` via Git, developer portal, or schema registry + - Optionally generate and share OpenAPI specifications + - Consumers generate type-safe client SDKs in their language + +3. **Runtime Phase** (Consumer) + - Client applications make RPC calls using generated SDKs + - Router translates RPC requests to GraphQL operations + - Router returns responses in Protobuf format + - Full observability across both RPC and GraphQL layers ## Usage Example From a51d11b70f891e267b2c5665775225a985737a4a Mon Sep 17 00:00:00 2001 From: Ahmet Soormally Date: Wed, 17 Dec 2025 20:42:56 +0000 Subject: [PATCH 8/8] c4 system context --- docs/connect/client.mdx | 448 +++++++++++++++++++++++----------------- 1 file changed, 253 insertions(+), 195 deletions(-) diff --git a/docs/connect/client.mdx b/docs/connect/client.mdx index c6fb1bda..8bb2e1bd 100644 --- a/docs/connect/client.mdx +++ b/docs/connect/client.mdx @@ -4,14 +4,13 @@ description: "Generate type-safe clients and OpenAPI specs from GraphQL operatio icon: "code" --- -# Connect RPC - - **Alpha Feature**: The Connect RPC capability is currently in alpha. APIs and functionality may change as we gather feedback. 
+ **Alpha Feature**: The Connect RPC capability is currently in alpha. APIs and functionality may change as we gather + feedback. Connect RPC enables you to generate type-safe gRPC/Connect clients and OpenAPI specifications directly from your -GraphQL operations. This allows you to consume your Federated (or monolithic) Graph using standard gRPC tooling in any +GraphQL operations. This allows you to consume your GraphQL API using standard gRPC tooling in any language, or expose REST APIs via OpenAPI without writing manual adapters. ## What is ConnectRPC? @@ -40,129 +39,108 @@ consumers to generate type-safe clients in their favourite language. ### System Context -ConnectRPC enables API providers to expose GraphQL operations as type-safe RPC services that consumers can integrate using standard gRPC/Connect tooling. +ConnectRPC enables Platform Engineering teams to expose GraphQL operations as governed, type-safe RPC services that API Providers can distribute and API Consumers can integrate using standard gRPC/Connect tooling. + +The following diagram shows the high-level architecture and the three key user types involved in the ConnectRPC workflow: ```mermaid graph TB - subgraph "API Provider Organization" - Provider[API Provider Team] - CLI[Cosmo CLI] - Router[Cosmo Router] - Graph[Federated GraphQL Graph] + subgraph "Internal Organization" + subgraph "Platform Engineering" + PE[Platform Engineering] + Router[Cosmo Router] + Graph[GraphQL API] + Portal[Developer Portal/Registry] + end + + subgraph "API Providers" + Provider[API Providers] + CLI[Cosmo CLI] + Proto[Proto/OpenAPI Specs] + SDK[Pre-generated SDKs] + end end - subgraph "API Consumer Organization" - Consumer[API Consumer Team] + subgraph "API Consumers" + Consumer[API Consumers] Client[Client Application] end - Provider -->|1. Define Operations| CLI - CLI -->|2. Generate Proto| Provider - Provider -->|3. 
Configure| Router + PE -->|Manages| Router + PE -->|Manages| Portal Router -->|Executes| Graph - Provider -.->|4. Distribute Proto/OpenAPI| Consumer - Consumer -->|5. Generate SDK| Client - Client -->|6. RPC Requests| Router - Router -.->|7. Observability| Provider - + Provider -->|Define Operations| CLI + CLI -->|Generate| Proto + Provider -->|Configure Operations| Router + Proto -.->|Publish| Portal + Proto -.->|Optionally Generate| SDK + SDK -.->|Publish| Portal + Portal -.->|Discover Proto or Download SDK| Consumer + Consumer -->|Generate SDK or Use Pre-built| Client + Client -->|RPC Requests| Router + style PE fill:#ffe1f5 style Provider fill:#e1f5ff style Consumer fill:#fff4e1 style Router fill:#f0e1ff style Graph fill:#e1ffe1 + style Portal fill:#f0e1ff ``` -### Container View +#### Three User Types -This diagram shows how the Cosmo Router acts as a protocol bridge between RPC clients and your GraphQL federation. +1. **Platform Engineering** (Internal) + - Manages the Cosmo Router infrastructure that executes GraphQL operations + - Maintains the GraphQL API + - Operates the Developer Portal or Schema Registry where APIs are published -```mermaid -graph TB - subgraph "Development Time" - Ops[GraphQL Operations
.graphql files] - WGC[wgc CLI] - Proto[service.proto
+ lock file] - - Ops -->|Input| WGC - WGC -->|Generates| Proto - end - - subgraph "Distribution" - Proto -.->|Shared via Git/Portal| SDK[SDK Generator
buf/protoc] - Proto -.->|Alternative| OpenAPI[OpenAPI Spec
Swagger/Postman]
-    end
-
-    subgraph "Runtime"
-        SDK -->|Generates| ClientCode[Type-Safe Client Code]
-        ClientCode -->|RPC Calls<br/>Connect/gRPC/gRPC-Web| RouterRPC[Cosmo Router<br/>:8081 RPC Server]
-        RouterRPC -->|Translates to| RouterGQL[GraphQL Engine]
-        RouterGQL -->|Executes| Subgraphs[Federated Subgraphs]
-        Subgraphs -->|Response| RouterGQL
-        RouterGQL -->|Protobuf Response| RouterRPC
-        RouterRPC -->|Returns| ClientCode
-    end
-
-    subgraph "Observability"
-        RouterRPC -.->|OTEL Metrics & Traces| Monitoring[Monitoring Stack<br/>Grafana/Datadog]
-    end
-
-    style Ops fill:#e1f5ff
-    style Proto fill:#f0e1ff
-    style ClientCode fill:#fff4e1
-    style RouterRPC fill:#ffe1e1
-    style Monitoring fill:#e1ffe1
-```
+2. **API Providers** (Internal)
+   - Define GraphQL operations as `.graphql` files representing the API surface
+   - Use the Cosmo CLI `wgc` to generate Protobuf and OpenAPI specifications
+   - Configure the Router with their operations
+   - Optionally generate pre-built SDKs for consumers
+   - Publish proto/OpenAPI specs and SDKs to a Portal/Registry
+
+3. **API Consumers** (Internal or External)
+   - Discover available APIs through the Portal/Registry
+   - Choose to either generate their own SDKs from proto/OpenAPI specs or download pre-built SDKs
+   - Integrate the SDK into their client applications
+   - Make RPC requests (Connect/gRPC/gRPC-Web) to the Router
+
+#### Key Workflow
 
-### Component Interaction
+- API Providers define operations and generate specifications using the Cosmo CLI `wgc`
+- Specifications (and optionally SDKs) are published to a Portal/Registry managed by Platform Engineering
+- API Consumers discover and consume these APIs with full type safety
+- All RPC requests flow through the Cosmo Router, which translates them to GraphQL operations
 
-This detailed view shows the complete request flow from client to federated graph and back.
+### Request Flow
+
+This diagram shows the complete request flow from client to GraphQL API and back, illustrating how the Cosmo Router acts as a protocol bridge.
 
 ```mermaid
 sequenceDiagram
     autonumber
     participant Client as Client App<br/>(Generated SDK)
-    participant RPC as Router RPC Server<br/>:8081
-    participant Engine as GraphQL Engine
-    participant Graph as Federated Graph
-    participant OTEL as Observability<br/>(OTEL/Prometheus)
+    participant RPC as ConnectRPC Server
+    participant API as GraphQL API
 
-    Note over Client,Graph: Request Flow
     Client->>RPC: RPC Request<br/>(Connect/gRPC/gRPC-Web)
-    RPC->>RPC: Validate & Decode Protobuf
-    RPC->>Engine: Map to GraphQL Operation
-    Engine->>Graph: Execute Query/Mutation
-    Graph-->>Engine: GraphQL Response
-    Engine->>RPC: Transform to Protobuf
+    RPC->>RPC: Validate & Decode Request
+    RPC->>API: Execute GraphQL Operation
+    API-->>RPC: GraphQL Response
+    RPC->>RPC: Encode Response
     RPC-->>Client: RPC Response
-
-    Note over RPC,OTEL: Observability
-    RPC->>OTEL: Emit Metrics & Traces<br/>(GraphQL + RPC layers)
 ```
 
-### Key Phases
-
-The ConnectRPC workflow consists of three main phases:
-
-1. **Setup Phase** (API Provider)
-   - Define GraphQL operations as `.graphql` files
-   - Generate Protobuf definitions using `wgc grpc-service generate`
-   - Configure Router with generated proto and operations
-
-2. **Distribution Phase** (Provider → Consumer)
-   - Share `service.proto` via Git, developer portal, or schema registry
-   - Optionally generate and share OpenAPI specifications
-   - Consumers generate type-safe client SDKs in their language
+## Quickstart
 
-3. **Runtime Phase** (Consumer)
-   - Client applications make RPC calls using generated SDKs
-   - Router translates RPC requests to GraphQL operations
-   - Router returns responses in Protobuf format
-   - Full observability across both RPC and GraphQL layers
-
-## Usage Example
+<Note>
+  **Prerequisites**: This quickstart assumes you already have our demo environment up and running with subgraphs and a federated graph. If not, head over to our [Cosmo Cloud Onboarding guide](/getting-started/cosmo-cloud-onboarding#create-the-demo) to get your environment set up first.
+</Note>
 
 ### 1. Define GraphQL Operations
 
-Create a directory for your operations, e.g., `services/`:
+Create a directory to host a collection of your operations, e.g., `services/`. Each file should contain a single named operation.
 
 ```graphql services/GetEmployee.graphql
 query GetEmployee($id: ID!) {
@@ -176,16 +154,16 @@ query GetEmployee($id: ID!) {
 }
 ```
 
-### 2. Generate Protobuf
-
-Run the `wgc grpc-service generate` command with the `--with-operations` flag. You must also provide the schema SDL to validate the operations.
-
 <Note>
-  Each collection of operations represents a distinct Protobuf service. You can organize your operations into different directories (packages) to create multiple services, giving you the flexibility to expose specific subsets of your graph to different consumers or applications.
+  Each directory of operation files represents a distinct Protobuf service. You can organize your operations into different directories (packages) to create multiple services.
 </Note>
+
+### 2. Generate Protobuf
+
+Generate a Protobuf service definition from your GraphQL operations. This command reads your GraphQL schema and operation files, then creates a `.proto` file where each GraphQL operation becomes an RPC method.
 
 <Note>
-  It is recommended to output the generated proto file to the same directory as your operations to keep them together.
+  It is recommended to output the generated proto file to the same directory as your operations to keep them together, because the router will load them together as a bundle.
 </Note>
 
 ```bash
@@ -193,54 +171,33 @@ wgc grpc-service generate \
   --input schema.graphql \
   --output ./services \
   --with-operations ./services \
-  --package-name "myorg.employee.v1" \
-  MyService
+  --package-name "employee.v1" \
+  HRService
 ```
 
-This command generates a `service.proto` file and a `service.proto.lock.json` file in the `./services` directory.
-
-#### Command Options
-
-| Option | Description | Default |
-|--------|-------------|---------|
-| `-i, --input <path>` | **(Required)** GraphQL schema file (SDL) | - |
-| `-w, --with-operations <dir>` | **(Required for ConnectRPC)** Directory containing GraphQL operation files (`.graphql`, `.gql`, `.graphqls`, `.gqls`).<br/>Each operation becomes an RPC method | - |
-| `-o, --output <path>` | Output directory for generated files | `.` |
-| `-p, --package-name <name>` | Proto package name | `service.v1` |
-| `-g, --go-package <name>` | Adds `option go_package` to proto file | - |
-| `-l, --proto-lock <path>` | Path to proto lock file for field number stability | `<output>/service.proto.lock.json` |
-| `--custom-scalar-mapping <mapping>` | Custom scalar mappings as JSON string | - |
-| `--custom-scalar-mapping-file <path>` | Path to JSON file with custom scalar mappings | - |
-| `--max-depth <number>` | Maximum recursion depth for nested selections/fragments | `50` |
-
-#### How It Works
+**What this does:**
+- `--input schema.graphql` - Uses your GraphQL schema to understand types
+- `--output ./services` - Outputs the generated proto file to the services directory
+- `--with-operations ./services` - Reads GraphQL operations from the services directory
+- `--package-name "employee.v1"` - Sets the proto package name
+- `HRService` - Names the generated service
 
-ConnectRPC uses **operations-based generation** (enabled with the `-w` flag):
+This command generates a `service.proto` file and a `service.proto.lock.json` file in the `./services` directory. The lock file ensures forward compatibility; see [Forward Compatibility & Lock Files](#forward-compatibility--lock-files) for details.
 
-- Generates proto from GraphQL operation files (queries, mutations, subscriptions)
-- Each operation becomes an RPC method in the service
-- Operation files are processed recursively from the specified directory
-- Files are sorted alphabetically for deterministic output
-- **Query operations automatically marked with `NO_SIDE_EFFECTS` idempotency level**
-- Subscriptions marked as server streaming RPCs
-- Generates `service.proto` and `service.proto.lock.json`
+<Tip>
+  For a complete list of command options and advanced configuration, see the [CLI documentation](/cli/grpc-service/generate).
+</Tip>
 
-#### Important Requirements
+#### Key Generation Details
 
-<Warning>
-**Operation Naming**: Operation names must be in PascalCase (e.g., `GetEmployeeById`, `UpdateEmployeeMood`). This ensures exact matching between GraphQL operation names and RPC method names.
+- **Query operations** are marked as idempotent (safe to retry) and can use HTTP GET for CDN caching
+- **Mutation operations** require HTTP POST and are not cached
+- **Subscription operations** are generated as server-streaming RPCs (router support coming soon)
+- Operation files are processed alphabetically for deterministic proto generation
 
-- ❌ Invalid: `get_employee`, `GETEMPLOYEE`, `getEmployee`
-- ✅ Valid: `GetEmployee`, `CreatePost`, `OnMessageAdded`
-</Warning>
-
-<Warning>
-**One Operation Per File**: Each `.graphql` file must contain exactly one named operation. This is required for deterministic proto schema generation. Multiple operations in a single file will cause an error.
-</Warning>
-
-<Warning>
-**No Root-Level Aliases**: Field aliases at the root level are not supported as they break proto schema generation consistency (each GraphQL field must map to exactly one proto field name). Aliases on nested fields are allowed.
-</Warning>
+<Note>
+  **Operation Requirements**: Operations must use PascalCase naming (e.g., `GetEmployee`), one operation per file, and no root-level aliases. See [Operation Requirements](#operation-requirements) for details.
+</Note>
 
 ### 3. Configure and Start Router
 
@@ -262,11 +219,43 @@ storage_providers:
       path: "./services"
 ```
 
+<Note>
+  **Docker Deployment**: If running the router in Docker, ensure the `./services` directory is mounted as a volume so the router can access your proto files and operations.
+</Note>
+
 Start the router. It is now ready to accept requests for the operations defined in `service.proto`.
 
-### 4. Generate Client SDK
+**Verify successful loading** by checking the router logs:
 
-Use [buf](https://buf.build/) or `protoc` to generate the client code for your application.
+```
+INFO discovering services {"services_dir": "./services"}
+INFO discovered service {"full_name": "employee.v1.HRService", "package": "employee.v1", "service": "HRService", "dir": "./services", "proto_files": 1, "operation_files": 1}
+INFO service discovery complete {"total_services": 1, "services_dir": "./services"}
+INFO loading operations for service {"service": "employee.v1.HRService", "file_count": 1}
+INFO loaded operations for service {"service": "employee.v1.HRService", "operation_count": 1}
+INFO registering services {"package_count": 1, "service_count": 1, "total_methods": 1}
+INFO services loaded {"packages": 1, "services": 1, "operations": 1, "duration": "45.2ms"}
+INFO starting ConnectRPC server {"listen_addr": "0.0.0.0:8081", "services_dir": "./services", "graphql_endpoint": "http://localhost:3002/graphql"}
+INFO HTTP/2 (h2c) support enabled
+INFO ConnectRPC server ready {"addr": "0.0.0.0:8081"}
+```
+
+If you see these logs, your service is successfully loaded and ready to accept requests.
+
+**Test the API** with a quick curl request:
+
+```bash
+curl -X POST http://localhost:8081/employee.v1.HRService/GetEmployee \
+  -H "Content-Type: application/json" \
+  -H "Connect-Protocol-Version: 1" \
+  -d '{"id": "1"}'
+```
+
+You should receive a JSON response with employee data, confirming your service is working correctly.
+
+### 4. Generate Client Artifacts
+
+Use [buf](https://buf.build/) or `protoc` to generate client SDKs and OpenAPI specifications from your proto file.
 
 #### Understanding buf.gen.yaml
@@ -427,9 +416,46 @@ buf generate services/service.proto
 - **Speakeasy** - Advanced SDK generation with retry logic, pagination, etc.
 - **Stainless** - Production-grade SDK generation
 
-### 5. Use the Client
+### 5. Test the API
 
-You can now use the generated client to call your GraphQL API via the Router. The Router acts as the server implementation for your generated service.
+You can test your ConnectRPC API using various CLI tools. The Router supports multiple protocols with automatic transcoding.
+
+<CodeGroup>
+
+```bash
+# Connect protocol with JSON encoding
+curl -X POST http://localhost:8081/employee.v1.EmployeeService/GetEmployee \
+  -H "Content-Type: application/json" \
+  -H "Connect-Protocol-Version: 1" \
+  -d '{"id": "1"}'
+```
+
+```bash
+# Connect protocol with GET request (enables CDN caching for queries)
+curl --get \
+  --data-urlencode 'encoding=json' \
+  --data-urlencode 'message={"id":"1"}' \
+  http://localhost:8081/employee.v1.EmployeeService/GetEmployee
+```
+
+```bash
+# gRPC protocol with binary protobuf
+grpcurl -plaintext \
+  -proto ./services/service.proto \
+  -d '{"id": "1"}' \
+  localhost:8081 \
+  employee.v1.EmployeeService/GetEmployee
+```
+
+</CodeGroup>
+
+### 6. Use in Your Application
+
+Integrate the generated client SDK into your application code. The examples below show both binary (protobuf) and JSON encoding modes.
 
@@ -438,12 +464,12 @@
 ```typescript
 import { createPromiseClient } from "@connectrpc/connect";
 import { createConnectTransport } from "@connectrpc/connect-web";
 import { EmployeeService } from "./gen/employee/v1/service_connect";
 
-// Create transport
+// Binary mode (default): uses protobuf encoding for efficiency
 const transport = createConnectTransport({
   baseUrl: "http://localhost:8081",
+  // Default: useBinaryFormat: true (binary protobuf)
 });
 
-// Create client
 const client = createPromiseClient(EmployeeService, transport);
 
 // Make request
 const response = await client.getEmployee({
   id: "1",
 });
 
 console.log(response.employee?.details?.forename); // "John"
+
+// --- JSON Mode ---
+// For JSON encoding instead of binary protobuf:
+// const jsonTransport = createConnectTransport({
+//   baseUrl: "http://localhost:8081",
+//   useBinaryFormat: false, // Use JSON encoding
+// });
+// const jsonClient = createPromiseClient(EmployeeService, jsonTransport);
 ```
 
@@ -470,7 +504,7 @@ import (
 )
 
 func main() {
-	// Point to your Cosmo Router's ConnectRPC listener
+	// Binary mode (default): uses protobuf encoding for efficiency
 	client := employeev1connect.NewMyServiceClient(
 		http.DefaultClient,
 		"http://localhost:8081",
 	)
@@ -489,52 +523,39 @@ func main() {
 		resp.Msg.Employee.Details.Forename,
 		resp.Msg.Employee.Details.Surname,
 	)
-}
-```
-
-```swift
-import Connect
-
-let client = ProtocolClient(
-    httpClient: URLSessionHTTPClient(),
-    config: ProtocolClientConfig(
-        host: "http://localhost:8081",
-        networkProtocol: .connect
-    )
-)
-
-let request = Employee_V1_GetEmployeeRequest.with {
-    $0.id = "1"
+
+	// --- JSON Mode ---
+	// For JSON encoding, set Content-Type header:
+	// jsonReq := connect.NewRequest(&employeev1.GetEmployeeRequest{
+	//     Id: "1",
+	// })
+	// jsonReq.Header().Set("Content-Type", "application/json")
+	//
+	// jsonResp, err := client.GetEmployee(context.Background(), jsonReq)
+	// // The Connect library automatically handles JSON encoding/decoding
 }
-
-let response = try await client.getEmployee(request: request)
-print(response.employee.details.forename) // "John"
 ```
 
-```kotlin
-import com.connectrpc.ProtocolClient
-import employee.v1.EmployeeServiceClient
+```python
+from connect import ConnectClient
+from gen.employee.v1 import employee_pb2
 
-val client = ProtocolClient(
-    httpClient = OkHttpClient(),
-    ProtocolClientConfig(
-        host = "http://localhost:8081",
-        serializationStrategy = ProtobufStrategy()
-    )
-)
+# Binary mode (default): uses protobuf encoding
+client = ConnectClient("http://localhost:8081")
 
-val employeeClient = EmployeeServiceClient(client)
+request = employee_pb2.GetEmployeeRequest(id="1")
+response = client.GetEmployee(request)
 
-val request = GetEmployeeRequest.newBuilder()
-    .setId("1")
-    .build()
+print(response.employee.details.forename)  # "John"
 
-val response = employeeClient.getEmployee(request)
-println(response.employee.details.forename) // "John"
+# --- JSON Mode ---
+# For JSON encoding, configure the client:
+# json_client = ConnectClient(
+#     "http://localhost:8081",
+#     use_json=True  # Use JSON encoding instead of binary
+# )
 ```
 
@@ -596,6 +617,53 @@ buf curl --protocol grpcweb \
 - **POST for Mutations**: Mutation operations must use HTTP POST
 - **Flexible Headers**: The `Connect-Protocol-Version` header is optional for Connect protocol requests
 
+## Operation Requirements
+
+When defining GraphQL operations for Connect RPC, you must follow these requirements to ensure successful proto generation and execution:
+
+### Operation Naming
+
+Operation names must be in **PascalCase** (e.g., `GetEmployeeById`, `UpdateEmployeeMood`). This ensures exact matching between GraphQL operation names and RPC method names.
+
+**Examples:**
+- ❌ Invalid: `get_employee`, `GETEMPLOYEE`, `getEmployee`
+- ✅ Valid: `GetEmployee`, `CreatePost`, `OnMessageAdded`
+
+### One Operation Per File
+
+Each `.graphql` file must contain exactly **one named operation**. This is required for deterministic proto schema generation. Multiple operations in a single file will cause an error.
+
+**Example structure:**
+```
+services/
+├── GetEmployee.graphql      # Contains only GetEmployee query
+├── UpdateEmployee.graphql   # Contains only UpdateEmployee mutation
+└── DeleteEmployee.graphql   # Contains only DeleteEmployee mutation
+```
+
+### No Root-Level Aliases
+
+Field aliases at the **root level** are not supported as they break proto schema generation consistency. Each GraphQL field must map to exactly one proto field name.
+
+**Examples:**
+```graphql
+# ❌ Invalid - root-level alias
+query GetEmployee($id: ID!) {
+  emp: employee(id: $id) {  # Alias at root level
+    id
+    name
+  }
+}
+
+# ✅ Valid - no root-level aliases
+query GetEmployee($id: ID!) {
+  employee(id: $id) {
+    id
+    fullName: name  # Nested aliases are allowed
+  }
+}
+```
+
 ## Directory Structure & Organization
 
 The Cosmo Router uses a convention-based directory structure to automatically discover and load Connect RPC services. This approach co-locates proto files with their GraphQL operations for easy management.
@@ -721,16 +789,6 @@ services/
 3. **Consistent Naming**: Use clear, descriptive names for packages and services that reflect their purpose
 4. **Lock File Management**: Always commit `.proto.lock.json` files to version control to maintain field number stability
 
-## Observability
-
-The Cosmo Router provides built-in [observability features](/router/metrics-and-monitoring) that work seamlessly with Connect Client. Because the Router translates RPC calls into GraphQL operations, you get detailed metrics and tracing for both layers.
-
-- **GraphQL Metrics**: Track the performance, error rates, and usage of your underlying GraphQL operations (`GetEmployee`, etc.).
-- **Request Tracing**: Trace the entire flow from the incoming RPC request, through the GraphQL engine, to your subgraphs and back.
-- **Standard Protocols**: Export data using OpenTelemetry (OTLP) or Prometheus to your existing monitoring stack (Grafana, Datadog, Cosmo Cloud, etc.).
-
-Since the Router is aware of the operation mapping, it can attribute metrics correctly to the specific GraphQL operation being executed, giving you full visibility into your client's usage patterns.
-
 ## Forward Compatibility & Lock Files
 
 When you generate your Protobuf definition, the CLI creates a `service.proto.lock.json` file. **You should commit this file to your version control system.**
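
The stability that the lock file protects can be illustrated with a small, self-contained sketch of the Protobuf wire format. This is a simplification for illustration only (single-byte varints, string fields only); `encode_field` and `decode_fields` are hypothetical helpers, not part of `wgc` or the actual lock-file mechanism. The point: decoders look up values by field *number*, so renaming a field is safe, while renumbering silently loses data.

```python
# Why proto field numbers must stay stable across releases: a toy subset of
# the protobuf wire format. Simplified to single-byte varints for brevity
# (valid for field numbers < 16 and payloads < 128 bytes).

def encode_field(field_number: int, value: str) -> bytes:
    key = (field_number << 3) | 2          # wire type 2 = length-delimited
    data = value.encode("utf-8")
    return bytes([key, len(data)]) + data

def decode_fields(payload: bytes) -> dict[int, str]:
    # Decoders index values by field number alone; names never hit the wire.
    fields, i = {}, 0
    while i < len(payload):
        key = payload[i]
        field_number, wire_type = key >> 3, key & 0x7
        assert wire_type == 2, "sketch only handles length-delimited fields"
        length = payload[i + 1]
        fields[field_number] = payload[i + 2 : i + 2 + length].decode("utf-8")
        i += 2 + length
    return fields

# v1 writer: field 1 = forename
payload = encode_field(1, "John")

# A v2 reader that kept field number 1 (rename only) still decodes correctly.
assert decode_fields(payload)[1] == "John"

# A v2 reader expecting the value under a *new* number (2) finds nothing.
assert 2 not in decode_fields(payload)
```

Because wire data is keyed only by number, a regenerated proto with reshuffled numbers would make existing clients read the wrong fields; committing `service.proto.lock.json` keeps the generator from reassigning them.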