diff --git a/docs/cli/grpc-service/generate.mdx b/docs/cli/grpc-service/generate.mdx
index 9a782a0c..07f45b62 100644
--- a/docs/cli/grpc-service/generate.mdx
+++ b/docs/cli/grpc-service/generate.mdx
@@ -10,6 +10,10 @@ description: "Generate a protobuf definition for a gRPC service from a GraphQL s
 
 The `generate` command generates a protobuf definition and mapping file for a gRPC service from a GraphQL schema, which can be used to implement a gRPC service and can be used for the composition.
 
+It supports two modes:
+1. **Schema Mode**: Generates a protobuf definition that mirrors the GraphQL schema. Used for implementing Subgraphs in gRPC (Connect Backend).
+2. **Operations Mode**: Generates a protobuf definition based on a set of GraphQL operations. Used for generating client SDKs (Connect Client).
+
 ## Usage
 
 ```bash
 wgc grpc-service generate [options] [service-name]
@@ -30,6 +34,11 @@
 | `-o, --output ` | The output directory for the protobuf schema | `.` |
 | `-p, --package-name ` | The name of the proto package | `service.v1` |
 | `-g, --go-package ` | Adds an `option go_package` to the proto file | None |
+| `-w, --with-operations ` | Path to directory containing GraphQL operation files. Enables **Operations Mode**. | None |
+| `-l, --proto-lock ` | Path to existing proto lock file. | `/service.proto.lock.json` |
+| `--custom-scalar-mapping ` | Custom scalar type mappings as JSON string. Example: `{"DateTime":"google.protobuf.Timestamp"}` | None |
+| `--custom-scalar-mapping-file ` | Path to JSON file containing custom scalar type mappings. | None |
+| `--max-depth ` | Maximum recursion depth for processing nested selections. | `50` |
 
 ## Description
 
@@ -37,12 +46,23 @@ This command generates a protobuf definition for a gRPC service from a GraphQL s
 
 ## Examples
 
-### Generate a protobuf definition for a gRPC service from a GraphQL schema
+### Generate a protobuf definition for a gRPC service from a GraphQL schema (Schema Mode)
 
 ```bash
 wgc grpc-service generate -i ./schema.graphql -o ./service MyService
 ```
 
+### Generate a protobuf definition from operations (Operations Mode)
+
+```bash
+wgc grpc-service generate \
+  -i ./schema.graphql \
+  -o ./gen \
+  --with-operations ./operations \
+  --package-name my.service.v1 \
+  MyService
+```
+
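+The exact shape of the output depends on your schema and CLI version, but as a rough, hypothetical sketch, an operation file `operations/GetUser.graphql` containing a single `GetUser($id: ID!)` query could be turned into an RPC like this:
+
+```protobuf
+// Hypothetical excerpt of the generated service.proto (names are illustrative).
+service MyService {
+  // Generated from GetUser.graphql; query operations are idempotent.
+  rpc GetUser(GetUserRequest) returns (GetUserResponse) {
+    option idempotency_level = NO_SIDE_EFFECTS;
+  }
+}
+
+message GetUserRequest {
+  string id = 1; // one field per operation variable
+}
+```
+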
 ### Define a custom package name
 
 ```bash
@@ -60,10 +80,10 @@ wgc grpc-service generate -i ./schema.graphql -o ./service MyService --go-packag
 
 The command generates multiple files in the output directory:
 
 - `service.proto`: The protobuf definition for the gRPC service
-- `service.mapping.json`: The mapping file for the gRPC service
+- `service.mapping.json`: The mapping file for the gRPC service (Schema Mode only)
 - `service.proto.lock.json`: The lock file for the protobuf definition
 
-The generated protobuf definition can be used to implement a gRPC service in any language that supports protobuf.
+The generated protobuf definition can be used to implement a gRPC service in any language that supports protobuf, or to generate client SDKs.
 
 The mapping and the protobuf definition is needed for the composition part.
diff --git a/docs/connect/client.mdx b/docs/connect/client.mdx
new file mode 100644
index 00000000..8bb2e1bd
--- /dev/null
+++ b/docs/connect/client.mdx
@@ -0,0 +1,810 @@
+---
+title: "Connect RPC"
+description: "Generate type-safe clients and OpenAPI specs from GraphQL operations"
+icon: "code"
+---
+
+<Note>
+  **Alpha Feature**: The Connect RPC capability is currently in alpha. APIs and functionality may change as we gather
+  feedback.
+</Note>
+
+Connect RPC enables you to generate type-safe gRPC/Connect clients and OpenAPI specifications directly from your GraphQL operations. This allows you to consume your GraphQL API using standard gRPC tooling in any language, or expose REST APIs via OpenAPI without writing manual adapters.
+
+## What is ConnectRPC?
+
+ConnectRPC enables Platform Engineering (platform teams) to distribute GraphQL-backed APIs as governed, versioned API products using protobuf.
+
+## How does ConnectRPC work?
+
+ConnectRPC turns GraphQL APIs into distributable, protocol-agnostic API products by introducing a clear, governed workflow:
+
+1. **Define GraphQL Operations (Trusted Documents)**
+
+API Providers define a fixed set of GraphQL queries and mutations that represent the supported API surface.
+
+2. **Generate Protobuf Service Definitions**
+
+Using the wgc CLI, convert GraphQL operations into protobuf service definitions.
+
+3. **Distribute as Versioned API Products**
+
+Distribute OpenAPI and proto files via Git, your developer portal, or the schema registry of your choice to enable API consumers to generate type-safe clients in their favorite language.
+
+## Architecture & Workflow
+
+### System Context
+
+ConnectRPC enables Platform Engineering teams to expose GraphQL operations as governed, type-safe RPC services that API Providers can distribute and API Consumers can integrate using standard gRPC/Connect tooling.
+
+The following diagram shows the high-level architecture and the three key user types involved in the ConnectRPC workflow:
+
+```mermaid
+graph TB
+    subgraph "Internal Organization"
+        subgraph "Platform Engineering"
+            PE[Platform Engineering]
+            Router[Cosmo Router]
+            Graph[GraphQL API]
+            Portal[Developer Portal/Registry]
+        end
+
+        subgraph "API Providers"
+            Provider[API Providers]
+            CLI[Cosmo CLI]
+            Proto[Proto/OpenAPI Specs]
+            SDK[Pre-generated SDKs]
+        end
+    end
+
+    subgraph "API Consumers"
+        Consumer[API Consumers]
+        Client[Client Application]
+    end
+
+    PE -->|Manages| Router
+    PE -->|Manages| Portal
+    Router -->|Executes| Graph
+    Provider -->|Define Operations| CLI
+    CLI -->|Generate| Proto
+    Provider -->|Configure Operations| Router
+    Proto -.->|Publish| Portal
+    Proto -.->|Optionally Generate| SDK
+    SDK -.->|Publish| Portal
+    Portal -.->|Discover Proto or Download SDK| Consumer
+    Consumer -->|Generate SDK or Use Pre-built| Client
+    Client -->|RPC Requests| Router
+
+    style PE fill:#ffe1f5
+    style Provider fill:#e1f5ff
+    style Consumer fill:#fff4e1
+    style Router fill:#f0e1ff
+    style Graph fill:#e1ffe1
+    style Portal fill:#f0e1ff
+```
+
+#### Three User Types
+
+1. **Platform Engineering** (Internal)
+   - Manages the Cosmo Router infrastructure that executes GraphQL operations
+   - Maintains the GraphQL API
+   - Operates the Developer Portal or Schema Registry where APIs are published
+
+2. **API Providers** (Internal)
+   - Define GraphQL operations as `.graphql` files representing the API surface
+   - Use the Cosmo CLI `wgc` to generate Protobuf and OpenAPI specifications
+   - Configure the Router with their operations
+   - Optionally generate pre-built SDKs for consumers
+   - Publish proto/OpenAPI specs and SDKs to a Portal/Registry
+
+3. **API Consumers** (Internal or External)
+   - Discover available APIs through the Portal/Registry
+   - Choose to either generate their own SDKs from proto/OpenAPI specs or download pre-built SDKs
+   - Integrate the SDK into their client applications
+   - Make RPC requests (Connect/gRPC/gRPC-Web) to the Router
+
+#### Key Workflow
+
+- API Providers define operations and generate specifications using the Cosmo CLI `wgc`
+- Specifications (and optionally SDKs) are published to a Portal/Registry managed by Platform Engineering
+- API Consumers discover and consume these APIs with full type safety
+- All RPC requests flow through the Cosmo Router, which translates them to GraphQL operations
+
+### Request Flow
+
+This diagram shows the complete request flow from client to GraphQL API and back, illustrating how the Cosmo Router acts as a protocol bridge.
+
+```mermaid
+sequenceDiagram
+    autonumber
+    participant Client as Client App<br/>(Generated SDK)
+    participant RPC as ConnectRPC Server
+    participant API as GraphQL API
+
+    Client->>RPC: RPC Request<br/>(Connect/gRPC/gRPC-Web)
+    RPC->>RPC: Validate & Decode Request
+    RPC->>API: Execute GraphQL Operation
+    API-->>RPC: GraphQL Response
+    RPC->>RPC: Encode Response
+    RPC-->>Client: RPC Response
+```
+
+## Quickstart
+
+<Info>
+  **Prerequisites**: This quickstart assumes you already have our demo environment up and running with subgraphs and a federated graph. If not, head over to our [Cosmo Cloud Onboarding guide](/getting-started/cosmo-cloud-onboarding#create-the-demo) to get your environment set up first.
+</Info>
+
+### 1. Define GraphQL Operations
+
+Create a directory to host a collection of your operations, e.g., `services/`. Each file should contain a single named operation.
+
+```graphql services/GetEmployee.graphql
+query GetEmployee($id: ID!) {
+  employee(id: $id) {
+    id
+    details {
+      forename
+      surname
+    }
+  }
+}
+```
+
+<Tip>
+  Each directory of operation files will represent a distinct Protobuf service. You can organize your operations into different directories (packages) to create multiple services.
+</Tip>
+
+### 2. Generate Protobuf
+
+Generate a Protobuf service definition from your GraphQL operations. This command reads your GraphQL schema and operation files, then creates a `.proto` file where each GraphQL operation becomes an RPC method.
+
+<Tip>
+  It is recommended to output the generated proto file to the same directory as your operations, because the router loads them together as a bundle.
+</Tip>
+
+```bash
+wgc grpc-service generate \
+  --input schema.graphql \
+  --output ./services \
+  --with-operations ./services \
+  --package-name "employee.v1" \
+  HRService
+```
+
+**What this does:**
+- `--input schema.graphql` - Uses your GraphQL schema to understand types
+- `--output ./services` - Outputs the generated proto file to the services directory
+- `--with-operations ./services` - Reads GraphQL operations from the services directory
+- `--package-name "employee.v1"` - Sets the proto package name
+- `HRService` - Names the generated service
+
+This command generates a `service.proto` file and a `service.proto.lock.json` file in the `./services` directory. The lock file ensures forward compatibility — see [Forward Compatibility & Lock Files](#forward-compatibility--lock-files) for details.
+
+<Info>
+  For a complete list of command options and advanced configuration, see the [CLI documentation](/cli/grpc-service/generate).
+</Info>
+
+#### Key Generation Details
+
+- **Query operations** are marked as idempotent (safe to retry) and can use HTTP GET for CDN caching (see the sketch below)
+- **Mutation operations** require HTTP POST and are not cached
+- **Subscription operations** are generated as server-streaming RPCs (router support coming soon)
+- Operation files are processed alphabetically for deterministic proto generation
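+
+The exact output varies by CLI version, but to make the operation-to-RPC mapping concrete, the generated `service.proto` for the `GetEmployee` operation above might look roughly like this (a sketch, not verbatim CLI output):
+
+```protobuf
+// Hypothetical excerpt of ./services/service.proto
+syntax = "proto3";
+
+package employee.v1;
+
+service HRService {
+  // Generated from GetEmployee.graphql; queries are marked idempotent,
+  // which is what allows HTTP GET and CDN caching.
+  rpc GetEmployee(GetEmployeeRequest) returns (GetEmployeeResponse) {
+    option idempotency_level = NO_SIDE_EFFECTS;
+  }
+}
+
+message GetEmployeeRequest {
+  string id = 1; // from the $id operation variable
+}
+
+message GetEmployeeResponse {
+  Employee employee = 1; // message shapes mirror the operation's selection set
+}
+```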
+
+<Note>
+  **Operation Requirements**: Operations must use PascalCase naming (e.g., `GetEmployee`), one operation per file, and no root-level aliases. See [Operation Requirements](#operation-requirements) for details.
+</Note>
+
+### 3. Configure and Start Router
+
+Enable the ConnectRPC server in your `config.yaml` and point the services storage provider to the directory containing your generated `service.proto`.
+
+```yaml config.yaml
+# ConnectRPC configuration
+connect_rpc:
+  enabled: true
+  server:
+    listen_addr: "0.0.0.0:8081"
+    services_provider_id: "fs-services"
+
+# Storage providers for services directory
+storage_providers:
+  file_system:
+    - id: "fs-services"
+      # Path to the directory containing your generated service.proto and operations
+      path: "./services"
+```
+
+<Warning>
+  **Docker Deployment**: If running the router in Docker, ensure the `./services` directory is mounted as a volume so the router can access your proto files and operations.
+</Warning>
+
+Start the router. It is now ready to accept requests for the operations defined in `service.proto`.
+
+**Verify successful loading** by checking the router logs:
+
+```
+INFO discovering services {"services_dir": "./services"}
+INFO discovered service {"full_name": "employee.v1.HRService", "package": "employee.v1", "service": "HRService", "dir": "./services", "proto_files": 1, "operation_files": 1}
+INFO service discovery complete {"total_services": 1, "services_dir": "./services"}
+INFO loading operations for service {"service": "employee.v1.HRService", "file_count": 1}
+INFO loaded operations for service {"service": "employee.v1.HRService", "operation_count": 1}
+INFO registering services {"package_count": 1, "service_count": 1, "total_methods": 1}
+INFO services loaded {"packages": 1, "services": 1, "operations": 1, "duration": "45.2ms"}
+INFO starting ConnectRPC server {"listen_addr": "0.0.0.0:8081", "services_dir": "./services", "graphql_endpoint": "http://localhost:3002/graphql"}
+INFO HTTP/2 (h2c) support enabled
+INFO ConnectRPC server ready {"addr": "0.0.0.0:8081"}
+```
+
+If you see these logs, your service is successfully loaded and ready to accept requests.
+
+**Test the API** with a quick curl request:
+
+```bash
+curl -X POST http://localhost:8081/employee.v1.HRService/GetEmployee \
+  -H "Content-Type: application/json" \
+  -H "Connect-Protocol-Version: 1" \
+  -d '{"id": "1"}'
+```
+
+You should receive a JSON response with employee data, confirming your service is working correctly.
+
+### 4. Generate Client Artifacts
+
+Use [buf](https://buf.build/) or `protoc` to generate client SDKs and OpenAPI specifications from your proto file.
+
+#### Understanding buf.gen.yaml
+
+The `buf.gen.yaml` file configures code generation from proto files. Here's what each section does:
+
+- **`version: v2`** - Uses Buf v2 configuration format
+- **`managed.enabled: true`** - Enables managed mode for consistent code generation
+- **`managed.override`** - Customizes generation options (e.g., Go module path prefix)
+- **`plugins`** - Defines code generators to run:
+  - `local` - Uses a locally installed plugin
+  - `remote` - Uses Buf Schema Registry plugins
+  - `out` - Output directory for generated code
+  - `opt` - Plugin-specific options
+
+#### Language-Specific Examples
+
+<Tabs>
+<Tab title="TypeScript">
+
+```bash
+# Install dependencies
+npm install @bufbuild/protoc-gen-es @connectrpc/protoc-gen-connect-es
+
+# Create buf.gen.yaml
+cat > buf.gen.yaml << EOF
+version: v2
+plugins:
+  - local: ./node_modules/.bin/protoc-gen-es
+    out: gen/ts
+    opt: target=ts
+  - local: ./node_modules/.bin/protoc-gen-connect-es
+    out: gen/ts
+    opt: target=ts
+EOF
+
+# Generate
+buf generate services/service.proto
+```
+
+</Tab>
+<Tab title="Go">
+
+```bash
+# Install dependencies
+go install google.golang.org/protobuf/cmd/protoc-gen-go@latest
+go install connectrpc.com/connect/cmd/protoc-gen-connect-go@latest
+
+# Create buf.gen.yaml
+cat > buf.gen.yaml << EOF
+version: v2
+managed:
+  enabled: true
+  override:
+    - file_option: go_package_prefix
+      value: github.com/yourorg/yourproject/gen/go
+plugins:
+  - local: protoc-gen-go
+    out: gen/go
+    opt: paths=source_relative
+  - local: protoc-gen-connect-go
+    out: gen/go
+    opt: paths=source_relative
+EOF
+
+# Generate
+buf generate services/service.proto
+```
+
+</Tab>
+<Tab title="Swift">
+
+```bash
+# Add to Package.swift:
+# .package(url: "https://github.com/apple/swift-protobuf.git", from: "1.25.0")
+# .package(url: "https://github.com/connectrpc/connect-swift.git", from: "0.12.0")
+
+# Create buf.gen.yaml
+cat > buf.gen.yaml << EOF
+version: v2
+plugins:
+  - remote: buf.build/apple/swift
+    out: gen/swift
+  - remote: buf.build/connectrpc/swift
+    out: gen/swift
+EOF
+
+# Generate
+buf generate services/service.proto
+```
+
+</Tab>
+<Tab title="Kotlin">
+
+```bash
+# Create buf.gen.yaml
+cat > buf.gen.yaml << EOF
+version: v2
+plugins:
+  - remote: buf.build/grpc/kotlin
+    out: gen/kotlin
+  - remote: buf.build/connectrpc/kotlin
+    out: gen/kotlin
+EOF
+
+# Generate
+buf generate services/service.proto
+```
+
+</Tab>
+<Tab title="Python">
+
+```bash
+# Install dependencies
+pip install grpcio-tools
+
+# Using grpc_tools.protoc directly
+python -m grpc_tools.protoc -I . \
+  --python_out=gen/python \
+  --grpc_python_out=gen/python \
+  services/service.proto
+
+# Or using buf
+cat > buf.gen.yaml << EOF
+version: v2
+plugins:
+  - remote: buf.build/protocolbuffers/python
+    out: gen/python
+  - remote: buf.build/grpc/python
+    out: gen/python
+EOF
+
+buf generate services/service.proto
+```
+
+</Tab>
+</Tabs>
+
+#### Generate OpenAPI Specification
+
+Generate an OpenAPI 3.0 specification for REST-like documentation and tooling:
+
+```bash
+# Install the OpenAPI plugin
+go install github.com/sudorandom/protoc-gen-connect-openapi@latest
+
+# Create buf.gen.yaml for OpenAPI
+cat > buf.gen.yaml << EOF
+version: v2
+plugins:
+  - local: protoc-gen-connect-openapi
+    out: gen/openapi
+    opt:
+      - format=yaml
+EOF
+
+# Generate
+buf generate services/service.proto
+```
+
+**Use the generated OpenAPI specification with:**
+
+- **Swagger UI** - Interactive API documentation
+- **Postman** - API testing and collaboration
+- **Redoc** - Beautiful API documentation
+- **Speakeasy** - Advanced SDK generation with retry logic, pagination, etc.
+- **Stainless** - Production-grade SDK generation
+
+### 5. Test the API
+
+You can test your ConnectRPC API using various CLI tools. The Router supports multiple protocols with automatic transcoding.
+
+<Tabs>
+<Tab title="Connect (POST)">
+
+```bash
+# Connect protocol with JSON encoding
+curl -X POST http://localhost:8081/employee.v1.HRService/GetEmployee \
+  -H "Content-Type: application/json" \
+  -H "Connect-Protocol-Version: 1" \
+  -d '{"id": "1"}'
+```
+
+</Tab>
+<Tab title="Connect (GET)">
+
+```bash
+# Connect protocol with GET request (enables CDN caching for queries)
+curl --get \
+  --data-urlencode 'encoding=json' \
+  --data-urlencode 'message={"id":"1"}' \
+  http://localhost:8081/employee.v1.HRService/GetEmployee
+```
+
+</Tab>
+<Tab title="gRPC">
+
+```bash
+# gRPC protocol with binary protobuf
+grpcurl -plaintext \
+  -proto ./services/service.proto \
+  -d '{"id": "1"}' \
+  localhost:8081 \
+  employee.v1.HRService/GetEmployee
+```
+
+</Tab>
+</Tabs>
+
+### 6. Use in Your Application
+
+Integrate the generated client SDK into your application code. The examples below show both binary (protobuf) and JSON encoding modes.
+
+<Tabs>
+<Tab title="TypeScript">
+
+```typescript
+import { createPromiseClient } from "@connectrpc/connect";
+import { createConnectTransport } from "@connectrpc/connect-web";
+import { HRService } from "./gen/employee/v1/service_connect";
+
+// The Connect transport defaults to JSON encoding; enable binary
+// protobuf explicitly for smaller payloads.
+const transport = createConnectTransport({
+  baseUrl: "http://localhost:8081",
+  useBinaryFormat: true, // binary protobuf
+});
+
+const client = createPromiseClient(HRService, transport);
+
+// Make request
+const response = await client.getEmployee({
+  id: "1",
+});
+
+console.log(response.employee?.details?.forename); // "John"
+
+// --- JSON Mode ---
+// JSON is the default for the Connect transport; simply omit
+// useBinaryFormat (or set it to false):
+// const jsonTransport = createConnectTransport({
+//   baseUrl: "http://localhost:8081",
+//   useBinaryFormat: false,
+// });
+// const jsonClient = createPromiseClient(HRService, jsonTransport);
+```
+
+</Tab>
+<Tab title="Go">
+
+```go
+package main
+
+import (
+    "context"
+    "log"
+    "net/http"
+
+    "connectrpc.com/connect"
+    employeev1 "github.com/yourorg/yourproject/gen/go/employee/v1"
+    "github.com/yourorg/yourproject/gen/go/employee/v1/employeev1connect"
+)
+
+func main() {
+    // Binary mode (default): connect-go uses the protobuf codec
+    // unless configured otherwise.
+    client := employeev1connect.NewHRServiceClient(
+        http.DefaultClient,
+        "http://localhost:8081",
+    )
+
+    req := connect.NewRequest(&employeev1.GetEmployeeRequest{
+        Id: "1",
+    })
+
+    resp, err := client.GetEmployee(context.Background(), req)
+    if err != nil {
+        log.Fatal(err)
+    }
+
+    log.Printf("Employee: %s %s",
+        resp.Msg.Employee.Details.Forename,
+        resp.Msg.Employee.Details.Surname,
+    )
+
+    // --- JSON Mode ---
+    // For JSON encoding, construct the client with the ProtoJSON codec:
+    // jsonClient := employeev1connect.NewHRServiceClient(
+    //     http.DefaultClient,
+    //     "http://localhost:8081",
+    //     connect.WithProtoJSON(),
+    // )
+}
+```
+
+</Tab>
+<Tab title="Python">
+
+```python
+from connect import ConnectClient
+from gen.employee.v1 import employee_pb2
+
+# Binary mode (default): uses protobuf encoding
+client = ConnectClient("http://localhost:8081")
+
+request = employee_pb2.GetEmployeeRequest(id="1")
+response = client.GetEmployee(request)
+
+print(response.employee.details.forename)  # "John"
+
+# --- JSON Mode ---
+# For JSON encoding, configure the client:
+# json_client = ConnectClient(
+#     "http://localhost:8081",
+#     use_json=True  # Use JSON encoding instead of binary
+# )
+```
+
+</Tab>
+</Tabs>
+
+## Protocol Support
+
+ConnectRPC supports multiple protocols and formats with automatic transcoding, allowing clients to use their preferred protocol while the Router handles translation to GraphQL operations.
+
+### Supported Protocols
+
+#### 1. Connect Protocol with JSON
+
+The Connect protocol is modern, efficient, and works over HTTP/1.1 or HTTP/2. The `Connect-Protocol-Version` header is optional.
+
+**POST Request:**
+
+```bash
+curl -X POST http://localhost:8081/employee.v1.HRService/GetEmployee \
+  -H "Content-Type: application/json" \
+  -H "Connect-Protocol-Version: 1" \
+  -d '{"id": "1"}'
+```
+
+**GET Request (for queries - enables CDN caching):**
+
+```bash
+curl --get \
+  --data-urlencode 'encoding=json' \
+  --data-urlencode 'message={"id":"1"}' \
+  http://localhost:8081/employee.v1.HRService/GetEmployee
+```
+
+#### 2. gRPC Protocol
+
+Binary protobuf over HTTP/2 for maximum efficiency.
+
+```bash
+grpcurl -plaintext \
+  -proto ./services/service.proto \
+  -d '{"id": "1"}' \
+  localhost:8081 \
+  employee.v1.HRService/GetEmployee
+```
+
+#### 3. gRPC-Web Protocol
+
+Browser-compatible gRPC with binary protobuf.
+
+```bash
+buf curl --protocol grpcweb \
+  --schema ./services/service.proto \
+  --data '{"id": "1"}' \
+  http://localhost:8081/employee.v1.HRService/GetEmployee
+```
+
+### Key Features
+
+- **JSON Support**: All protocols support JSON encoding—no binary proto required
+- **GET for Queries**: Query operations can use HTTP GET with `encoding` and `message` query parameters, enabling CDN caching for better performance
+- **POST for Mutations**: Mutation operations must use HTTP POST
+- **Flexible Headers**: The `Connect-Protocol-Version` header is optional for Connect protocol requests
+
+## Operation Requirements
+
+When defining GraphQL operations for Connect RPC, follow these requirements to ensure successful proto generation and execution:
+
+### Operation Naming
+
+Operation names must be in **PascalCase** (e.g., `GetEmployeeById`, `UpdateEmployeeMood`). This ensures exact matching between GraphQL operation names and RPC method names.
+
+**Examples:**
+
+- ❌ Invalid: `get_employee`, `GETEMPLOYEE`, `getEmployee`
+- ✅ Valid: `GetEmployee`, `CreatePost`, `OnMessageAdded`
+
+### One Operation Per File
+
+Each `.graphql` file must contain exactly **one named operation**. This is required for deterministic proto schema generation. Multiple operations in a single file will cause an error.
+
+**Example structure:**
+
+```
+services/
+├── GetEmployee.graphql      # Contains only GetEmployee query
+├── UpdateEmployee.graphql   # Contains only UpdateEmployee mutation
+└── DeleteEmployee.graphql   # Contains only DeleteEmployee mutation
+```
+
+### No Root-Level Aliases
+
+Field aliases at the **root level** are not supported, as they break proto schema generation consistency. Each GraphQL field must map to exactly one proto field name.
+
+**Examples:**
+
+```graphql
+# ❌ Invalid - root-level alias
+query GetEmployee($id: ID!) {
+  emp: employee(id: $id) {   # Alias at root level
+    id
+    name
+  }
+}
+
+# ✅ Valid - no root-level aliases
+query GetEmployee($id: ID!) {
+  employee(id: $id) {
+    id
+    fullName: name   # Nested aliases are allowed
+  }
+}
+```
+
+## Directory Structure & Organization
+
+The Cosmo Router uses a convention-based directory structure to automatically discover and load Connect RPC services. This approach co-locates proto files with their GraphQL operations for easy management.
+
+### Configuration
+
+Configure the router to point to your root services directory using a storage provider:
+
+```yaml config.yaml
+connect_rpc:
+  enabled: true
+  server:
+    listen_addr: "0.0.0.0:8081"
+    services_provider_id: "fs-services"
+
+storage_providers:
+  file_system:
+    - id: "fs-services"
+      path: "./services" # Root services directory
+```
+
+The router will recursively walk the `services` directory and automatically discover all proto files and their associated GraphQL operations.
+
+### Recommended Structure
+
+For organization purposes, we recommend keeping all services in a root `services` directory, with subdirectories for packages and individual services.
+
+#### Single Service per Package
+
+When you have one service per package, you can organize files directly in the package directory:
+
+```
+services/
+└── employee.v1/                 # Package directory
+    ├── employee.proto           # Proto definition
+    ├── employee.proto.lock.json
+    ├── GetEmployee.graphql      # Operation files
+    ├── UpdateEmployee.graphql
+    └── DeleteEmployee.graphql
+```
+
+Or with operations in a subdirectory:
+
+```
+services/
+└── employee.v1/
+    ├── employee.proto
+    ├── employee.proto.lock.json
+    └── operations/
+        ├── GetEmployee.graphql
+        ├── UpdateEmployee.graphql
+        └── DeleteEmployee.graphql
+```
+
+#### Multiple Services per Package
+
+When multiple services share the same proto package, organize them in service subdirectories:
+
+```
+services/
+└── company.v1/                      # Package directory
+    ├── EmployeeService/             # First service
+    │   ├── employee.proto           # package company.v1; service EmployeeService
+    │   ├── employee.proto.lock.json
+    │   └── operations/
+    │       ├── GetEmployee.graphql
+    │       └── UpdateEmployee.graphql
+    └── DepartmentService/           # Second service, same package
+        ├── department.proto         # package company.v1; service DepartmentService
+        ├── department.proto.lock.json
+        └── operations/
+            ├── GetDepartment.graphql
+            └── ListDepartments.graphql
+```
+
+### Flexible Organization
+
+The router determines service identity by proto package declarations, not directory names. This gives you flexibility in organizing your files:
+
+```
+services/
+├── hr-services/
+│   ├── employee.proto       # package company.v1; service EmployeeService
+│   └── GetEmployee.graphql
+└── admin-services/
+    ├── department.proto     # package company.v1; service DepartmentService
+    └── GetDepartment.graphql
+```
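+
+To make this concrete, here is a minimal, hypothetical header for `hr-services/employee.proto`; the router keys routing on the `package` and `service` declarations inside the file, not on the `hr-services` directory name:
+
+```protobuf
+// hr-services/employee.proto (illustrative sketch)
+syntax = "proto3";
+
+package company.v1;       // identity part 1: the proto package...
+
+service EmployeeService { // ...part 2: the service name, giving company.v1.EmployeeService
+  rpc GetEmployee(GetEmployeeRequest) returns (GetEmployeeResponse);
+}
+```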
+
+<Note>
+  **Service Uniqueness**: The combination of proto package name and service name must be unique. For example, you can have multiple services with the same package name (e.g., `company.v1`) as long as they have different service names (`EmployeeService`, `DepartmentService`). The router uses the package + service combination to identify and route requests.
+</Note>
+
+### Discovery Rules
+
+The router follows these rules when discovering services:
+
+1. **Recursive Discovery**: The router recursively walks the services directory to find all `.proto` files
+2. **Proto Association**: Each `.proto` file discovered becomes a service endpoint
+3. **Operation Association**: All `.graphql` files in the same directory (or subdirectories) are associated with the nearest parent `.proto` file
+4. **Nested Proto Limitation**: If a `.proto` file is found in a directory, any `.proto` files in subdirectories are **not** discovered (the parent proto takes precedence)
+
+#### Example: Nested Proto Files
+
+```
+services/
+└── employee.v1/
+    ├── employee.proto             # ✅ Discovered as a service
+    ├── GetEmployee.graphql        # ✅ Associated with employee.proto
+    └── nested/
+        ├── other.proto            # ❌ NOT discovered (parent proto found first)
+        └── UpdateEmployee.graphql # ✅ Still associated with employee.proto
+```
+
+<Warning>
+  **Avoid Nested Proto Files**: Do not place `.proto` files in subdirectories of a directory that already contains a `.proto` file. The nested proto files will not be discovered by the router.
+</Warning>
+
+### Best Practices
+
+1. **Use Semantic Versioning**: Include version numbers in package names (e.g., `employee.v1`, `employee.v2`) to support API evolution
+2. **Co-locate Operations**: Keep GraphQL operations close to their proto definitions for easier maintenance
+3. **Consistent Naming**: Use clear, descriptive names for packages and services that reflect their purpose
+4. **Lock File Management**: Always commit `.proto.lock.json` files to version control to maintain field number stability
+
+## Forward Compatibility & Lock Files
+
+When you generate your Protobuf definition, the CLI creates a `service.proto.lock.json` file. **You should commit this file to your version control system.**
+
+This lock file maintains a history of your operations and their field assignments. When you modify your operations (e.g., add new fields, reorder fields, or add new operations), the CLI uses the lock file to ensure:
+
+1. **Stable Field Numbers**: Existing fields retain their unique Protobuf field numbers, even if you reorder them in the GraphQL query (see the sketch below).
+2. **Safe Evolution**: You can safely evolve your client requirements without breaking existing clients.
+
+This mechanism allows you to iterate on your GraphQL operations—adding data requirements or new features—while maintaining binary compatibility for deployed clients.
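+
+As an illustration (field names and numbers hypothetical): suppose you reorder the selection in `GetEmployee.graphql` and add a new `nickname` field. With the lock file present, regeneration keeps the existing numbers and only appends:
+
+```protobuf
+// Before: details { forename surname }
+message EmployeeDetails {
+  string forename = 1;
+  string surname = 2;
+}
+
+// After: details { surname forename nickname }
+// The lock file pins forename = 1 and surname = 2 despite the reorder;
+// the new field receives the next free number instead of reusing one.
+message EmployeeDetails {
+  string surname = 2;
+  string forename = 1;
+  string nickname = 3;
+}
+```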
+
+## Roadmap
+
+The following features are planned for future releases of Connect Client:
+
+1. **OpenAPI Generation**: Enhanced support for generating OpenAPI specifications including descriptions, summaries, deprecated fields, and tags.
+2. **Subscription Support**: Ability to consume GraphQL subscriptions as gRPC streams, enabling real-time data updates over the Connect protocol.
+3. **Multiple Root Fields**: Support for executable operations containing multiple root selection set fields, allowing more complex queries in a single operation.
+4. **Field Aliases**: Support for GraphQL aliases to control the shape of the API surface, enabling customized field names in the generated Protobuf definitions.
diff --git a/docs/connect/overview.mdx b/docs/connect/overview.mdx
index a4e0a954..001e6621 100644
--- a/docs/connect/overview.mdx
+++ b/docs/connect/overview.mdx
@@ -6,67 +6,70 @@ icon: "circle-info"
 
 ## Cosmo Connect
 
-Cosmo Connect allows you to use GraphQL Federation without requiring backend teams to run GraphQL servers or frameworks.
+Cosmo Connect lets you build and consume federated GraphQL without requiring everyone to learn GraphQL. Backend teams can integrate services using familiar RPC patterns, while frontend teams can use generated, type-safe SDKs instead of writing GraphQL queries.
 
-One of the biggest downsides of Apollo Federation is that backend developers must adopt GraphQL and migrate their existing REST or gRPC services to a Federation-compatible framework. Cosmo Connect solves this problem by compiling GraphQL into gRPC and moving the complexity of the query language into the Router (API Gateway).
-
-How does this work? You define an Apollo-compatible Subgraph Schema, compile it into a protobuf definition, and implement it in your favorite gRPC stack, such as Go, Java, C#, or many others. No specific framework or GraphQL knowledge is required. It is really just gRPC!
+### The Challenge
 
-## Key Benefits
+GraphQL adoption often faces two key barriers:
 
-* **All Cosmo platform benefits** — including breaking change detection, centralized telemetry, and governance out of the box
-* **Federation without GraphQL servers** — backend teams implement gRPC contracts instead of GraphQL resolvers
-* **Language flexibility** — leverage gRPC code generation across nearly all ecosystems, including those with poor GraphQL server libraries
-* **Reduced migration effort** — wrap existing APIs (like REST or SOAP) without writing full subgraphs, lowering the cost of moving from monoliths to federation
-* **Developer experience** — straightforward request/response semantics, with the router handling GraphQL query planning and batching
+1. **Learning curve** - Requiring every developer to learn GraphQL concepts, schema design, and query optimization slows adoption
+2. **Migration complexity** - Rewriting existing backend services to adopt GraphQL-native patterns is rarely practical for mature architectures
 
-## Deployment Models
+### The Solution
+
+Cosmo Connect removes these barriers by letting teams work with familiar tools:
+
+- **Backend teams** can implement federated subgraphs using standard gRPC instead of GraphQL resolvers
+- **Frontend teams** can use type-safe RPC clients instead of writing GraphQL queries
+- **The Router** handles all GraphQL complexity as a protocol mediation layer
+
+## What You Can Build
+
+<CardGroup cols={3}>
+  <Card title="Connect RPC" href="/connect/client">
+    Generate type-safe SDKs from GraphQL operations.
+  </Card>
+  <Card title="gRPC Services" href="/connect/grpc-services">
+    Build federated subgraphs using standard gRPC instead of GraphQL resolvers.
+  </Card>
+  <Card title="Router Plugins" href="/connect/plugins">
+    Extend the router with gRPC services that run as managed local processes.
+  </Card>
+</CardGroup>
+
+## Architecture
 
 ```mermaid
 graph LR
-    client["Clients (Web / Mobile / Server)"] --> routerCore["Router Core"]
+    client["GraphQL Clients<br/>(Web / Mobile)"] --> routerCore["Router Core"]
(Web / Mobile)"] --> routerCore["Router Core"] + connectClient["Connect Clients
(Generated SDKs)"] --> routerCore subgraph routerBox["Cosmo Router"] routerCore - plugin["Router Plugin
(Cosmo Connect)"] + plugin["Router Plugin"] end routerCore --> subA["GraphQL Subgraph"] - routerCore --> grpcSvc["gRPC Service
(Cosmo Connect)"] + routerCore --> grpcSvc["gRPC Service"] - grpcSvc --> restA["REST / HTTP APIs"] - grpcSvc --> soapA["Databases"] - - plugin --> restA - plugin --> soapA + grpcSvc --> backend["Backend Systems"] + plugin --> backend %% Styling classDef grpcFill fill:#ea4899,stroke:#ea4899,stroke-width:1.5px,color:#ffffff; classDef pluginFill fill:#ea4899,stroke:#ea4899,stroke-width:1.5px,color:#ffffff; + classDef clientFill fill:#3b82f6,stroke:#3b82f6,stroke-width:1.5px,color:#ffffff; + class grpcSvc grpcFill; class plugin pluginFill; + class connectClient clientFill; ``` -Cosmo Connect supports two ways to integrate gRPC into your federated graph: - -- **[Router Plugins](/connect/plugins)** — run as local processes managed by the router. Ideal for simple deployments where you want the lowest latency and do not need separate CI/CD or scaling. -- **[gRPC Services](/connect/grpc-services)** — independent deployments in any language. Suitable when you need full lifecycle control, team ownership boundaries, and independent scaling. - -Both approaches remove the need to build GraphQL servers while maintaining the benefits of federation. - -## Implementation Docs - -The following documentation explains how to build and deploy services and plugins: - -- **[Router Plugins](/router/gRPC/plugins)** — Documentation for developing, configuring, and deploying plugins that run inside the router -- **[gRPC Services](/router/gRPC/grpc-services)** — Documentation for the complete lifecycle of building, deploying, and managing independent gRPC services - -These docs assume you're familiar with the concepts above and are ready to implement your first service or plugin. +The diagram shows how Cosmo Connect integrates with your existing architecture. Traditional GraphQL clients and Connect-generated clients both communicate with the Router, which federates queries across GraphQL subgraphs, gRPC services, and router plugins. ## Getting Started -The following tutorials walk you through step-by-step examples of building your first integration. -Unlike the implementation docs, which cover the full technical reference, these focus on quick setup and hands-on learning: +Choose your path based on what you want to build: diff --git a/docs/docs.json b/docs/docs.json index 2abfffc7..14ffb877 100644 --- a/docs/docs.json +++ b/docs/docs.json @@ -68,6 +68,7 @@ "group": "Cosmo Connect", "pages": [ "connect/overview", + "connect/client", "connect/plugins", "connect/grpc-services" ]