
Conversation

@jherr (Contributor) commented Dec 13, 2025

🎯 Changes

Adds a migration doc to help folks migrate from the first version of the API to the proposed newer version.

✅ Checklist

  • I have followed the steps in the Contributing guide.
  • I have tested this code locally with pnpm run test:pr.

🚀 Release Impact

  • This change affects published code, and I have generated a changeset.
  • This change is docs/CI/dev-only (no release).

Summary by CodeRabbit

Release Notes

  • New Features

    • Unified ai() API for all AI interactions, replacing chat()
    • Tree-shakeable modular adapters for optimized bundle sizes
    • Image generation, text-to-speech, transcription, and experimental video generation capabilities
    • Structured output support with Zod schema validation
    • Non-streaming mode support
  • Breaking Changes

    • chat() renamed to ai()
    • Adapter naming updated: openai() → openaiText(), anthropic() → anthropicText(), etc.
    • Type renames: ChatOptions → TextOptions
  • Documentation

    • New migration guide for upgrading to the updated API


@jherr jherr requested a review from a team December 13, 2025 17:39
coderabbitai bot commented Dec 13, 2025

Walkthrough

This pull request refactors TanStack AI from monolithic adapters to tree-shakeable modular adapters, renaming the primary entry point from chat() to ai(), renaming types from Chat* to Text*, and introducing specialized adapters for embeddings, image generation, text-to-speech, transcription, video, and summarization across OpenAI, Anthropic, Gemini, and Ollama providers. Comprehensive documentation and examples are updated to reflect the new unified API.
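For orientation, here is a minimal before/after sketch of that rename, based on the API surface described in this PR (the option names follow the examples used elsewhere in this review; treat it as illustrative rather than exact):

// Before: monolithic adapter + chat() (legacy entry point)
import { chat } from '@tanstack/ai'
import { openai } from '@tanstack/ai-openai'

const legacyStream = chat({
  adapter: openai(),
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello!' }],
})

// After: tree-shakeable text adapter + unified ai()
import { ai } from '@tanstack/ai'
import { openaiText } from '@tanstack/ai-openai'

const stream = ai({
  adapter: openaiText(),
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello!' }],
})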

Changes

  • **Changeset & Core Type Renames** (.changeset/two-bikes-kneel.md, docs/config.json): Records adapter refactoring for tree shaking; updates navigation to include migration guide and new guides; renames public API references (chat → text, ChatOptions → TextOptions, ChatCompletionChunk → TextCompletionChunk, ChatStreamOptions* → TextStreamOptions*)
  • **Primary API & Documentation** (docs/api/ai.md, README.md, docs/getting-started/*, docs/protocol/*): Renames primary entry point from chat() to ai(); updates imports and usage across all documentation; introduces toServerSentEventsStream and toStreamResponse as public exports; documents unified ai() workflow with adapter-specific variants
  • **Adapter Documentation** (docs/adapters/anthropic.md, docs/adapters/gemini.md, docs/adapters/ollama.md, docs/adapters/openai.md): Replaces monolithic adapter patterns with tree-shakeable variants (anthropicText, anthropicSummarize, etc.); documents new model identifiers, provider options, and specialized adapters (embed, image, TTS, summarization); expands API references and configuration examples
  • **New Guides** (docs/guides/image-generation.md, docs/guides/text-to-speech.md, docs/guides/transcription.md, docs/guides/video-generation.md, docs/guides/tree-shakeable-adapters.md, docs/guides/migration-guide.md): New documentation covering image generation, TTS, transcription, and video generation capabilities; comprehensive migration guide from legacy chat()/embedding()/summarize() to unified ai() with tree-shakeable adapters; tree-shakeable adapters overview
  • **Guide Updates** (docs/guides/agentic-cycle.md, docs/guides/client-tools.md, docs/guides/multimodal-content.md, docs/guides/per-model-type-safety.md, docs/guides/server-tools.md, docs/guides/streaming.md, docs/guides/tool-*.md): All guide examples updated to use ai() instead of chat(); adapter imports changed to new text variants (openaiText, anthropicText, etc.); multimodal examples refactored to use adapter factories
  • **Reference Documentation** (docs/reference/classes/BaseAdapter.md, docs/reference/functions/text.md, docs/reference/functions/textOptions.md, docs/reference/interfaces/AIAdapter.md, docs/reference/interfaces/TextCompletionChunk.md, docs/reference/interfaces/TextOptions.md, docs/reference/type-aliases/TextStreamOptions*.md, docs/reference/index.md): Updates parameter/return types from ChatOptions to TextOptions; renames interface and type references; documents new unified reference surface
  • **OpenAI Adapters** (packages/typescript/ai-openai/src/adapters/text.ts, packages/typescript/ai-openai/src/adapters/embed.ts, packages/typescript/ai-openai/src/adapters/image.ts, packages/typescript/ai-openai/src/adapters/summarize.ts, packages/typescript/ai-openai/src/adapters/tts.ts, packages/typescript/ai-openai/src/adapters/transcription.ts, packages/typescript/ai-openai/src/adapters/video.ts): New modular adapters for text, embedding, image generation, summarization, TTS, transcription, and video with corresponding factory functions (openaiText, createOpenaiText, openaiEmbed, etc.)
  • **OpenAI Utilities & Configuration** (packages/typescript/ai-openai/src/utils/client.ts, packages/typescript/ai-openai/src/model-meta.ts, packages/typescript/ai-openai/src/image/image-provider-options.ts, packages/typescript/ai-openai/src/audio/tts-provider-options.ts, packages/typescript/ai-openai/src/audio/transcription-provider-options.ts, packages/typescript/ai-openai/src/tools/function-tool.ts, packages/typescript/ai-openai/src/openai-adapter.ts): Adds client factory, model metadata, comprehensive provider options for image/TTS/transcription; updates legacy adapter to use TextOptions; enables new video/transcription/TTS/image models
  • **OpenAI Exports & Tests** (packages/typescript/ai-openai/src/index.ts, packages/typescript/ai-openai/package.json): Exports all new adapters and types as tree-shakeable modules; adds zod to peerDependencies; deprecates legacy monolithic adapter
  • **Anthropic Adapters** (packages/typescript/ai-anthropic/src/adapters/text.ts, packages/typescript/ai-anthropic/src/adapters/summarize.ts): New text and summarization adapters with streaming, structured output, and tool support; comprehensive message and content conversion utilities
  • **Anthropic Utilities & Configuration** (packages/typescript/ai-anthropic/src/utils/client.ts, packages/typescript/ai-anthropic/src/utils/schema-converter.ts, packages/typescript/ai-anthropic/src/tools/custom-tool.ts, packages/typescript/ai-anthropic/src/anthropic-adapter.ts): Adds Anthropic client factory and schema converter; updates tool converter to use Anthropic-specific schema; legacy adapter now uses TextOptions
  • **Anthropic Exports** (packages/typescript/ai-anthropic/src/index.ts, packages/typescript/ai-anthropic/package.json): Exports new text and summarize adapters as tree-shakeable modules; adds zod to peerDependencies
  • **Gemini Adapters** (packages/typescript/ai-gemini/src/adapters/text.ts, packages/typescript/ai-gemini/src/adapters/embed.ts, packages/typescript/ai-gemini/src/adapters/image.ts, packages/typescript/ai-gemini/src/adapters/summarize.ts, packages/typescript/ai-gemini/src/adapters/tts.ts): New modular adapters for text, embedding, image generation, summarization, and TTS with corresponding factories
  • **Gemini Utilities & Configuration** (packages/typescript/ai-gemini/src/utils/client.ts, packages/typescript/ai-gemini/src/utils/schema-converter.ts, packages/typescript/ai-gemini/src/image/image-provider-options.ts, packages/typescript/ai-gemini/src/tools/tool-converter.ts, packages/typescript/ai-gemini/src/model-meta.ts, packages/typescript/ai-gemini/src/gemini-adapter.ts): Adds Gemini client factory, schema converter, comprehensive image provider options; enables image/TTS models; legacy adapter uses TextOptions
  • **Gemini Exports & Tests** (packages/typescript/ai-gemini/src/index.ts, packages/typescript/ai-gemini/package.json, packages/typescript/ai-gemini/tests/*): Exports all new adapters as tree-shakeable modules; adds zod to peerDependencies; tests updated to use new adapters and ai()
  • **Ollama Adapters** (packages/typescript/ai-ollama/src/adapters/text.ts, packages/typescript/ai-ollama/src/adapters/embed.ts, packages/typescript/ai-ollama/src/adapters/summarize.ts): New modular text, embedding, and summarization adapters with corresponding factories
  • **Ollama Utilities & Configuration** (packages/typescript/ai-ollama/src/utils/client.ts, packages/typescript/ai-ollama/src/utils/schema-converter.ts, packages/typescript/ai-ollama/src/ollama-adapter.ts): Adds Ollama client factory and schema converter; legacy adapter uses TextOptions; uses Ollama-specific schema conversion
  • **Ollama Exports** (packages/typescript/ai-ollama/src/index.ts, packages/typescript/ai-ollama/package.json): Exports new text, embed, and summarize adapters as tree-shakeable modules; moves zod from dependencies to peerDependencies
  • **Client & DevTools** (packages/typescript/ai-client/src/chat-client.ts, packages/typescript/ai-devtools/src/store/ai-context.tsx, packages/typescript/ai-devtools/vite.config.ts): Adds post-stream action queue to chat client; updates event channels from chat:* to text:* in devtools; adds type cast for Solid plugin
  • **Example Applications** (examples/ts-group-chat/chat-server/capnweb-rpc.ts, examples/ts-group-chat/chat-server/chat-logic.ts, examples/ts-group-chat/chat-server/claude-service.ts, examples/ts-react-chat/src/routes/api.tanchat.ts, examples/ts-solid-chat/src/routes/api.chat.ts, examples/ts-svelte-chat/src/routes/api/chat/+server.ts, examples/ts-vue-chat/vite.config.ts, examples/README.md): All examples updated to use new adapter factories (openaiText, anthropicText, etc.) and ai() instead of chat(); imports and configurations refactored; type normalizations applied
  • **Example Dependency Updates** (examples/*/package.json, packages/typescript/ai-client/package.json, packages/typescript/ai-devtools/package.json, package.json): Minor version bumps for vite (^7.2.4 → ^7.2.7), TanStack packages, React, and UI libraries across all example applications

Estimated code review effort

🎯 5 (Critical) | ⏱️ ~120 minutes

Areas requiring extra attention:

  • New adapter implementations (packages/typescript/ai-*/src/adapters/*.ts): Complex streaming logic, schema conversion, message formatting, and provider option handling across OpenAI, Anthropic, Gemini, and Ollama
  • Type system changes across core and all adapter packages (migration from ChatOptions → TextOptions and related type renames)
  • Schema converter utilities for each provider (convertZodTo*Schema functions) and their integration into tool conversion and structured output flows
  • Post-stream action queue in ai-client for managing delayed continuation logic
  • Event channel rename in devtools from chat:* to text:* and impact on tracking/state
  • Documentation migration completeness: Ensure all code examples and API references accurately reflect the new unified entry point and adapter exports
  • Deprecation handling: Legacy monolithic adapter exports remain but marked for deprecation; verify backward compatibility where appropriate

Poem

πŸ‡ Tree-shakeable whiskers and modular hops,
One ai() to rule them, no more adapter crops!
Text, embed, image, voice in separate bundles so tight,
Hopping through schemas, each provider just right. 🌳✨

Pre-merge checks and finishing touches

❌ Failed checks (1 inconclusive)

  • Title check (❓ Inconclusive): The title 'Ai migration doc' is vague and generic, lacking specificity about the migration scope, nature, or main change details. Resolution: improve title clarity, e.g. 'Add migration guide for API v1 to v2 transition', to clearly convey the purpose and scope.

✅ Passed checks (2 passed)

  • Description check (✅ Passed): The PR description follows the template with checked items and provides context, though the 'Changes' section is brief; the checklist is complete and release impact is clear.
  • Docstring Coverage (✅ Passed): Docstring coverage is 97.22%, which is sufficient; the required threshold is 80.00%.
✨ Finishing touches
  • πŸ“ Generate docstrings
πŸ§ͺ Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch ai-migration-doc

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

nx-cloud bot commented Dec 13, 2025

View your CI Pipeline Execution ↗ for commit a0c983c

  • `nx affected --targets=test:sherif,test:knip,tes...` ✅ Succeeded in 7s (View ↗)
  • `nx run-many --targets=build --exclude=examples/**` ✅ Succeeded in 1s (View ↗)

☁️ Nx Cloud last updated this comment at 2025-12-13 17:40:53 UTC

pkg-pr-new bot commented Dec 13, 2025

Open in StackBlitz

@tanstack/ai

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai@146

@tanstack/ai-anthropic

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-anthropic@146

@tanstack/ai-client

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-client@146

@tanstack/ai-devtools-core

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-devtools-core@146

@tanstack/ai-gemini

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-gemini@146

@tanstack/ai-ollama

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-ollama@146

@tanstack/ai-openai

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-openai@146

@tanstack/ai-react

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-react@146

@tanstack/ai-react-ui

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-react-ui@146

@tanstack/ai-solid

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-solid@146

@tanstack/ai-solid-ui

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-solid-ui@146

@tanstack/ai-svelte

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-svelte@146

@tanstack/ai-vue

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-vue@146

@tanstack/ai-vue-ui

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-vue-ui@146

@tanstack/react-ai-devtools

npm i https://pkg.pr.new/TanStack/ai/@tanstack/react-ai-devtools@146

@tanstack/solid-ai-devtools

npm i https://pkg.pr.new/TanStack/ai/@tanstack/solid-ai-devtools@146

commit: a0c983c

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 5

Note

Due to the large number of review comments, Critical severity comments were prioritized as inline comments.

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (12)
packages/typescript/ai-devtools/src/store/ai-context.tsx (1)

1312-1384: Address remaining chat:* event references across the codebase.

The migration from chat:* to text:* events is incomplete. While the code in this file has been correctly migrated to use text:started, text:completed, and text:iteration, the following files still reference the old chat:* namespace and should be updated for consistency:

  • testing/panel/src/lib/recording.ts: Lines 161, 322 use chat:started and chat:completed
  • packages/typescript/ai/tests/ai-text.test.ts: References to chat:started and chat:iteration in test cases
  • docs/guides/observability.md: Line 25 example uses chat:started

Update these references to use the text:* event namespace to maintain consistency across the codebase.

examples/ts-vue-chat/vite.config.ts (1)

232-251: Abort upstream work on client disconnect to avoid leaking in-flight AI requests.

The AbortController is created and passed to ai() and toStreamResponse(), but it's never aborted when the HTTP client disconnects. This allows the AI provider call to continue running unnecessarily, wasting resources.

             const abortController = new AbortController()
+            req.on('close', () => abortController.abort())
+            req.on('abort', () => abortController.abort())

             const stream = ai({
               adapter: adapter as any,

Also remove the as any type casts for adapter and selectedModel. The TanStack AI ai() function supports proper generic type inference without requiring type assertions:

             const stream = ai({
-              adapter: adapter as any,
-              model: selectedModel as any,
+              adapter: adapter,
+              model: selectedModel,
               tools: [

If the conditionally-assigned adapter makes inference difficult, use a type union or proper type guards instead of as any.

examples/ts-react-chat/src/routes/api.tanchat.ts (1)

100-115: Refactor to enable compile-time type safety for adapter and model selection.

The as any casts on lines 101–102 completely bypass TypeScript's type checking. The ai() function requires concrete generic types—specifically a literal model string type paired with the specific adapter type—which cannot be satisfied when both vary at runtime.

Move the ai() call into each switch case so that the adapter and model are hardcoded per provider, allowing TypeScript to properly infer AIOptionsFor and enforce type safety:

switch (provider) {
  case 'anthropic':
    return toStreamResponse(
      ai({
        adapter: anthropicText(),
        model: 'claude-sonnet-4-5',
        tools: [...],
        // ... rest of options
      }),
      { abortController }
    )
  case 'gemini':
    return toStreamResponse(
      ai({
        adapter: geminiText(),
        model: 'gemini-2.0-flash-exp',
        tools: [...],
        // ... rest of options
      }),
      { abortController }
    )
  // ... other cases
}

This pattern provides per-model type safety as required by the coding guidelines: "Use type-safe per-model configuration with provider options typed based on selected model to ensure compile-time safety."

packages/typescript/ai-gemini/src/model-meta.ts (1)

224-606: Remove IMAGEN_3 model definition—it was deprecated and retired on November 10, 2025.

Model identifiers verified against current Gemini API documentation (December 2025):

  • Gemini 2.5 Flash variants: gemini-2.5-flash-image, gemini-2.5-flash-lite, gemini-2.5-flash-lite-preview-09-2025 ✓
  • Gemini 2.0 Flash variants: gemini-2.0-flash, gemini-2.0-flash-preview-image-generation, gemini-2.0-flash-lite ✓
  • Imagen 4 variants: imagen-4.0-generate-001, imagen-4.0-ultra-generate-001, imagen-4.0-fast-generate-001 ✓

However, IMAGEN_3 (imagen-3.0-generate-002) is no longer available—Google deprecated it and recommends using Imagen 4 instead. The model definition should be removed from active code.

examples/README.md (1)

385-390: Inconsistent API usage - complete the migration.

This code example still uses chat({ instead of ai({, which is inconsistent with the rest of the migration in this file (lines 311, 314, 363) and the PR's objectives.

Apply this diff to complete the migration:

-const stream = chat({
+const stream = ai({
   adapter: openai(),
   model: 'gpt-4o',
   messages,
   tools: [weatherTool], // SDK executes these automatically
 })
docs/guides/tools.md (1)

282-288: Fix invalid snippet: duplicate tools key in object literal.
As written, this won’t compile and readers won’t know which form to use.

 ai({
   adapter: openaiText(),
   messages,
-  tools: [addToCartDef], // Client will execute, or
-  tools: [addToCartServer], // Server will execute
+  // Client will execute (definition-only):
+  // tools: [addToCartDef],
+
+  // Server will execute (server impl):
+  tools: [addToCartServer],
 });
docs/getting-started/quick-start.md (1)

181-194: Incomplete ai() call in tool example may confuse users.

The example shows ai({ tools: [getProducts] }) without the required adapter, model, and messages parameters. Consider showing a complete call or adding a comment indicating this is a partial snippet.

 const getProducts = getProductsDef.server(async ({ query }) => {
   return await db.products.search(query)
 })

-ai({ tools: [getProducts] })
+// Use with the ai() function
+ai({
+  adapter: openaiText(),
+  model: 'gpt-4o',
+  messages: [{ role: 'user', content: 'Find products' }],
+  tools: [getProducts],
+})
packages/typescript/ai-anthropic/src/anthropic-adapter.ts (1)

83-96: Wire TextOptions.abortController into the SDK request signal (or document precedence).

After switching to TextOptions, consumers will reasonably expect abortController to work. Consider:

-      const stream = await this.client.beta.messages.create(
+      const signal =
+        options.abortController?.signal ??
+        (options.request instanceof Request ? options.request.signal : options.request?.signal)
+
+      const stream = await this.client.beta.messages.create(
         { ...requestParams, stream: true },
         {
-          signal: options.request?.signal,
+          signal,
           headers: options.request?.headers,
         },
       )
docs/adapters/anthropic.md (1)

89-117: Docs: messages is undefined in the “With Tools” example.
The snippet won’t run as-is; add a minimal const messages = [...] (or show request parsing like the earlier POST example).
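A minimal sketch of what that could look like (the message content here is made up for illustration; alternatively, parse it from the request body as in the earlier POST example):

// Minimal conversation so the "With Tools" snippet is runnable as-is
const messages = [
  { role: 'user' as const, content: 'What is the weather in Paris right now?' },
]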

docs/api/ai.md (3)

18-30: Docs: missing maxIterations import in the main ai() example.
Copy/paste will fail. Add it to the import line.

-import { ai } from "@tanstack/ai";
+import { ai, maxIterations } from "@tanstack/ai";

64-75: Docs: summarization style values don’t match the actual type.
SummarizationOptions.style is 'bullet-points' | 'paragraph' | 'concise' (no "detailed"). (Based on the provided type snippet.)

-- `style?` - Summary style ("concise" | "detailed")
+- `style?` - Summary style ("concise" | "bullet-points" | "paragraph")

101-144: Align toolDefinition examples to consistent Zod + .server()/.client() pattern.

The getWeather example at lines 334–359 uses { parameters, execute }, contradicting the toolDefinition() pattern shown at lines 101–144 (inputSchema, outputSchema, .server()/.client()). Update the weatherTool example to match the documented pattern:

-const weatherTool = toolDefinition({
-  name: "getWeather",
-  description: "Get the current weather for a city",
-  parameters: {
-    city: { type: "string", description: "City name" },
-  },
-  async execute({ city }) {
-    return { temperature: 72, condition: "Sunny" };
-  },
-});
+import { z } from "zod";
+
+const weatherToolDef = toolDefinition({
+  name: "getWeather",
+  description: "Get the current weather for a city",
+  inputSchema: z.object({
+    city: z.string(),
+  }),
+  outputSchema: z.object({
+    temperature: z.number(),
+    condition: z.string(),
+  }),
+});
+
+const weatherTool = weatherToolDef.server(async ({ city }) => {
+  return { temperature: 72, condition: "Sunny" };
+});

Also clarify the distinction between passing a ToolDefinition (schema only) versus toolDef.server(...) (executable on server).

♻️ Duplicate comments (2)
packages/typescript/ai-openai/package.json (1)

43-52: Verify dual listing of @tanstack/ai in both dependencies and peerDependencies.

Similar to the Anthropic adapter package, this package lists @tanstack/ai in both dependencies (line 43) and peerDependencies (line 51). Clarify whether this dual listing is intentional or if @tanstack/ai should be exclusively in one section.

packages/typescript/ai-ollama/package.json (1)

44-53: Verify dual listing of @tanstack/ai in both dependencies and peerDependencies.

Consistent with the Anthropic and OpenAI adapter packages, this package lists @tanstack/ai in both dependencies (line 44) and peerDependencies (line 52). This pattern appears across all adapter packages in this PR, suggesting it may be intentional, but it warrants verification.

🟠 Major comments (25)
examples/ts-react-chat/src/routes/api.tanchat.ts-116-116 (1)

116-116: Stream type cast to any is unnecessary with proper adapter typing.

The stream returned from ai() is cast to any before passing to toStreamResponse(). When the text adapter is properly typed, ai() returns AsyncIterable<StreamChunk> by default (matching toStreamResponse()'s parameter type exactly). This cast is a workaround for the adapter being cast to any on line 102 and would be eliminated by fixing the root adapter type issue.

packages/typescript/ai-ollama/src/adapters/embed.ts-40-73 (1)

40-73: OllamaEmbedAdapterOptions.host is unused (misleading public API)

Right now host in OllamaEmbedAdapterOptions is never read; only hostOrClient affects the client. Either remove host from the options type, or honor it when hostOrClient isn’t provided.

Proposed fix (honor options.host):

 export class OllamaEmbedAdapter implements EmbeddingAdapter<
   typeof OllamaEmbeddingModels,
   OllamaEmbedProviderOptions
 > {
@@
   constructor(
     hostOrClient?: string | Ollama,
     options: OllamaEmbedAdapterOptions = {},
   ) {
     if (typeof hostOrClient === 'string' || hostOrClient === undefined) {
-      this.client = createOllamaClient({ host: hostOrClient })
+      const host = typeof hostOrClient === 'string' ? hostOrClient : options.host
+      this.client = createOllamaClient({ host })
     } else {
       this.client = hostOrClient
     }
     this.defaultModel = options.model ?? 'nomic-embed-text'
   }
@@
 export function createOllamaEmbed(
   host?: string,
   options?: OllamaEmbedAdapterOptions,
 ): OllamaEmbedAdapter {
   return new OllamaEmbedAdapter(host, options)
 }

Also applies to: 114-129

packages/typescript/ai-ollama/src/adapters/text.ts-100-131 (1)

100-131: OllamaTextAdapterOptions.host is unused (misleading public API)
Same as the other adapters: either remove host from options or honor it when hostOrClient is undefined.

Also applies to: 391-406

packages/typescript/ai-ollama/src/adapters/summarize.ts-55-88 (1)

55-88: OllamaSummarizeAdapterOptions.host is unused (misleading public API)
Same issue as embed.ts: either remove host from options or honor it when hostOrClient is undefined.

Also applies to: 152-167

packages/typescript/ai-ollama/src/adapters/text.ts-189-281 (1)

189-281: Tool call index is not provided by ollama's streamed responses; derive it deterministically from array position

Ollama's OpenAI-compatible streamed tool-call responses do not include the index field on function, making the unsafe cast to { index: number } unreliable. This breaks consumers that depend on ordering or reconciliation. Use the loop index when iterating tool calls instead.

-      const handleToolCall = (toolCall: ToolCall): StreamChunk => {
+      const handleToolCall = (toolCall: ToolCall, index: number): StreamChunk => {
         const actualToolCall = toolCall as ToolCall & {
           id: string
-          function: { index: number }
+          function: { index?: number }
         }
         return {
@@
-          index: actualToolCall.function.index,
+          index,
         }
       }
@@
         if (chunk.message.tool_calls && chunk.message.tool_calls.length > 0) {
-          for (const toolCall of chunk.message.tool_calls) {
-            yield handleToolCall(toolCall)
+          for (const [i, toolCall] of chunk.message.tool_calls.entries()) {
+            yield handleToolCall(toolCall, i)
             hasEmittedToolCalls = true
           }
@@
       if (chunk.message.tool_calls && chunk.message.tool_calls.length > 0) {
-        for (const toolCall of chunk.message.tool_calls) {
-          yield handleToolCall(toolCall)
+        for (const [i, toolCall] of chunk.message.tool_calls.entries()) {
+          yield handleToolCall(toolCall, i)
           hasEmittedToolCalls = true
         }
       }
packages/typescript/ai-ollama/src/adapters/summarize.ts-90-118 (1)

90-118: Provider options interface is unused; SummarizationOptions framework limitation prevents implementation

OllamaSummarizeProviderOptions is defined but cannot be applied because SummarizationOptions (from @tanstack/ai) does not include a providerOptions field—unlike TextOptions which supports generic, type-safe provider options. The summarize() method hardcodes temperature: 0.3 and num_predict, leaving model tuning parameters like num_ctx, top_p, and repeat_penalty inaccessible to callers.

To fix this, extend SummarizationOptions to include providerOptions similar to TextOptions, then update the adapter to extract and merge them:

  async summarize(options: SummarizationOptions): Promise<SummarizationResult> {
    const model = options.model || this.defaultModel
    const prompt = this.buildSummarizationPrompt(options)
+   const providerOptions = options.providerOptions as
+     | OllamaSummarizeProviderOptions
+     | undefined

    const response = await this.client.generate({
      model,
      prompt,
      options: {
        temperature: 0.3,
        num_predict: options.maxLength ?? 500,
+       ...providerOptions,
      },
      stream: false,
    })

Committable suggestion skipped: line range outside the PR's diff.

packages/typescript/ai-ollama/src/adapters/text.ts-147-187 (1)

147-187: structuredOutput() should validate with Zod after JSON parsing

The method converts the outputSchema to JSON Schema and passes it to Ollama's format parameter (correct), but then only calls JSON.parse() without validating the result against the original schema. This means callers can receive structurally-invalid data even though they provided a schema.

Add outputSchema.safeParse() after JSON parsing to ensure the response matches the intended schema:

      let parsed: unknown
      try {
        parsed = JSON.parse(rawText)
      } catch {
        throw new Error(
          `Failed to parse structured output as JSON. Content: ${rawText.slice(0, 200)}${rawText.length > 200 ? '...' : ''}`,
        )
      }
+     // Runtime validation with Zod
+     const validated = outputSchema.safeParse(parsed)
+     if (!validated.success) {
+       throw new Error(
+         `Structured output did not match schema: ${validated.error.message}`,
+       )
+     }

      return {
-       data: parsed,
+       data: validated.data,
         rawText,
       }
packages/typescript/ai-ollama/src/adapters/embed.ts-75-108 (1)

75-108: Pass provider options through embeddings request + use generateId for IDs

The createEmbeddings method ignores options.providerOptions and uses Date.now() for ID generation, creating two issues:

  1. Callers cannot pass provider-specific options (e.g., num_gpu, num_thread) since they're not forwarded to the Ollama embeddings request.
  2. id: `embed-${Date.now()}` can collide under concurrency; other adapters use `generateId()` instead.

Import generateId and apply:

 import {
   createOllamaClient,
   estimateTokens,
   getOllamaHostFromEnv,
+  generateId,
 } from '../utils'
   async createEmbeddings(options: EmbeddingOptions): Promise<EmbeddingResult> {
     const model = options.model || this.defaultModel
+    const providerOptions = options.providerOptions as
+      | OllamaEmbedProviderOptions
+      | undefined
 
     for (const input of inputs) {
       const response = await this.client.embeddings({
         model,
         prompt: input,
+        options: providerOptions,
       })
     return {
-      id: `embed-${Date.now()}`,
+      id: generateId('emb'),
       model,
       embeddings,

Committable suggestion skipped: line range outside the PR's diff.

packages/typescript/ai-openai/src/audio/tts-provider-options.ts-1-31 (1)

1-31: Remove "verse" voice option and clarify the instructions parameter limitation.

The voice "verse" is not available in the current OpenAI TTS API and should be removed. The instructions parameter comment is incomplete—while it correctly notes that instructions do not work with tts-1 or tts-1-hd, it should clarify that instructions are supported by gpt-4o-mini-tts. The audio formats listed (mp3, opus, aac, flac, wav, pcm) are correct.
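A sketch of what the corrected options could look like; the voice list and field names are illustrative assumptions to verify against the current OpenAI SDK, not the package's actual definitions:

// Sketch: 'verse' removed from the voice union; instructions constraint documented
export type OpenAITTSVoice =
  | 'alloy' | 'ash' | 'coral' | 'echo' | 'fable'
  | 'nova' | 'onyx' | 'sage' | 'shimmer'

export interface OpenAITTSProviderOptionsSketch {
  voice?: OpenAITTSVoice
  /** Output container: mp3, opus, aac, flac, wav, or pcm. */
  responseFormat?: 'mp3' | 'opus' | 'aac' | 'flac' | 'wav' | 'pcm'
  /**
   * Voice guidance (tone, pacing). Supported by gpt-4o-mini-tts;
   * not supported by tts-1 or tts-1-hd.
   */
  instructions?: string
  /** Playback speed multiplier, 0.25–4.0. */
  speed?: number
}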

packages/typescript/ai-gemini/src/utils/schema-converter.ts-8-15 (1)

8-15: Remove reliance on internal Zod API schema.def.type for object detection; use JSON Schema structure instead.
The schema.def.type property is not part of Zod's public API and may be undefined or removed in future versions, making this converter fragile at runtime. The JSON Schema structure-based checks (checking for properties key, empty objects) are more stable alternatives.

-    const isZodObject =
-      typeof schema === 'object' &&
-      'def' in schema &&
-      schema.def.type === 'object'
-
-    if (isZodObject && !result.type) {
-      result.type = 'object'
-    }
-
-    if (Object.keys(result).length === 0) {
+    if (Object.keys(result).length === 0) {
       result.type = 'object'
     }
 
     if ('properties' in result && !result.type) {
       result.type = 'object'
     }

Also applies to: packages/typescript/ai-ollama/src/utils/schema-converter.ts, packages/typescript/ai-anthropic/src/utils/schema-converter.ts, packages/typescript/ai-openai/src/utils/schema-converter.ts

packages/typescript/ai-gemini/src/utils/client.ts-38-43 (1)

38-43: Extract duplicated generateId to a shared utility.

The generateId function is duplicated across multiple providers (Anthropic, OpenAI, Ollama, and now Gemini). This violates the DRY principle and creates maintenance overhead.

Consider extracting this to a shared utility package (e.g., @tanstack/ai-utils or within the core @tanstack/ai package) that all provider packages can import. This ensures consistent ID generation and reduces code duplication.

Example location: packages/typescript/ai-utils/src/id-generator.ts or packages/typescript/ai/src/utils/id-generator.ts

// Shared utility
export function generateId(prefix: string): string {
  return `${prefix}-${Date.now()}-${Math.random().toString(36).substring(7)}`
}

Then import in each provider:

+import { generateId } from '@tanstack/ai-utils'
-export function generateId(prefix: string): string {
-  return `${prefix}-${Date.now()}-${Math.random().toString(36).substring(7)}`
-}

Based on learnings, tree-shakeable architecture is preferred, so ensure the shared utility is properly exported.

packages/typescript/ai-openai/src/adapters/image.ts-59-92 (1)

59-92: Don’t let providerOptions override validated request fields (n, size, etc.).

Right now ...providerOptions is last, so a caller can bypass validation by supplying conflicting values inside providerOptions. Prefer applying providerOptions first, then writing the normalized/validated fields last.

   private buildRequest(
     options: ImageGenerationOptions<OpenAIImageProviderOptions>,
   ): OpenAI_SDK.Images.ImageGenerateParams {
     const { model, prompt, numberOfImages, size, providerOptions } = options

     return {
-      model,
-      prompt,
-      n: numberOfImages ?? 1,
-      size: size as OpenAI_SDK.Images.ImageGenerateParams['size'],
-      ...providerOptions,
+      ...providerOptions,
+      model,
+      prompt,
+      n: numberOfImages ?? 1,
+      size: size as OpenAI_SDK.Images.ImageGenerateParams['size'],
     }
   }
packages/typescript/ai-gemini/src/adapters/tts.ts-73-85 (1)

73-85: Either support or explicitly reject voice / format / speed from TTSOptions.
Right now they’re silently ignored, which is confusing. Consider throwing if provided, or document “Gemini TTS ignores these” in JSDoc.
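One way to make that explicit, sketched with the generic option names mentioned above (the error wording and the providerOptions suggestion are illustrative):

// Sketch: fail fast on generic TTS options the Gemini adapter does not honor
function assertGeminiTTSOptions(options: {
  voice?: string
  format?: string
  speed?: number
}): void {
  const ignored = (['voice', 'format', 'speed'] as const).filter(
    (key) => options[key] !== undefined,
  )
  if (ignored.length > 0) {
    throw new Error(
      `Gemini TTS does not support: ${ignored.join(', ')}. ` +
        'Configure voices through providerOptions instead.',
    )
  }
}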

packages/typescript/ai-anthropic/src/adapters/text.ts-130-167 (1)

130-167: Use client.beta.messages.create() for structured output instead of non-beta client.messages.create().

Tool-based structured outputs and tool_choice are beta-only features in the Anthropic SDK. The streaming implementation correctly uses client.beta.messages.create(), but the structuredOutput() method calls the non-beta client.messages.create(), which does not have full tool support. Change lines 155–162 to use this.client.beta.messages.create() to ensure both endpoints use the same compatible API contract.

packages/typescript/ai-anthropic/src/adapters/text.ts-341-426 (1)

341-426: Add explicit guard for role: 'tool' messages without toolCallId.

Currently, a role: 'tool' message without toolCallId falls through to the generic fallback and is converted to { role: 'user', ... }, corrupting the conversation. Although the framework never creates such messages, there is no explicit validation. Add an error throw or explicit log to reject this invalid state rather than silently mishandling it.
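A minimal sketch of such a guard (field names follow the message shape used elsewhere in this review):

// Sketch: reject tool messages that cannot be linked back to a tool call
function assertToolMessageHasCallId(msg: {
  role: string
  toolCallId?: string
}): void {
  if (msg.role === 'tool' && !msg.toolCallId) {
    throw new Error(
      'Invalid message: role "tool" requires a toolCallId referencing the originating tool call.',
    )
  }
}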

packages/typescript/ai-anthropic/src/adapters/text.ts-504-522 (1)

504-522: Add block type check to content_block_stop handling to prevent duplicate empty-args emissions.

The current code yields empty arguments when a tool call has no input, but doesn't verify that the stopped block is actually a tool_use block. A subsequent content_block_stop event (for text or thinking blocks) will reference the same currentToolIndex, potentially re-emitting the empty args for the last tool block. Use the event's block type or index to ensure only actual tool_use block completions trigger this emission.
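A sketch of the index-to-block-type bookkeeping this implies (event field names follow Anthropic's streaming events; verify against the SDK types):

// Sketch: record each block's type at content_block_start so that
// content_block_stop only finalizes tool_use blocks
type BlockEvent =
  | { type: 'content_block_start'; index: number; content_block: { type: string } }
  | { type: 'content_block_stop'; index: number }

function createToolUseStopCheck() {
  const blockTypesByIndex = new Map<number, string>()
  return (event: BlockEvent): boolean => {
    if (event.type === 'content_block_start') {
      blockTypesByIndex.set(event.index, event.content_block.type)
      return false
    }
    // Only a stop for a tool_use block should emit the empty-arguments chunk
    return blockTypesByIndex.get(event.index) === 'tool_use'
  }
}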

packages/typescript/ai-anthropic/src/adapters/text.ts-482-502 (1)

482-502: Emit accumulated tool arguments instead of delta fragments to match content/thinking pattern and prevent data loss.

During input_json_delta events, you accumulate into existing.input but yield only event.delta.partial_json. This inconsistency causes consumers to lose non-empty tool call arguments—unlike content and thinking streams which emit both delta and accumulated state. The diff correctly changes arguments to use existing.input for consistency and completeness.

           } else if (event.delta.type === 'input_json_delta') {
             const existing = toolCallsMap.get(currentToolIndex)
             if (existing) {
               existing.input += event.delta.partial_json

               yield {
                 type: 'tool_call',
                 id: genId(),
                 model: model,
                 timestamp,
                 toolCall: {
                   id: existing.id,
                   type: 'function',
                   function: {
                     name: existing.name,
-                    arguments: event.delta.partial_json,
+                    arguments: existing.input,
                   },
                 },
                 index: currentToolIndex,
               }
             }
           }
packages/typescript/ai-openai/src/adapters/embed.ts-22-27 (1)

22-27: OpenAIEmbedProviderOptions are defined but never forwarded to the OpenAI API.
The encodingFormat and user options have no effect unless extracted from the options parameter and passed to the embeddings request as encoding_format and user. Additionally, consider using OPENAI_EMBEDDING_MODELS[0] instead of hardcoding 'text-embedding-ada-002' for consistency with the model definitions.

 export class OpenAIEmbedAdapter extends BaseEmbeddingAdapter<
   typeof OPENAI_EMBEDDING_MODELS,
   OpenAIEmbedProviderOptions
 > {
   async createEmbeddings(options: EmbeddingOptions): Promise<EmbeddingResult> {
+    const providerOptions = (options as EmbeddingOptions & {
+      providerOptions?: OpenAIEmbedProviderOptions
+    }).providerOptions
+
     const response = await this.client.embeddings.create({
-      model: options.model || 'text-embedding-ada-002',
+      model: options.model || OPENAI_EMBEDDING_MODELS[0],
       input: options.input,
       dimensions: options.dimensions,
+      encoding_format: providerOptions?.encodingFormat,
+      user: providerOptions?.user,
     })

Also applies to lines 50–55.

packages/typescript/ai-gemini/src/adapters/embed.ts-20-29 (1)

20-29: Wire GeminiEmbedProviderOptions into embedContent request and throw on missing embeddings.

The adapter defines GeminiEmbedProviderOptions with taskType, title, and outputDimensionality but never uses them in the API request. Additionally, if response.embeddings?.[0]?.values is missing, the embedding is silently dropped, causing the returned embeddings array to have fewer entries than the input array (index misalignment).

Fix this by:

  1. Extracting providerOptions from options
  2. Passing taskType, title, and outputDimensionality to the embedContent config
  3. Throwing an error instead of silently dropping missing embeddings
  async createEmbeddings(options: EmbeddingOptions): Promise<EmbeddingResult> {
    const model = options.model || this.defaultModel
+   const providerOptions = (options as any).providerOptions as
+     | GeminiEmbedProviderOptions
+     | undefined

    // Ensure input is an array
    const inputs = Array.isArray(options.input)
      ? options.input
      : [options.input]

    const embeddings: Array<Array<number>> = []

-   for (const input of inputs) {
+   for (const [idx, input] of inputs.entries()) {
      const response = await this.client.models.embedContent({
        model,
        contents: [{ role: 'user', parts: [{ text: input }] }],
        config: {
-         outputDimensionality: options.dimensions,
+         outputDimensionality:
+           providerOptions?.outputDimensionality ?? (options as any).dimensions,
+         taskType: providerOptions?.taskType,
+         title: providerOptions?.title,
        },
      })

-     if (response.embeddings?.[0]?.values) {
-       embeddings.push(response.embeddings[0].values)
-     }
+     const values = response.embeddings?.[0]?.values
+     if (!values) {
+       throw new Error(
+         `Gemini embedContent returned no embedding values (model=${model}, index=${idx})`,
+       )
+     }
+     embeddings.push(values)
    }

Committable suggestion skipped: line range outside the PR's diff.

packages/typescript/ai-openai/src/adapters/video.ts-78-112 (1)

78-112: Validate the actual size you send (providerOptions.size can bypass validation).

createVideoJob() validates validateVideoSize(model, size) where size = options.size, but buildRequest() will fall back to providerOptions.size when options.size is undefined. That means an invalid providerOptions.size can slip through and hit the API.

Suggested fix:

- const { model, size, duration, providerOptions } = options
+ const { model, size, duration, providerOptions } = options
+ const resolvedSize = size ?? providerOptions?.size

- validateVideoSize(model, size)
+ validateVideoSize(model, resolvedSize)

(And use resolvedSize consistently when building the request.)

Also applies to: 275-301

packages/typescript/ai-gemini/src/adapters/text.ts-205-255 (1)

205-255: Use nextToolIndex to generate unique tool call IDs.

When the same function is called multiple times in a stream, `const toolCallId = functionCall.name` creates collisions in `toolCallMap`, causing separate tool calls to merge their arguments. Use a stable synthetic ID instead, e.g. `const toolCallId = ${functionCall.name}-${nextToolIndex++}` (or any monotonic counter), to ensure each call gets a unique identifier and preserves its own arguments.

packages/typescript/ai-openai/src/adapters/video.ts-92-110 (1)

92-110: Heavy reliance on any for OpenAI “videos” API needs a compatibility gate and clearer failure modes.

Right now the adapter assumes client.videos.create/retrieve/... exist and have certain response shapes. If the SDK differs, this becomes runtime-unsafe. Consider feature-detecting client.videos once (constructor) and throwing a clear “SDK does not support videos in this version” error before runtime calls.

Also applies to: 132-154, 171-272
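A sketch of the suggested gate; whether client.videos exists at all is exactly the assumption being checked, so the error wording is illustrative:

import type OpenAI from 'openai'

// Sketch: detect the videos surface once and fail with a clear message
function assertVideosSupported(client: OpenAI): void {
  const maybeVideos = (client as unknown as { videos?: unknown }).videos
  if (maybeVideos === undefined) {
    throw new Error(
      'The installed "openai" SDK does not expose a videos API in this version; ' +
        'upgrade the openai package before using the video adapter.',
    )
  }
}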

packages/typescript/ai-gemini/src/adapters/summarize.ts-26-40 (1)

26-40: Provider options interface defined but non-functional due to missing framework support.

GeminiSummarizeProviderOptions defines generationConfig and safetySettings, but SummarizationOptions (the method parameter type) has no providerOptions field to pass them through. Additionally, the generateContent() call at lines 86-99 only applies systemInstruction to config, ignoring temperature, topP, topK, maxOutputTokens, stopSequences, and safety settings entirely.

Requires two-part fix: (1) extend SummarizationOptions in the framework to support providerOptions, and (2) map options.providerOptions.generationConfig.* and safetySettings into the SDK config object.

packages/typescript/ai-gemini/src/adapters/text.ts-382-439 (1)

382-439: Store function name with tool call results for proper Gemini API attribution.

functionResponse: { name: msg.toolCallId, ... } violates Gemini's API contract. According to the Gemini API specification, functionResponse.name must be the function's declared name (e.g., "get_current_weather"), not the toolCallId. Tool results won't be correctly attributed to their function calls. The ModelMessage structure for tool role messages needs to preserve the function name from the original tool call, or the adapter must have a way to map toolCallId back to the function name.
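A self-contained sketch of that mapping, assuming the assistant message that issued the tool call is present in the same messages array (field names follow the OpenAI-style tool call shape used elsewhere in this review):

// Sketch: resolve a toolCallId back to the declared function name for functionResponse.name
type SketchMessage = {
  role: 'user' | 'assistant' | 'tool'
  toolCallId?: string
  toolCalls?: Array<{ id: string; function: { name: string } }>
}

function functionNameForToolResult(
  messages: Array<SketchMessage>,
  toolCallId: string,
): string {
  for (const msg of messages) {
    for (const call of msg.toolCalls ?? []) {
      if (call.id === toolCallId) return call.function.name
    }
  }
  // Falling back to the ID keeps the request well-formed, but should not normally happen
  return toolCallId
}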

packages/typescript/ai-gemini/src/image/image-provider-options.ts-160-174 (1)

160-174: Aspect ratio mappings appear incorrect for some sizes.

Several mappings don't match the actual aspect ratios:

  • '1536x1024': '3:4' — 1536/1024 ≈ 1.5, which is closer to 3:2 (landscape), not 3:4 (portrait).
  • '1024x1536': '4:3' — 1024/1536 ≈ 0.67, which is 2:3 (portrait), not 4:3 (landscape).

The comments acknowledge these are "approximate mappings," but users relying on these mappings may get unexpected results.

 export const GEMINI_SIZE_TO_ASPECT_RATIO: Record<string, GeminiAspectRatio> = {
   // Square
   '1024x1024': '1:1',
   '512x512': '1:1',
   // Landscape
   '1024x768': '4:3',
-  '1536x1024': '3:4', // Actually this is portrait, but matching common dimensions
+  '1536x1024': '3:4', // Note: 1536x1024 is actually ~3:2, mapped to closest supported ratio
   '1792x1024': '16:9',
   '1920x1080': '16:9',
   // Portrait
   '768x1024': '3:4',
-  '1024x1536': '4:3', // Inverted
+  '1024x1536': '3:4', // Portrait dimensions, using 3:4 portrait ratio
   '1024x1792': '9:16',
   '1080x1920': '9:16',
 }
🟡 Minor comments (15)
packages/typescript/ai-devtools/src/store/ai-context.tsx-1309-1309 (1)

1309-1309: Update comment to reflect new event namespace.

The comment references "Chat Events" but the event listeners below use the text:* namespace. Update the comment for consistency.

Apply this diff:

-    // ============= Chat Events (for usage tracking) =============
+    // ============= Text Events (for usage tracking) =============
examples/ts-vue-chat/vite.config.ts-5-10 (1)

5-10: Consolidate the duplicate @tanstack/ai import.

The @tanstack/ai package is imported twiceβ€”once on line 5 and again on line 10. Move toolDefinition to the first import statement:

-import { ai, maxIterations, toStreamResponse } from '@tanstack/ai'
+import { ai, maxIterations, toStreamResponse, toolDefinition } from '@tanstack/ai'
 import { openaiText } from '@tanstack/ai-openai'
 import { anthropicText } from '@tanstack/ai-anthropic'
 import { geminiText } from '@tanstack/ai-gemini'
 import { ollamaText } from '@tanstack/ai-ollama'
-import { toolDefinition } from '@tanstack/ai'
packages/typescript/ai-ollama/src/utils/client.ts-39-41 (1)

39-41: Potential for short/empty random suffix in ID generation.

Math.random().toString(36).substring(7) can produce strings of varying length (0-6 characters) depending on the random value. For more consistent IDs, consider using substring(2, 9) or a fixed-length approach.

 export function generateId(prefix: string = 'msg'): string {
-  return `${prefix}-${Date.now()}-${Math.random().toString(36).substring(7)}`
+  return `${prefix}-${Date.now()}-${Math.random().toString(36).substring(2, 9)}`
 }
packages/typescript/ai-ollama/src/adapters/summarize.ts-120-146 (1)

120-146: maxLength mixes “words” in the prompt with “tokens” in num_predict
The prompt says “under N words”, but you treat maxLength as tokens for num_predict. Either (a) rename semantics in the prompt to “tokens”, or (b) keep it as words and translate to an approximate token budget.
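If option (b) is chosen, a rough word-to-token translation could look like this (the ~1.3 tokens-per-word ratio is a ballpark assumption, not a measured constant):

// Sketch: keep "under N words" in the prompt, but hand Ollama a token budget
function wordsToTokenBudget(maxWords: number): number {
  // English prose averages very roughly ~1.3 tokens per word; pad slightly for safety
  return Math.ceil(maxWords * 1.3) + 16
}

// e.g. num_predict: wordsToTokenBudget(options.maxLength ?? 500)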

packages/typescript/ai-ollama/src/adapters/text.ts-283-386 (1)

283-386: Add explanatory comments for image source handling or implement explicit format conversion

The current code treats data and non-data image sources identically by pushing source.value directly. While the ollama-js SDK accepts multiple image formats (base64, URLs, file paths, bytes) and handles conversion internally, the previous implementation included clarifying comments that "URL-based images [are] not directly supported" and "Ollama may need the image to be fetched externally." These comments should be restored to document the current behavior and any limitations, or the code should explicitly handle format conversion for consistency with how other adapters differentiate source types.

docs/adapters/ollama.md-126-126 (1)

126-126: Fix hyphenation in compound modifier.

The phrase should use a hyphen: "tool-calling support" instead of "tool calling support" when used as a compound adjective modifying "support."

Apply this diff:

-**Note:** Tool support varies by model. Models like `llama3`, `mistral`, and `qwen2` generally have good tool calling support.
+**Note:** Tool support varies by model. Models like `llama3`, `mistral`, and `qwen2` generally have good tool-calling support.
packages/typescript/ai-gemini/src/tools/tool-converter.ts-79-82 (1)

79-82: Wrap variable declaration in block to prevent leakage.

The jsonSchema variable is declared in a switch case without a surrounding block. This can cause the variable to be accessible in other case clauses, leading to unexpected behavior.

Apply this diff to wrap the declaration in a block:

       default:
+        {
         // Collect function declarations to group together
         // Description is required for Gemini function declarations
         if (!tool.description) {
           throw new Error(
             `Tool ${tool.name} requires a description for Gemini adapter`,
           )
         }

         // Convert Zod schema to Gemini-compatible JSON Schema
         const jsonSchema = tool.inputSchema
           ? convertZodToGeminiSchema(tool.inputSchema)
           : { type: 'object', properties: {}, required: [] }

         functionDeclarations.push({
           name: tool.name,
           description: tool.description,
           parameters: jsonSchema,
         })
         break
+        }

Based on static analysis hints.

docs/reference/type-aliases/TextStreamOptionsForModel.md-14-16 (1)

14-16: Update comment to reflect renamed type.

Line 14 still refers to "Chat options" but should be "Text options" to match the renamed type TextStreamOptionsForModel.

Apply this diff:

-Chat options constrained by a specific model's capabilities.
+Text options constrained by a specific model's capabilities.
docs/config.json-189-196 (1)

189-196: Consider adding redirects/aliases for renamed function reference routes (chat→text).
If the docs site supports redirects, keeping legacy routes working avoids breaking deep links from older blog posts / READMEs.

docs/config.json-484-490 (1)

484-490: Same redirect/alias concern for renamed type-alias pages (ChatStream→TextStream).
Worth preserving legacy URLs if feasible.

packages/typescript/ai-openai/src/adapters/tts.ts-71-79 (1)

71-79: Ensure instructions is only passed to compatible models.
The instructions field is supported in SpeechCreateParams, but only works with gpt-4o-mini-tts; passing it to tts-1 or tts-1-hd will fail at runtime. Validate the model before spreading providerOptions or document this constraint.
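A sketch of validating the model before spreading providerOptions (the single supported model is the constraint stated above; confirm against the SDK):

// Sketch: strip (or reject) `instructions` for models that do not accept it
function sanitizeTTSProviderOptions<T extends { instructions?: string }>(
  model: string,
  providerOptions: T,
): T {
  if (model !== 'gpt-4o-mini-tts' && providerOptions.instructions !== undefined) {
    // Alternative: throw new Error(`"instructions" is not supported by ${model}`)
    const { instructions: _ignored, ...rest } = providerOptions
    return rest as T
  }
  return providerOptions
}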

packages/typescript/ai-openai/src/adapters/summarize.ts-57-66 (1)

57-66: Handle empty choices explicitly (avoid silently returning '').
Returning '' can mask upstream failures; consider throwing when choices[0] is missing.
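A sketch of that guard (the type path follows the openai SDK's namespace types; the error wording is illustrative):

import type OpenAI from 'openai'

// Sketch: throw instead of silently returning an empty summary
function extractSummaryText(
  response: OpenAI.Chat.Completions.ChatCompletion,
): string {
  const choice = response.choices[0]
  const content = choice?.message.content
  if (!content) {
    throw new Error(
      `OpenAI returned no summary content (finish_reason=${choice?.finish_reason ?? 'unknown'})`,
    )
  }
  return content
}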

packages/typescript/ai-anthropic/src/adapters/summarize.ts-55-68 (1)

55-68: Guard against missing response.usage / unexpected content blocks.
If the SDK can omit usage (or include non-text blocks), consider null-safe access + clearer erroring when summary ends up empty unexpectedly.
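A null-safe sketch along those lines (response field names per the Anthropic SDK; treat the exact type names as assumptions):

import type Anthropic from '@anthropic-ai/sdk'

// Sketch: keep only text blocks and fail loudly when nothing usable came back
function extractSummary(response: Anthropic.Message): {
  summary: string
  inputTokens: number
  outputTokens: number
} {
  const summary = response.content
    .filter((block): block is Anthropic.TextBlock => block.type === 'text')
    .map((block) => block.text)
    .join('')

  if (!summary) {
    throw new Error('Anthropic returned no text content for the summarization request')
  }

  return {
    summary,
    inputTokens: response.usage?.input_tokens ?? 0,
    outputTokens: response.usage?.output_tokens ?? 0,
  }
}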

packages/typescript/ai-gemini/src/image/image-provider-options.ts-208-226 (1)

208-226: Comment/implementation mismatch for image count limits.

The comment states "Imagen models support 1-8 images per request (varies by model)" but the implementation hardcodes maxImages = 4. The model parameter is accepted but not used for model-specific validation.

Consider either:

  1. Using the model parameter to apply model-specific limits, or
  2. Updating the comment to reflect the actual hardcoded limit.
 export function validateNumberOfImages(
   model: string,
   numberOfImages: number | undefined,
 ): void {
   if (numberOfImages === undefined) return

-  // Most Imagen models support 1-4 images, some support up to 8
-  const maxImages = 4
+  // All Imagen models currently support up to 4 images per request
+  const maxImages = 4 // TODO: Implement model-specific limits if needed
   if (numberOfImages < 1 || numberOfImages > maxImages) {
packages/typescript/ai-openai/src/adapters/text.ts-226-226 (1)

226-226: Dead code: eventTypeCounts map is never populated.

The eventTypeCounts map is declared at line 226 but never updated during stream processing. The error handler at lines 479-486 logs it, but it will always be empty.

Either populate the map during processing:

       for await (const chunk of stream) {
         chunkCount++
+        eventTypeCounts.set(chunk.type, (eventTypeCounts.get(chunk.type) || 0) + 1)

Or remove the unused declaration:

-    const eventTypeCounts = new Map<string, number>()

Committable suggestion skipped: line range outside the PR's diff.

Comment on lines 58 to 67
## Available Models

### Chat Models

- `claude-sonnet-4-5-20250929` - Claude Sonnet 4.5 (balanced)
- `claude-opus-4-5-20251101` - Claude Opus 4.5 (most capable)
- `claude-haiku-4-0-20250514` - Claude Haiku 4.0 (fastest)
- `claude-3-5-sonnet-20241022` - Claude 3.5 Sonnet
- `claude-3-opus-20240229` - Claude 3 Opus


⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

fd model-meta.ts

Repository: TanStack/ai

Length of output: 201


🏁 Script executed:

cat packages/typescript/ai-anthropic/src/model-meta.ts

Repository: TanStack/ai

Length of output: 12900


🏁 Script executed:

rg "claude-sonnet-4-5-20250929|claude-opus-4-5-20251101|claude-haiku-4-0-20250514|claude-3-5-sonnet-20241022|claude-3-opus-20240229" --type ts --type md

Repository: TanStack/ai

Length of output: 4378


Update ANTHROPIC_MODELS in packages/typescript/ai-anthropic/src/model-meta.ts to match versioned model IDs used throughout the codebase.

The model IDs in docs/adapters/anthropic.md are correct and used consistently across examples and testing, but the model-meta.ts constants have incorrect IDs that don't match actual Anthropic API model identifiers:

  • claude-sonnet-4-5 should be claude-sonnet-4-5-20250929
  • claude-opus-4-5 should be claude-opus-4-5-20251101
  • claude-haiku-4-5 should be claude-haiku-4-0-20250514
  • claude-3-7-sonnet should be claude-3-5-sonnet-20241022
  • claude-opus-4 should be claude-3-opus-20240229

Correct the model ID constants to match the actual versioned IDs used in the codebase.

🤖 Prompt for AI Agents
In packages/typescript/ai-anthropic/src/model-meta.ts around lines 58 to 67, the
ANTHROPIC_MODELS constants use unversioned/incorrect IDs; update each constant
to the versioned model IDs used in the repo/docs: replace claude-sonnet-4-5 →
claude-sonnet-4-5-20250929, claude-opus-4-5 → claude-opus-4-5-20251101,
claude-haiku-4-5 → claude-haiku-4-0-20250514, claude-3-7-sonnet →
claude-3-5-sonnet-20241022, and claude-opus-4 → claude-3-opus-20240229 so the
constants match the documented and tested model identifiers.

Comment on lines 86 to 102
```ts
import { openaiText } from '@tanstack/ai-openai'

const textAdapter = openaiText()

```

2. **Class constructor** (for more control):

```ts
import { createOpenAIText } from '@tanstack/ai-openai/adapters'

const textAdapter = createOpenAIText({
  apiKey: 'your-api-key',
  // additional configuration...
})
```

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

cat packages/typescript/ai-openai/package.json | jq '.exports'

Repository: TanStack/ai

Length of output: 145


🏁 Script executed:

#!/bin/bash
# Check if adapters directory exists in ai-openai package
ls -la packages/typescript/ai-openai/src/ | head -20

# Also check what's exported from the main index.ts
echo "=== Main index.ts exports ==="
head -50 packages/typescript/ai-openai/src/index.ts

Repository: TanStack/ai

Length of output: 2228


🏁 Script executed:

#!/bin/bash
# Search for actual imports of adapters in the codebase
echo "=== Imports from @tanstack/ai-openai ==="
rg "from ['\"]@tanstack/ai-openai['\"]" --type ts --type tsx -A 2 | head -30

echo -e "\n=== Imports from @tanstack/ai-openai/adapters ==="
rg "from ['\"]@tanstack/ai-openai/adapters['\"]" --type ts --type tsx -A 2 | head -30

# Check adapters directory index.ts if it exists
echo -e "\n=== Checking adapters/index.ts ==="
if [ -f "packages/typescript/ai-openai/src/adapters/index.ts" ]; then
  cat packages/typescript/ai-openai/src/adapters/index.ts
else
  echo "No adapters/index.ts found"
fi

Repository: TanStack/ai

Length of output: 282


🏁 Script executed:

#!/bin/bash
# List contents of adapters directory
ls -la packages/typescript/ai-openai/src/adapters/

# Check one of the adapter files to see what it exports
echo -e "\n=== Contents of text adapter ==="
head -30 packages/typescript/ai-openai/src/adapters/text.ts

Repository: TanStack/ai

Length of output: 1480


Incorrect import path: use `@tanstack/ai-openai` instead of the `/adapters` subpath.

Line 96 imports from `'@tanstack/ai-openai/adapters'`, but that path is not exported in the package configuration. The `package.json` only exports the root (`.`) entry point, and there is no `adapters/index.ts` to back the subpath import.

The import should come from `'@tanstack/ai-openai'`, which re-exports all adapter functions (as shown in lines 19-21 and 165). Update line 96 to:

`import { createOpenAIText } from '@tanstack/ai-openai'`

This keeps the document consistent and uses the API that is actually exported.
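
For reference, a sketch of how the class-constructor example reads once the import path is corrected (same configuration as the original snippet):

```ts
import { createOpenAIText } from '@tanstack/ai-openai'

const textAdapter = createOpenAIText({
  apiKey: 'your-api-key',
  // additional configuration...
})
```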

πŸ€– Prompt for AI Agents
In docs/guides/tree-shakeable-adapters.md around lines 86 to 102, the example
imports from the non-exported subpath '@tanstack/ai-openai/adapters' (line ~96);
change that import to use the package root: import { createOpenAIText } from
'@tanstack/ai-openai' so it matches the package.json exports and other examples
in the doc.

Comment on lines 1 to 22
// ============================================================================
// New Tree-Shakeable Adapters (Recommended)
// ============================================================================

// Text (Chat) adapter - for chat/text completion
export {
  AnthropicTextAdapter,
  anthropicText,
  createAnthropicText,
  type AnthropicTextConfig,
  type AnthropicTextProviderOptions,
} from './adapters/text'

// Summarize adapter - for text summarization
export {
  AnthropicSummarizeAdapter,
  anthropicSummarize,
  createAnthropicSummarize,
  type AnthropicSummarizeConfig,
  type AnthropicSummarizeProviderOptions,
} from './adapters/summarize'


⚠️ Potential issue | πŸ”΄ Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
set -euo pipefail

# Find the ai-anthropic package.json and print its exports (if any).
fd -a '^package\.json$' packages/typescript/ai-anthropic | while read -r f; do
  echo "==> $f"
  python - <<'PY' "$f"
import json,sys
p=sys.argv[1]
d=json.load(open(p))
print(json.dumps(d.get("exports", {}), indent=2))
PY
done

Repository: TanStack/ai

Length of output: 141


🏁 Script executed:

cat packages/typescript/ai-anthropic/package.json

Repository: TanStack/ai

Length of output: 1232


🏁 Script executed:

fd -a 'package\.json$' packages/typescript | head -10 | while read -r f; do
  echo "==> $f"
  cat "$f" | python3 -c "import json, sys; d=json.load(sys.stdin); print('exports:', json.dumps(d.get('exports', {}), indent=2)[:500])"
done

Repository: TanStack/ai

Length of output: 1941


Add subpath exports to package.json to enable tree-shaking of adapters.

The `package.json` currently exports only the root `"."` entry point. To support the tree-shakeable imports shown in the code (e.g., from `'./adapters/text'`), add subpath exports like:

"exports": {
  ".": {
    "types": "./dist/esm/index.d.ts",
    "import": "./dist/esm/index.js"
  },
  "./adapters": {
    "types": "./dist/esm/adapters/index.d.ts",
    "import": "./dist/esm/adapters/index.js"
  }
}

This allows consumers to use `import { anthropicText } from '@tanstack/ai-anthropic/adapters'` and properly tree-shake unused adapters.
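
A rough sketch of how the two entry points would then look for consumers, assuming the adapters barrel re-exports the same factories as the root index and that the factories take an optional config like their OpenAI counterparts:

```ts
// The subpath keeps the entry point small; ESM tree-shaking drops whatever
// adapters a given bundle never imports.
import { anthropicText } from '@tanstack/ai-anthropic/adapters'
import { anthropicSummarize } from '@tanstack/ai-anthropic'

const textAdapter = anthropicText()
const summarizeAdapter = anthropicSummarize()
```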

πŸ€– Prompt for AI Agents
In packages/typescript/ai-anthropic/src/index.ts around lines 1–22 the module
re-exports adapter modules directly which requires subpath exports so consumers
can import adapters for tree-shaking; update this package's package.json
"exports" to include a subpath entry for "./adapters" (and keep the root "."
export) that maps "types" to the adapters types file in dist/esm (e.g.,
./dist/esm/adapters/index.d.ts) and "import" to the adapters JS entry (e.g.,
./dist/esm/adapters/index.js), ensuring consumers can import from
"@tanstack/ai-anthropic/adapters" and tree-shake unused adapters.

Comment on lines 654 to 668
case 'image': {
  const imageMetadata = part.metadata
  if (part.source.type === 'url') {
    return {
      type: 'input_image',
      image_url: part.source.value,
      detail: imageMetadata?.detail || 'auto',
    }
  }
  // For base64 data, construct a data URI
  return {
    type: 'input_image',
    image_url: part.source.value,
    detail: imageMetadata?.detail || 'auto',
  }

⚠️ Potential issue | πŸ”΄ Critical

🧩 Analysis chain

🏁 Script executed:

rg -nA10 "base64" --type=ts packages/typescript/ai-openai/

Repository: TanStack/ai

Length of output: 13450


🏁 Script executed:

rg -n "source.*value|image.*url" --type=ts packages/typescript/ai-openai/src/adapters/text.ts | head -30

Repository: TanStack/ai

Length of output: 233


🏁 Script executed:

# Check how image sources are created and formatted
rg -nB5A5 "type.*base64|source.*.type.*base64" --type=ts packages/typescript/ai-openai/

Repository: TanStack/ai

Length of output: 136


🏁 Script executed:

rg -nA10 "source.*type.*base64|type.*base64.*source" --type=ts packages/typescript/ai-openai/

Repository: TanStack/ai

Length of output: 37


🏁 Script executed:

# Look at how image parts are structured when created
rg -nB5 "image.*base64" --type=ts packages/typescript/

Repository: TanStack/ai

Length of output: 2691


🏁 Script executed:

# Check if there's any test data or documentation about image format
find packages/typescript/ai-openai/tests -name "*.test.ts" -exec grep -l "image" {} \; | head -5

Repository: TanStack/ai

Length of output: 167


🏁 Script executed:

# Look at the part structure and how images are passed in
cat packages/typescript/ai-openai/tests/text.test.ts | head -100

Repository: TanStack/ai

Length of output: 135


🏁 Script executed:

find packages/typescript/ai-openai/tests -name "*.test.ts" | xargs grep -l "base64\|image" | head -5

Repository: TanStack/ai

Length of output: 167


🏁 Script executed:

# Look at the part/content structure definition
rg -n "type.*base64|part.*source" --type=ts packages/typescript/ai/src/types.ts -A3 -B3 | head -60

Repository: TanStack/ai

Length of output: 37


🏁 Script executed:

# Check for any image content handling tests
find packages/typescript -name "*.test.ts" -exec grep -l "source.*base64" {} \;

Repository: TanStack/ai

Length of output: 37


🏁 Script executed:

# Find the content/part type definitions
rg -n "type.*Part|interface.*Part|ContentPart" --type=ts packages/typescript/ai/src/types.ts | head -20

Repository: TanStack/ai

Length of output: 966


🏁 Script executed:

# Look at the complete type definitions for content
cat packages/typescript/ai/src/types.ts | grep -A 20 "interface.*Content\|type.*Content" | head -60

Repository: TanStack/ai

Length of output: 1645


🏁 Script executed:

# Check the model-meta tests which might show usage
cat packages/typescript/ai-openai/tests/model-meta.test.ts | head -150

Repository: TanStack/ai

Length of output: 6016


🏁 Script executed:

# Check if ImagePart metadata or structure includes mime type information
rg -n "ImagePart|image.*metadata|mime" --type=ts packages/typescript/ai/src/types.ts -A5 | head -40

Repository: TanStack/ai

Length of output: 1030


🏁 Script executed:

# Look at the actual code to see if mime type is available anywhere
cat packages/typescript/ai-openai/src/adapters/text.ts | sed -n '640,690p'

Repository: TanStack/ai

Length of output: 1341


🏁 Script executed:

# Check OpenAI documentation or any comments about image_url format requirements
rg -n "image_url|data:image" --type=ts packages/typescript/ai-openai/ -B2 -A2 | head -50

Repository: TanStack/ai

Length of output: 3124


Fix base64 image handling to construct a proper data URI.

The comment at line 663 correctly says "For base64 data, construct a data URI," but the code passes `part.source.value` straight through to `image_url`. When `source.type` is `'data'`, the value is raw base64 (not a data URI), which OpenAI's API will reject. Construct a proper data URI by prefixing the base64 value, e.g. `data:image/jpeg;base64,${part.source.value}` (or `image/png` as appropriate).
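
A minimal sketch of the corrected branch, assuming the image part's metadata may carry a MIME type (the `mimeType` field name is hypothetical; adjust to the actual `ImagePart` metadata shape):

```ts
// Assumed field: imageMetadata?.mimeType - fall back to JPEG when the part
// does not say how the data is encoded.
const mimeType = imageMetadata?.mimeType ?? 'image/jpeg'
return {
  type: 'input_image',
  image_url: `data:${mimeType};base64,${part.source.value}`,
  detail: imageMetadata?.detail || 'auto',
}
```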

πŸ€– Prompt for AI Agents
In packages/typescript/ai-openai/src/adapters/text.ts around lines 654 to 668,
the image branch returns raw base64 for parts with source.type === 'data'
instead of a data URI; update the logic so when part.source.type === 'url' you
keep using the URL, but when part.source.type === 'data' you build a proper data
URI (use part.metadata?.mime if available, otherwise default to image/jpeg or
image/png) and prefix the base64 with "data:{mime};base64," before assigning to
image_url; preserve the detail fallback to imageMetadata?.detail || 'auto'.

Comment on lines 199 to 252
// Last resort: The /content endpoint returns raw binary video data, not JSON.
// We need to construct a URL that the client can use to fetch the video.
// The URL needs to include auth, so we'll create a signed URL or return
// a proxy endpoint.

// For now, return a URL that goes through our API to proxy the request
// since the raw endpoint requires auth headers that browsers can't send.
// The video element can't add Authorization headers, so we need a workaround.

// Option 1: Return the direct URL (only works if OpenAI supports query param auth)
// Option 2: Return a blob URL after fetching (memory intensive)
// Option 3: Return a proxy URL through our server

// Let's try fetching and returning a data URL for now
const baseUrl = this.config.baseUrl || 'https://api.openai.com/v1'
const apiKey = this.config.apiKey

const contentResponse = await fetch(
  `${baseUrl}/videos/${jobId}/content`,
  {
    method: 'GET',
    headers: {
      Authorization: `Bearer ${apiKey}`,
    },
  },
)

if (!contentResponse.ok) {
  // Try to parse error as JSON, but it might be binary
  const contentType = contentResponse.headers.get('content-type')
  if (contentType?.includes('application/json')) {
    const errorData = await contentResponse.json().catch(() => ({}))
    throw new Error(
      errorData.error?.message ||
        `Failed to get video content: ${contentResponse.status}`,
    )
  }
  throw new Error(
    `Failed to get video content: ${contentResponse.status}`,
  )
}

// The response is the raw video file - convert to base64 data URL
const videoBlob = await contentResponse.blob()
const buffer = await videoBlob.arrayBuffer()
const base64 = Buffer.from(buffer).toString('base64')
const mimeType =
  contentResponse.headers.get('content-type') || 'video/mp4'

return {
  jobId,
  url: `data:${mimeType};base64,${base64}`,
  expiresAt: undefined, // Data URLs don't expire
}

⚠️ Potential issue | πŸ”΄ Critical

🧩 Analysis chain

🏁 Script executed:

cat -n packages/typescript/ai-openai/src/adapters/video.ts | head -50

Repository: TanStack/ai

Length of output: 1918


🏁 Script executed:

wc -l packages/typescript/ai-openai/src/adapters/video.ts

Repository: TanStack/ai

Length of output: 110


🏁 Script executed:

cat -n packages/typescript/ai-openai/src/adapters/video.ts | tail -100

Repository: TanStack/ai

Length of output: 3190


🏁 Script executed:

rg -A 5 -B 5 "\.server\(\)|\.client\(\)" packages/typescript/ai-openai/src/adapters/

Repository: TanStack/ai

Length of output: 37


🏁 Script executed:

rg "getVideoUrl" packages/typescript/ai-openai/src/ -A 3 -B 3

Repository: TanStack/ai

Length of output: 1065


🏁 Script executed:

sed -n '199,252p' packages/typescript/ai-openai/src/adapters/video.ts

Repository: TanStack/ai

Length of output: 2327


🏁 Script executed:

rg "class BaseVideoAdapter" packages/typescript/ -A 20

Repository: TanStack/ai

Length of output: 1867


🏁 Script executed:

fd "adapters" packages/typescript/ -type d

Repository: TanStack/ai

Length of output: 225


🏁 Script executed:

ls -la packages/typescript/ai/src/adapters/ | head -30

Repository: TanStack/ai

Length of output: 138


🏁 Script executed:

rg "getVideoUrl" packages/typescript/ -B 5 -A 5

Repository: TanStack/ai

Length of output: 4293


🏁 Script executed:

rg "\.server\(\)|\.client\(\)" packages/typescript/ -B 2 -A 2 | head -80

Repository: TanStack/ai

Length of output: 2795


🏁 Script executed:

find packages/typescript -name "*.ts" -path "*/adapters/*" -type f | head -20

Repository: TanStack/ai

Length of output: 1158


🏁 Script executed:

rg "typeof window|typeof document|'use server'|'use client'" packages/typescript/ai-openai/src/adapters/ -A 2 -B 2

Repository: TanStack/ai

Length of output: 37


🏁 Script executed:

cat packages/typescript/ai/src/activities/video/index.ts | head -100

Repository: TanStack/ai

Length of output: 2996


🏁 Script executed:

rg "apiKey" packages/typescript/ai-openai/src/adapters/ -B 2 -A 2

Repository: TanStack/ai

Length of output: 16304


🏁 Script executed:

cat packages/typescript/ai-openai/src/adapters/text.ts | grep -A 50 "class OpenAITextAdapter"

Repository: TanStack/ai

Length of output: 1754


🏁 Script executed:

grep -r "typeof window" packages/typescript/ai-openai/src/ || echo "No typeof window checks found"

Repository: TanStack/ai

Length of output: 84


`getVideoUrl()` fallback exposes the API key and creates problematic data URLs.

This fallback path uses `this.config.apiKey` directly in a `fetch` call and builds base64 data URLs for potentially large video files. Risks:

- The API key can be exposed if the adapter is instantiated in a client-side environment
- Base64 data URLs for large videos consume enormous memory and can crash browser tabs
- `Buffer.from()` isn't available in browsers without polyfills

Add an environment guard to prevent client-side execution:

+ if (typeof window !== 'undefined') {
+   throw new Error('Fetching video content requires a server environment to protect API keys.')
+ }

Avoid converting to data URLs by default. Consider returning a server-side retrievable URL or instructing users to implement proper proxying instead.
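
The proxying suggestion can be made concrete with a small route on the consuming app's server; a rough sketch using the standard Fetch API (the route shape, query parameter, and env var name are assumptions, not part of the SDK):

```ts
// Hypothetical app-side proxy: streams the video through the server so the
// OpenAI key stays server-side and nothing is buffered as a base64 data URL.
export async function GET(request: Request): Promise<Response> {
  const jobId = new URL(request.url).searchParams.get('jobId')
  const upstream = await fetch(
    `https://api.openai.com/v1/videos/${jobId}/content`,
    { headers: { Authorization: `Bearer ${process.env.OPENAI_API_KEY}` } },
  )
  return new Response(upstream.body, {
    status: upstream.status,
    headers: {
      'content-type': upstream.headers.get('content-type') ?? 'video/mp4',
    },
  })
}
```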

πŸ€– Prompt for AI Agents
In packages/typescript/ai-openai/src/adapters/video.ts around lines 199 to 252,
the fallback in getVideoUrl() fetches the video using this.config.apiKey and
builds a base64 data: URL (using Buffer), which can leak the API key in client
environments, crash browsers with large videos and relies on Node Buffer; change
this by adding a runtime environment guard that throws or rejects if executed in
a browser (e.g., detect window or navigator and refuse to use API key
client-side), stop returning base64 data URLs for large media and instead return
a server-side retrievable URL (either require a configured proxy/signed URL
endpoint on your server or return an instruction/error that caller must request
the proxied URL from the server), ensure any server-only code uses Node-safe
APIs (avoid Buffer in code paths that could run in the browser) and validate
that this.config.apiKey is only used when running server-side.

@AlemTuzlak AlemTuzlak changed the base branch from main to feat/splitting-adapters-up December 14, 2025 08:17