Latest articles

How to make requests to OpenAI using the Claude (Anthropic) SDK

27 March 2026
Braintrust Team

TL;DR: The Braintrust AI Gateway lets you call OpenAI models using Anthropic's SDK. Point your Anthropic client at https://gateway.braintrust.dev without a /v1 suffix (the Anthropic SDK appends its own path prefix), authenticate with your Braintrust API key, and pass an OpenAI model name such as gpt-4o in the model parameter. Braintrust converts the Anthropic request format into OpenAI's chat completions format and returns the response in Anthropic's standard structure, so your existing client logic continues to work with OpenAI through the same Anthropic SDK.

Keep your Anthropic SDK setup and access OpenAI through Braintrust

Teams already using Anthropic may still want access to OpenAI models in the same application to compare Claude and GPT on the same prompts, use GPT in workflows where OpenAI performs better, or keep OpenAI available as a fallback provider.

Adding the OpenAI SDK creates extra integration work. Request handling in the application is already built around Anthropic's client, Anthropic's response structure, and fields such as content[0].text. Introducing a second SDK means maintaining another client, handling a different response format, and testing both integration paths whenever application logic changes. For an Anthropic-based application that simply needs access to OpenAI, that second SDK and its separate response-handling logic are often unnecessary overhead.

Braintrust AI Gateway lets engineers use OpenAI through the same messages.create() workflow already used for Claude. Instead of adding OpenAI's SDK and supporting a second request and response pattern in the application, developers can keep the Anthropic SDK in place and call GPT through the same interface. Braintrust handles the format conversion at the gateway, stores provider credentials in Braintrust settings, and keeps the application on one SDK.

How to make requests to OpenAI using the Anthropic SDK

Braintrust AI Gateway receives each request in Anthropic's format, identifies the target provider from the model name, translates the payload into OpenAI's expected structure, and returns the result in the same Anthropic response format the client already uses. The steps below cover the setup needed to send your first OpenAI request through the Anthropic SDK.
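Conceptually, the gateway's translation step resembles the sketch below. This is an illustrative approximation, not Braintrust's actual implementation: the function names and the stubbed response are made up, and it only shows how Anthropic's messages payload maps onto OpenAI's chat completions shape and back into Anthropic's content-block structure.

```python
def anthropic_to_openai(request: dict) -> dict:
    """Map an Anthropic messages.create() payload onto OpenAI's
    chat completions request shape (illustrative only)."""
    return {
        "model": request["model"],
        "max_tokens": request["max_tokens"],
        "messages": [
            {"role": m["role"], "content": m["content"]}
            for m in request["messages"]
        ],
    }


def openai_to_anthropic(response: dict) -> dict:
    """Wrap an OpenAI chat completion back into Anthropic's
    content-block response structure (illustrative only)."""
    choice = response["choices"][0]
    finish = choice["finish_reason"]
    return {
        "role": "assistant",
        "content": [{"type": "text", "text": choice["message"]["content"]}],
        "stop_reason": "end_turn" if finish == "stop" else finish,
    }


# Round trip with a stubbed OpenAI-style response
req = anthropic_to_openai({
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello!"}],
    "max_tokens": 50,
})
resp = openai_to_anthropic({
    "choices": [{"message": {"content": "Hi there!"}, "finish_reason": "stop"}]
})
print(resp["content"][0]["text"])  # the client reads it as an Anthropic content block
```

The point of the sketch is that the client only ever sees the Anthropic-shaped dictionary on the right-hand side, which is why existing parsing code does not change.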

Prerequisites

1. Create a Braintrust account and generate an API key

Sign up at Braintrust, then go to Settings > Organization > API keys and click + API key. Enter a name for the key, click Create, and copy it immediately because Braintrust will not show it again. The key, prefixed with sk-, is used by your application to authenticate requests sent through the Braintrust AI Gateway.

For production use cases, such as CI/CD pipelines or backend services, create a service token under Settings > Organization > Service tokens. Assign the token to the appropriate permission groups, then click Create. Service tokens use the bt-st- prefix and work anywhere API keys are accepted.

2. Store your OpenAI API key in Braintrust

Go to Settings > Organization > AI providers and select OpenAI. Paste your OpenAI API key and click Save. Braintrust uses this stored key when sending requests to OpenAI, so your OpenAI credential does not need to be embedded in application code. Braintrust encrypts provider API keys using AES-256 with unique keys and nonces.

Teams that bill OpenAI usage to different accounts by project can also set project-level provider keys under Project Settings > AI Providers. When a project-level key is present, Braintrust uses it instead of the organization-level default.
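The precedence rule can be pictured with a small sketch. Everything here is illustrative: resolve_provider_key and the key values are invented for the example and are not part of any Braintrust API.

```python
def resolve_provider_key(provider: str, project_keys: dict, org_keys: dict):
    """Return the stored key for a provider: a project-level key,
    when present, overrides the organization-level default."""
    return project_keys.get(provider, org_keys.get(provider))


org_keys = {"openai": "org-default-key"}
project_keys = {"openai": "project-specific-key"}

print(resolve_provider_key("openai", project_keys, org_keys))  # project key wins
print(resolve_provider_key("openai", {}, org_keys))            # falls back to org default
```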

3. Install the Anthropic SDK

Run npm install @anthropic-ai/sdk for TypeScript or JavaScript, or pip install anthropic for Python.

Calling OpenAI with the Anthropic SDK through Braintrust

When calling OpenAI with the Anthropic SDK, the main configuration change is the base URL: use https://gateway.braintrust.dev without a /v1 suffix, because the Anthropic SDK appends its own path automatically.

With the base URL configured, the request still follows Anthropic's usual messages.create() pattern, including fields such as messages and max_tokens. When the model field contains an OpenAI model name, Braintrust routes the request to OpenAI and returns the result in Anthropic's response format, so existing parsing logic such as response.content[0].text continues to work without changes.

typescript

import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic({
  baseURL: "https://gateway.braintrust.dev",
  apiKey: process.env.BRAINTRUST_API_KEY,
});

async function main() {
  // Call OpenAI's GPT using the Anthropic SDK
  const response = await client.messages.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: "Hello!" }],
    max_tokens: 50,
  });

  console.log(
    response.content[0].type === "text" ? response.content[0].text : "",
  );
}

main();

Here is the equivalent in Python.

python
import os

from anthropic import Anthropic

client = Anthropic(
    base_url="https://gateway.braintrust.dev",
    api_key=os.environ["BRAINTRUST_API_KEY"],
)

# Call OpenAI's GPT using the Anthropic SDK
response = client.messages.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=50,
)

print(response.content[0].text)

Enabling logging and caching

Logging sends each request trace to a Braintrust project, where you can review OpenAI calls alongside the rest of your model traffic. To attach the request to a Braintrust trace, add the x-bt-parent header using Braintrust's logger.

typescript

import Anthropic from "@anthropic-ai/sdk";
import { initLogger } from "braintrust";

const logger = initLogger({ projectName: "My Project" });

async function main() {
  await logger.traced(async (span) => {
    const client = new Anthropic({
      baseURL: "https://gateway.braintrust.dev",
      apiKey: process.env.BRAINTRUST_API_KEY,
    });

    // Call OpenAI's GPT using the Anthropic SDK
    const response = await client.messages.create(
      {
        model: "gpt-4o",
        messages: [{ role: "user", content: "Hello!" }],
        max_tokens: 50,
      },
      {
        headers: {
          "x-bt-parent": await span.export(),
        },
      },
    );

    console.log(
      response.content[0].type === "text" ? response.content[0].text : "",
    );
  });
}

main();

Each trace records token usage, latency, cost, and the full request and response, all of which appear in your Braintrust project's logs page. Because Claude and GPT traces live in the same project, you can filter by model name and compare them directly without adding extra tooling.

Caching is useful during prompt iteration because it prevents sending the same request to the provider multiple times. To cache responses, set "x-bt-use-cache": "always" in your client's defaultHeaders. Braintrust stores cached responses using AES-GCM encryption with a key derived from your API key and ties each cache entry to the API key that created it. Cache entries remain available for one week by default, and the x-bt-cache-ttl header lets you set a different duration in seconds for a specific request.
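A caching configuration can be sketched as follows. Only the header assembly runs here; the header names come from the paragraph above, the one-day TTL is an arbitrary example, and the commented-out client call is illustrative (note that Anthropic's Python SDK spells the option default_headers, while the TypeScript SDK uses defaultHeaders).

```python
# Headers that opt a request into Braintrust's response cache.
one_day_seconds = 24 * 60 * 60

default_headers = {
    "x-bt-use-cache": "always",              # reuse a cached response when one exists
    "x-bt-cache-ttl": str(one_day_seconds),  # override the one-week default TTL
}

# These headers would then be passed when constructing the client, e.g.:
# client = Anthropic(
#     base_url="https://gateway.braintrust.dev",
#     api_key=os.environ["BRAINTRUST_API_KEY"],
#     default_headers=default_headers,
# )
print(default_headers["x-bt-cache-ttl"])
```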

Use Braintrust AI Gateway to access more models through one SDK

Calling OpenAI through the Anthropic SDK is one example of how Braintrust AI Gateway lets teams keep one SDK while accessing models from another provider. The same approach works across any supported SDK and AI provider combination.

Braintrust's AI Gateway accepts requests from the OpenAI SDK, the Anthropic SDK, and the Google Gemini SDK. You can keep the SDK already built into your application, point that client to Braintrust's gateway URL, and switch providers by changing the model name in the existing code.

Braintrust supports direct integrations with major model providers, including OpenAI, Anthropic, Google, Mistral, Groq, Fireworks, Together, xAI, Perplexity, Replicate, Cerebras, Baseten, and Lepton. It also supports cloud platform providers, including AWS Bedrock, Google Vertex AI, Azure OpenAI, and Databricks. If additional models are required, Braintrust's custom provider configuration supports self-hosted models, fine-tuned models, and proprietary AI endpoints.

Provider credentials are stored in Braintrust's organization settings and never appear in application code, so onboarding a new provider requires adding the key in the dashboard and referencing the new model name in your requests.

Use new model providers without building a new SDK integration each time. Start free with Braintrust today.

FAQs

Do I need separate API keys for Anthropic and OpenAI when using the Braintrust AI Gateway?

Your application only needs a Braintrust API key. Anthropic and OpenAI credentials are added separately in Braintrust under AI Providers, keeping provider keys out of application code and giving you one place to manage provider authentication.

Can I compare Claude and OpenAI model outputs using Braintrust?

Yes. Both providers' models are accessible through the same Anthropic client instance. With logging enabled, you can send the same prompts to both Claude and GPT through the same integration pattern and review the results together in Braintrust. The logs page shows differences in output quality, response time, and token cost without requiring a separate evaluation pipeline.

How do I monitor costs for Claude calls vs. OpenAI calls?

Enable logging on your Braintrust project, and each request records its token usage and cost. The Logs dashboard groups this data by model and provider, making it straightforward to see how Claude and OpenAI spending breaks down across your workloads.