Fireworks AI Integration
Connect Helicone with Fireworks AI, a fast, affordable, and customizable platform for generative AI that helps developers run, fine-tune, and share large language models.
You can follow their documentation here: https://docs.fireworks.ai/getting-started/quickstart
Gateway Integration
Create a Fireworks AI account
Log into www.fireworks.ai or create an account. Once you have an account, you can generate an API key.
Set HELICONE_API_KEY and FIREWORKS_API_KEY as environment variables
HELICONE_API_KEY=<your API key>
FIREWORKS_API_KEY=<your API key>
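If you are working in Node.js/TypeScript, a quick sanity check that both keys are visible to your process might look like this (purely illustrative; adjust to however you load configuration):

// Illustrative check that both keys are present before making requests.
const heliconeApiKey = process.env.HELICONE_API_KEY;
const fireworksApiKey = process.env.FIREWORKS_API_KEY;

if (!heliconeApiKey || !fireworksApiKey) {
  throw new Error("Set HELICONE_API_KEY and FIREWORKS_API_KEY before running.");
}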
Modify the base URL and add Auth headers
Replace the following Fireworks AI URL with the Helicone Gateway URL:
https://api.fireworks.ai/inference/v1/chat/completions
-> https://fireworks.helicone.ai/inference/v1/chat/completions
Then add the following authentication headers:
Helicone-Auth: `Bearer ${HELICONE_API_KEY}`
Authorization: `Bearer ${FIREWORKS_API_KEY}`
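For example, a minimal TypeScript fetch call applying the swapped base URL and both headers could look like the sketch below (the model and message are placeholders; it assumes a Node.js 18+ runtime with both environment variables set):

// Minimal sketch: a chat completion routed through the Helicone gateway.
const response = await fetch(
  "https://fireworks.helicone.ai/inference/v1/chat/completions",
  {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Helicone-Auth": `Bearer ${process.env.HELICONE_API_KEY}`,
      Authorization: `Bearer ${process.env.FIREWORKS_API_KEY}`,
    },
    body: JSON.stringify({
      model: "accounts/fireworks/models/llama-v3-8b-instruct",
      messages: [{ role: "user", content: "Say this is a test" }],
    }),
  }
);

const data = await response.json();
console.log(data.choices[0].message.content);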
Now you can access all the models on Fireworks AI with a simple fetch or cURL call:
Example
curl \
  --header 'Authorization: Bearer <FIREWORKS_API_KEY>' \
  --header 'Helicone-Auth: Bearer <HELICONE_API_KEY>' \
  --header 'Content-Type: application/json' \
  --data '{
    "model": "accounts/fireworks/models/llama-v3-8b-instruct",
    "prompt": "Say this is a test"
  }' \
  --url https://fireworks.helicone.ai/inference/v1/completions
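If you prefer an SDK, a sketch using the OpenAI Node client pointed at the gateway might look like the following. It assumes the OpenAI-compatible paths exposed by Fireworks AI and the baseURL/defaultHeaders options of the openai package; verify the model ID against your Fireworks account.

import OpenAI from "openai";

// Sketch: reuse the OpenAI Node SDK against the Helicone gateway.
const client = new OpenAI({
  apiKey: process.env.FIREWORKS_API_KEY,
  baseURL: "https://fireworks.helicone.ai/inference/v1",
  defaultHeaders: {
    "Helicone-Auth": `Bearer ${process.env.HELICONE_API_KEY}`,
  },
});

const completion = await client.chat.completions.create({
  model: "accounts/fireworks/models/llama-v3-8b-instruct",
  messages: [{ role: "user", content: "Say this is a test" }],
});

console.log(completion.choices[0].message.content);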
For more information on how to use headers, see the Helicone Headers docs. For more information on how to use Fireworks AI, see the Fireworks AI docs.