Note: The full API interface is documented here: CBorg API Documentation
1. How should I securely store the API key?
API keys must be stored securely. Do not put raw API keys into your source code! Use environment variables.
Read this: Best Practices for API Key Security
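As a minimal sketch, a Python program can read the key from the environment at startup and fail fast if it is missing (the variable name CBORG_API_KEY matches the examples later in this FAQ; the helper name is illustrative):

```python
import os

def load_api_key(var="CBORG_API_KEY"):
    # Read the key from the environment; fail fast with a clear error
    # instead of silently sending an empty key to the server.
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(f"environment variable {var} is not set")
    return key
```

Set the variable in your shell profile (e.g. export CBORG_API_KEY=...) rather than in files checked into your repository.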
2. How do I handle slow requests?
When sending a large document for processing, you may encounter a client-side request timeout error while waiting for the complete response.
You can increase the timeout by adjusting the timeout parameter (named request_timeout in openai-python releases before 1.0), e.g.:
response = client.chat.completions.create(
    model=MODEL,
    messages=[{"role": "user", "content": prompt}],
    timeout=600,
)
Alternatively, you can use streaming mode, which sends back partial chunks of data as they are processed:
response = client.chat.completions.create(
    model=MODEL,
    messages=[
        {
            "role": "user",
            "content": PROMPT
        }
    ],
    stream=True
)
answer = ''
for chunk in response:
    if chunk.choices[0].delta.content is not None:
        answer = answer + chunk.choices[0].delta.content
print(answer)
3. I received message: “Invalid API Key Error” - How do I confirm my key is valid?
When using your CBorg key with OpenAI-API-Compatible applications and/or libraries, you must override the Base URL to the CBorg API server at https://api.cborg.lbl.gov - otherwise these apps will reach out to OpenAI’s servers which will reject your key!
Note: If your client is on LBL-Net you can use https://api-local.cborg.lbl.gov which provides a direct network path without relaying your request through Cloudflare!
Assuming the API key is available in your user environment as $CBORG_API_KEY, you can check that the current key is valid using /key/info, e.g.:
curl --location "https://api.cborg.lbl.gov/key/info" --header "Authorization: Bearer $CBORG_API_KEY"
4. How do I check my key budget and spend?
Assuming the API key is available in your user environment as $CBORG_API_KEY, you can check the current spend with /key/info, e.g.:
curl --location 'https://api.cborg.lbl.gov/key/info' --header "Authorization: Bearer $CBORG_API_KEY"
You can also calculate the estimated cost of an API call, before making the call, using /spend/calculate.
LBL-hosted models (any model whose name starts with the lbl/ prefix) are hosted in our on-prem datacenter, so the calculated cost for these models is always zero.
Note that commercial models return the number of input and output tokens consumed by a request but do not return the actual cost; it is up to the user to apply the appropriate cost-conversion logic.
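A standard-library sketch of calling /spend/calculate from Python, assuming the endpoint accepts a JSON POST body with model and messages fields (check the API documentation for the exact request schema; the helper name is illustrative):

```python
import json
import urllib.request

def spend_calculate_request(api_key, model, messages,
                            base_url="https://api.cborg.lbl.gov"):
    # Build a POST /spend/calculate request; the JSON body shape
    # (model + messages) is an assumption, so verify it against the API docs.
    body = json.dumps({"model": model, "messages": messages}).encode("utf-8")
    return urllib.request.Request(
        base_url + "/spend/calculate",
        data=body,
        headers={
            "Authorization": "Bearer " + api_key,
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Sending the request requires a valid key and network access, e.g.:
# req = spend_calculate_request(key, model_name,
#                               [{"role": "user", "content": "hello"}])
# print(json.load(urllib.request.urlopen(req)))
```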
5. How do I send images to a vision model?
Encode the image as a base64 data URL and include it in the message content, e.g.:
import base64
import os

import openai

client = openai.OpenAI(
    api_key=os.environ.get('CBORG_API_KEY'),
    base_url="https://api.cborg.lbl.gov"
)

def encode_file(file):
    # Read the file and return its contents as a base64-encoded string
    with open(file, "rb") as fp:
        return base64.b64encode(fp.read()).decode("utf-8")

response = client.chat.completions.create(
    model="lbl/cborg-vision",  # can set this to any model that supports vision
    messages=[
        {
            "role": "user",
            "content": [
                { "type": "text", "text": "Describe the picture." },
                { "type": "image_url", "image_url": { "url": "data:image/png;base64," + encode_file('image.png') } }
            ]
        }
    ],
    temperature=0.0,
    stream=False
)
print(response)