The Problem
OpenAI sometimes deprecates models. When that happens, a ton of developers around the world have to scramble to pick a replacement model, update their code, and re-test their integrations. That's a drain on resources around the world. However, OpenAI does try to ensure that API changes are backwards-compatible.
Keeping this in mind, there are two changes I would like to see that could help companies who use these endpoints.
API Update Endpoint
If we had an endpoint like GET /endpoints/{version}/{operation_id} (or /changelog/{version}/{operation_id}), developers could programmatically check for changes instead of manually reading the changelog. (Or maybe I can have Claude read your changelog for me 😜) The endpoint could return something like the example below.
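Just to illustrate the idea (none of these field names exist today; they are only one possible shape for the payload, using the real text-davinci-003 retirement as the example change):
{
  "operation_id": "createCompletion",
  "version": "2023-07-01",                    // hypothetical API version from the path
  "changes": [
    {
      "type": "model_deprecation",            // hypothetical change type
      "affected_model": "text-davinci-003",
      "announced_at": "2023-07-06",
      "effective_at": "2024-01-04",
      "impact_level": "breaking",
      "recommended_replacement": "gpt-3.5-turbo"
    }
  ]
}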
This would let us build automated alerts and monitoring for upcoming changes. The endpoint could also support query parameters for filtering by date ranges, change types, or impact levels.
Smart Model Fallbacks
Currently, we have to specify a model for most endpoints. It would be nice if a request could specify fallback preferences, such as:
{
  "model": "text-davinci-003",
  "fallback_strategy": {
    "priority": ["performance", "cost"],
    "max_cost_increase_percentage": 10,
    "capabilities_required": [...],   // only for models with specific capabilities needed
    "auto_fallback": true
  }
}
If the specified model becomes unavailable, the API would:
Automatically switch to the best alternative model matching our requirements
Include a warning header in the response: X-Model-Fallback: original=text-davinci-003,actual=gpt-3.5-turbo
Send an email notification about the fallback
Still allow opting out via auto_fallback: false or omitting the fallback_strategy key.
This would give developers a fighting chance of keeping their systems running if they don't have time to handle deprecations immediately. Yes, it might affect behavior and costs, but that's better than complete failure - and the constraints in fallback_strategy help control this.
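For clients that would rather inspect the body than parse headers, the same information could be mirrored in the response itself. A hypothetical excerpt (none of these fields exist today; they simply restate the X-Model-Fallback header and the cost constraint from fallback_strategy):
{
  "id": "cmpl-abc123",
  "model": "gpt-3.5-turbo",            // the model that actually served the request
  "model_fallback": {                  // hypothetical block mirroring X-Model-Fallback
    "requested_model": "text-davinci-003",
    "reason": "model_deprecated",
    "cost_increase_percentage": 4      // stayed under the 10% cap from the request
  },
  "choices": [ ... ],
  "usage": { ... }
}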
Security & Compliance Considerations
Fallback models would respect existing API key permissions
Usage would be tracked separately for billing transparency
Audit logs would clearly show model substitutions (example entry sketched below)
Organizations could disable auto-fallback account-wide if needed
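For example, an audit entry for a substituted request might record something like this (a hypothetical format, only meant to show the level of detail that would keep billing and compliance reviews straightforward):
{
  "timestamp": "2024-01-04T12:00:00Z",
  "event": "model_fallback",
  "api_key_id": "key_abc123",          // fallback stays within this key's existing permissions
  "requested_model": "text-davinci-003",
  "served_model": "gpt-3.5-turbo",
  "billed_as": "gpt-3.5-turbo",        // usage tracked separately for billing transparency
  "request_id": "req_abc123"
}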
These changes would help teams handle future deprecations more gracefully while still maintaining proper security and monitoring. What do you think?