Quick Start
This guide will help you quickly integrate Picept’s Smart Routing and Universal API into your applications. You’ll learn how to set up the client, make basic API calls, use smart routing features, and connect to private models.
Installation
Picept is designed to work seamlessly with the OpenAI SDK, making it easy to integrate into existing projects or start new ones. If you’re already using OpenAI in your project, you can simply update your client configuration to use Picept.
Authentication Setup
Before making API calls, you’ll need to set up authentication. Picept uses API keys to secure access to its services. You can find your API key in the Picept dashboard under the API Keys section.
The client initialization follows the familiar OpenAI pattern, but points to Picept’s API endpoint:
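The sketch below shows the general pattern, assuming the OpenAI Python SDK is installed (`pip install openai`); the base URL and the `PICEPT_API_KEY` environment variable are placeholders, so substitute the endpoint and key shown in your Picept dashboard:

```python
import os
from openai import OpenAI

# Point the standard OpenAI client at Picept instead of api.openai.com.
# The base URL below is a placeholder; use the endpoint from your Picept
# dashboard, and keep the API key out of source code.
client = OpenAI(
    api_key=os.environ["PICEPT_API_KEY"],   # key from the API Keys section
    base_url="https://api.picept.ai/v1",    # assumed Picept endpoint
)
```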
Making Your First API Call
Let’s start with a simple API call using a specific model. This example demonstrates the basic request structure and how to access the response. Picept follows OpenAI’s chat completion format, making it easy to transition from other AI providers.
When specifying a model, use the format model-name[provider] to indicate which provider’s model you want to use:
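For example (a sketch reusing the client from above; the model identifier is illustrative of the model-name[provider] format rather than a guaranteed model ID):

```python
# A standard chat completion request; only the model string is Picept-specific.
response = client.chat.completions.create(
    model="gpt-4o[openai]",  # illustrative model-name[provider] identifier
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize what smart routing does in one sentence."},
    ],
)

# The response follows OpenAI's chat completion schema.
print(response.choices[0].message.content)
```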
Using Smart Routing
One of Picept’s most powerful features is its ability to automatically route requests to the optimal model based on your requirements. Instead of choosing a specific model, you can provide a list of models, and Picept will select the best one based on quality, cost, speed, and latency.
Here’s how to use smart routing with multiple models:
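The sketch below assumes the candidate list is passed through the SDK’s `extra_body` parameter and that a sentinel `model` value hands the choice to Picept; both field names are assumptions, so check the API reference for the exact request shape.

```python
# Instead of naming one model, hand Picept a list of candidates and let the
# router pick based on quality, cost, speed, and latency.
# "auto" and the "models" field are assumed names, not confirmed parameters.
response = client.chat.completions.create(
    model="auto",  # assumed sentinel telling Picept to route for you
    messages=[{"role": "user", "content": "Draft a friendly release-note blurb."}],
    extra_body={
        "models": [
            "gpt-4o[openai]",
            "claude-3-5-sonnet[anthropic]",
            "llama-3-70b[groq]",
        ]
    },
)
print(response.choices[0].message.content)
```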
Pre-trained Routers
Picept provides pre-trained routers that have been optimized for different use cases. These routers can outperform single models while being faster and more cost-effective. For example, Router1 has been shown to beat GPT-4’s performance while being 3 times faster and cheaper.
Here’s how to use a pre-trained router:
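In this sketch the router is addressed by name in the `model` field; “Router1” comes from the example above, and the exact identifier (and whether it needs a provider suffix) may differ, so confirm it in your dashboard.

```python
# A pre-trained router is used like any other model identifier.
response = client.chat.completions.create(
    model="Router1",  # router name as shown in your Picept dashboard
    messages=[{"role": "user", "content": "Explain retrieval-augmented generation briefly."}],
)
print(response.choices[0].message.content)
```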
Connecting to Private Models
If you have self-hosted models or want to use private deployments, Picept can route requests to your custom endpoints. This feature requires your model to be accessible via a public URL and to be compatible with one of Picept’s supported providers.
Here’s how to connect to a private model:
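The field names below are illustrative placeholders rather than confirmed parameters; the general shape is a normal chat completion plus metadata describing your deployment.

```python
# Route a request to a self-hosted model. The deployment must be reachable at
# a public URL and speak the API of one of Picept's supported providers.
# The model identifier and extra_body fields here are hypothetical examples.
response = client.chat.completions.create(
    model="my-private-llama[custom]",  # hypothetical identifier for your deployment
    messages=[{"role": "user", "content": "Ping"}],
    extra_body={
        "endpoint_url": "https://models.example.com/v1",  # your public endpoint
        "provider_format": "openai",                      # compatible provider API
    },
)
print(response.choices[0].message.content)
```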
Request Tracking
For monitoring and debugging purposes, you can add custom tracking IDs to your requests. This is especially useful when you need to trace requests across your application or analyze performance patterns.
Add a trace ID in your client configuration:
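One way to do this with the OpenAI SDK is a default header attached to every request from the client; the header name shown is an assumed example, so use the one documented for Picept.

```python
import os
from openai import OpenAI

# Attach a tracking ID to every request made through this client.
# "X-Picept-Trace-Id" is an assumed header name, not a confirmed one.
client = OpenAI(
    api_key=os.environ["PICEPT_API_KEY"],
    base_url="https://api.picept.ai/v1",
    default_headers={"X-Picept-Trace-Id": "checkout-flow-v2"},
)
```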
This ID will appear in your analytics dashboard and can be used to filter logs and metrics.
Usage and Metrics Dashboard
Monitor and optimize your AI operations with Picept’s comprehensive analytics dashboard. From individual prompt performance to organization-wide metrics, get the insights you need to make data-driven decisions.
Cost and Usage Analytics
Track your AI spending and usage patterns across all providers and models:
- Monthly spend breakdown by provider/model
- Token usage analytics
- Request volume trends
- Cost per request metrics
The dashboard makes it easy to:
- Compare costs across different providers
- Identify high-usage patterns
- Optimize spending based on usage patterns
- Set and monitor budget alerts
- Export detailed cost reports
Performance Metrics
Monitor the speed and reliability of your AI operations:
- Average response times by model/provider
- Latency distributions
- Success rates
- Token processing speeds
- Request queue times
Individual Request Monitoring
Dive deep into specific requests to understand performance at a granular level:
- Prompt-level metrics
- Token counts
- Response times
- Model selection decisions
- Cost breakdown
Next Steps
Now that you’ve got the basics set up, you can:
- Explore our pre-trained routers to find the best one for your use case
- Learn about custom router training to optimize for your specific needs
- Set up monitoring and analytics to track your API usage
- Join our community to share experiences and get help
Remember to check out our API reference for detailed information about all available endpoints and parameters.