In today’s fast-paced AI environments, waiting to discover issues isn’t an option. Picept’s real-time monitoring gives you immediate visibility into your AI system’s performance, letting you catch and address issues before they impact your users.

Alert Configuration

Alerts in Picept are configured as part of your evaluation payload. Here’s how to set them up:

"alert": {
    "enabled": True,
    "options": {
        "email": ["team@company.com"],
        "webhook": "https://your-alert-endpoint.com",
        "on_type_status": [{"sentiment analysis": False}]  # Trigger alert when sentiment analysis fails
    }
}

Let’s break down each component:

Alert Options

  • enabled: Master switch to turn alerts on/off
  • options: Configure how and when you want to be notified
    • email: List of email addresses to receive notifications
    • webhook: URL where alert data should be sent
    • on_type_status: Specify which evaluator failures should trigger alerts
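To make the on_type_status semantics concrete, here is an illustrative sketch of the matching logic (not Picept's actual implementation): an alert fires when an evaluator's pass/fail status matches the status listed in a trigger entry.

```python
# Illustrative sketch of on_type_status matching -- not Picept source code.
# `results` maps evaluator name -> bool (True = the evaluation passed).

def should_alert(on_type_status: list, results: dict) -> bool:
    """Return True if any trigger entry matches an evaluator's status."""
    for trigger in on_type_status:
        for evaluator, status in trigger.items():
            if results.get(evaluator) == status:
                return True
    return False

triggers = [{"sentiment analysis": False}]
print(should_alert(triggers, {"sentiment analysis": False, "content safety": True}))  # True
print(should_alert(triggers, {"sentiment analysis": True}))   # False
```

With `{"sentiment analysis": False}`, the alert fires only when the sentiment analysis evaluator fails, regardless of how the other evaluators did.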

Here’s a complete example showing alerts in context:

payload = {
    "evaluation_name": "customer-feedback",
    
    "alert": {
        "enabled": True,
        "options": {
            "email": ["team@company.com"],
            "webhook": "https://your-alert-endpoint.com",
            "on_type_status": [{"sentiment analysis": False}]
        }
    },
    
    "dataset": {
        'file_name': "data-x27",
        "id": "S3:xxx1234",
        "response": 'The restaurant service was disgraceful...'
    },
    
    "evaluators": {
        "content safety": {
            "response": "response",
            "explanation": True,
            "judge_model": "gpt-4o-mini[openai]",
            "criteria": {
                "topic_detection": {
                    "enabled": True,
                    "expected_topics": ['mortgage loans']
                }
            }
        },
        "sentiment analysis": {
            "input_text": "response",
            "explanation": True,
            "judge_model": "gpt-4o[openai]",
            "passing_criteria": "Positive"
        }
    }
}
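Before submitting a payload like the one above, a quick sanity check on the alert block can catch configuration mistakes early. The helper below is a hypothetical sketch (its field names mirror the documented schema, but the validation rules are illustrative, not part of the Picept SDK):

```python
# Hypothetical helper: sanity-check an alert block before sending a payload.
# The validation rules are an illustrative sketch, not part of the Picept SDK.

def validate_alert_config(alert: dict) -> list:
    """Return a list of problems found in an alert config (empty = OK)."""
    problems = []
    if not isinstance(alert.get("enabled"), bool):
        problems.append("'enabled' must be a boolean")
    options = alert.get("options", {})
    if not options.get("email") and not options.get("webhook"):
        problems.append("at least one channel (email or webhook) is needed")
    for entry in options.get("on_type_status", []):
        if not (isinstance(entry, dict)
                and all(isinstance(v, bool) for v in entry.values())):
            problems.append("'on_type_status' entries must map "
                            "evaluator names to booleans")
    return problems

alert = {
    "enabled": True,
    "options": {
        "email": ["team@company.com"],
        "webhook": "https://your-alert-endpoint.com",
        "on_type_status": [{"sentiment analysis": False}],
    },
}
print(validate_alert_config(alert))  # [] when the config is well-formed
```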

When an evaluation fails the specified criteria (here, when the sentiment is judged anything other than Positive), Picept will:

  1. Send an email to the specified addresses
  2. POST the alert data to your webhook endpoint
  3. Include detailed information about why the evaluation failed
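On the receiving side, a webhook endpoint simply needs to accept the POSTed JSON. The payload shape below is illustrative only — inspect a real delivery before relying on specific fields — but a minimal handler might look like this:

```python
import json

# Hypothetical alert delivery -- the exact shape your webhook receives may
# differ; inspect a real delivery before relying on specific fields.
incoming = json.dumps({
    "evaluation_name": "customer-feedback",
    "failed_evaluator": "sentiment analysis",
    "status": False,
    "explanation": "Sentiment judged Negative; passing_criteria was 'Positive'.",
})

def handle_alert(raw_body: str) -> str:
    """Parse an alert delivery and decide how to route it (sketch)."""
    alert = json.loads(raw_body)
    evaluator = alert.get("failed_evaluator", "unknown evaluator")
    # Route, page, or log here; returning a message stands in for real handling.
    return f"ALERT: {alert.get('evaluation_name')} failed on {evaluator}"

print(handle_alert(incoming))
```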

Real-Time Monitoring Dashboard

The dashboard serves as your command center for AI quality assurance, providing an at-a-glance view of all your evaluation activities. Whether you’re running one-off evaluations or monitoring continuous production traffic, you have full visibility into your AI system’s performance.

Every evaluation job in Picept is automatically logged and tracked in our intuitive monitoring dashboard. Here’s what you can do:

Job Tracking

  • View all running and completed evaluation jobs
  • Track progress in real-time
  • Filter and search through job history
  • Access detailed evaluation results

Performance Analytics

  • Monitor success/failure rates
  • Track response times and latency
  • Analyze evaluation patterns
  • Identify potential issues early

Alert Management

When an evaluation fails your specified criteria, the dashboard provides:

  • Visual indicators of failed evaluations
  • Detailed failure analysis
  • Alert history and patterns
  • Response time tracking

Job Details

Click into any job to examine:

  • Individual evaluation results
  • Detailed explanations for each evaluator
  • Time stamps and duration metrics
  • Complete evaluation context

Team Collaboration

The dashboard enables teams to:

  • Share evaluation insights
  • Comment on results
  • Track issue resolution
  • Maintain evaluation history

All historical data is retained and searchable, making it easy to analyze trends and patterns over time.