This guide covers detailed webhook configuration for connecting your bot to AI backends like OpenAI, Anthropic, or custom APIs.

Quick Start

The fastest way to create an integration:
1. Get Your API Endpoint

Identify the webhook URL for your AI service:
  • OpenAI: https://api.openai.com/v1/chat/completions
  • Anthropic: https://api.anthropic.com/v1/messages
  • Custom: Your own API endpoint

2. Gather Credentials

Get your API key or authentication token from the service.

3. Create Integration

In Chatbot Platform, create a new integration with the URL and headers.

4. Test

Use the test feature to verify it works.

Webhook Configuration

URL Requirements

Your webhook URL must:
  • Use HTTPS (not HTTP)
  • Be publicly accessible
  • Accept POST requests
  • Return responses within the timeout period

Headers

Common headers for AI services:

OpenAI:
Authorization: Bearer sk-proj-...
Content-Type: application/json

Anthropic:
x-api-key: sk-ant-...
anthropic-version: 2023-06-01
Content-Type: application/json

Custom:
Authorization: Bearer YOUR_TOKEN
X-API-Key: YOUR_KEY
Content-Type: application/json

Timeout

Set timeout based on expected response time:
Timeout        Use Case
10s            Fast models (GPT-3.5)
30s (default)  Standard models (GPT-4)
60s            Complex reasoning or tools
90s+           Long-form generation
Timeout should be less than your channel’s timeout to avoid stuck requests.
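On the backend side, you can enforce your own deadline on the upstream model call so the webhook responds before the platform's timeout fires. A minimal sketch (the 30s value mirrors the default above; `call_with_deadline` is an illustrative helper, not part of the platform):

```python
import concurrent.futures
import time

WEBHOOK_TIMEOUT_S = 30  # keep this below the platform's configured timeout

def call_with_deadline(fn, timeout_s=WEBHOOK_TIMEOUT_S):
    """Run fn() and give up after timeout_s seconds, so a slow model
    call fails fast instead of leaving the webhook request stuck."""
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    try:
        return pool.submit(fn).result(timeout=timeout_s)
    except concurrent.futures.TimeoutError:
        return {"error": "Upstream model timed out", "code": "timeout"}
    finally:
        pool.shutdown(wait=False)
```

Returning a well-formed error body on timeout lets the platform surface a clean failure instead of a hung request.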

Request Format

Chatbot Platform sends this payload to your webhook:
{
  "messages": [
    {
      "role": "user",
      "content": "Hello!"
    }
  ],
  "conversation_id": "conv_abc123",
  "user": {
    "id": "user_123",
    "platform": "telegram",
    "name": "John Doe",
    "username": "johndoe"
  },
  "bot": {
    "id": "bot_456",
    "name": "Support Bot"
  },
  "metadata": {
    "channel": "telegram",
    "chat_id": "123456789",
    "message_id": "789"
  }
}
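Before processing, a backend can sanity-check the incoming payload. A minimal sketch using the field names from the example above (the exact validation rules are up to you):

```python
def validate_payload(payload):
    """Return a list of problems with an incoming webhook payload
    (an empty list means it looks valid)."""
    errors = []
    messages = payload.get("messages")
    if not isinstance(messages, list) or not messages:
        errors.append("messages must be a non-empty list")
    else:
        for i, msg in enumerate(messages):
            if msg.get("role") not in ("system", "user", "assistant"):
                errors.append(f"messages[{i}]: missing or invalid role")
            if "content" not in msg:
                errors.append(f"messages[{i}]: missing content")
    if not payload.get("conversation_id"):
        errors.append("conversation_id is missing")
    return errors
```

Rejecting malformed payloads early with a 400 response makes failures much easier to trace than errors deep inside your model-calling code.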

Message Format

Messages follow OpenAI’s format and support both text and multimodal content.

Text Messages:
{
  "role": "user",
  "content": "The message text"
}
Multimodal Messages (with images/files):
{
  "role": "user",
  "content": [
    {
      "type": "text",
      "text": "What's in this image?"
    },
    {
      "type": "image_url",
      "image_url": {
        "url": "https://storage.chatbotplatform.io/attachments/abc123.jpg"
      }
    }
  ]
}
Assistant Responses:
{
  "role": "assistant",
  "content": "The bot's previous response"
}
The messages array includes conversation history based on your context window setting.
File Support: When users send images or files through supported channels (Telegram, Slack), the attachments are automatically converted to the multimodal content format above. Files are stored securely and are accessible via HTTPS URLs.
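Because content can be either a plain string or the multimodal list above, handlers should normalize both shapes before calling a model. A minimal sketch:

```python
def split_content(message):
    """Return (text, image_urls) for a message whose content is either
    a plain string or a list of typed parts."""
    content = message.get("content", "")
    if isinstance(content, str):
        return content, []
    texts, image_urls = [], []
    for part in content:
        if part.get("type") == "text":
            texts.append(part.get("text", ""))
        elif part.get("type") == "image_url":
            image_urls.append(part["image_url"]["url"])
    return " ".join(texts), image_urls
```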

Response Format

Your webhook must return JSON with a content field:

Simple Response

{
  "content": "Hello! How can I help you today?"
}

Response with Attachments

Send files back to the user by including attachments:
{
  "content": "Here's the document you requested",
  "attachments": [
    {
      "url": "https://your-cdn.com/files/document.pdf",
      "filename": "document.pdf",
      "mime_type": "application/pdf"
    }
  ]
}
The platform downloads the file and sends it through the appropriate channel.
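Building these entries in code is straightforward; a small sketch that guesses the MIME type from the filename (the `attachment` helper is illustrative, not a platform API):

```python
import mimetypes

def attachment(url, filename):
    """Build one attachment entry in the response format above."""
    mime_type, _ = mimetypes.guess_type(filename)
    return {
        "url": url,
        "filename": filename,
        "mime_type": mime_type or "application/octet-stream",
    }
```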

Response with Metadata

{
  "content": "Hello! How can I help you today?",
  "metadata": {
    "model": "gpt-4",
    "tokens_used": 42,
    "processing_time_ms": 1250,
    "custom_field": "any value"
  }
}
Metadata is stored but not sent to the user. Use it for logging and analytics.

Error Response

Return HTTP error status codes for failures:
{
  "error": "Rate limit exceeded",
  "code": "rate_limit"
}
Status codes:
  • 400 - Invalid request
  • 401 - Authentication failed
  • 429 - Rate limit
  • 500 - Server error
  • 503 - Service unavailable
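A backend can keep status codes and the body format consistent with a small helper. The error code strings below are illustrative, not a fixed platform enum:

```python
# Assumed mapping from illustrative error codes to HTTP statuses
ERROR_STATUS = {
    "invalid_request": 400,
    "auth_failed": 401,
    "rate_limit": 429,
    "server_error": 500,
    "service_unavailable": 503,
}

def error_response(code, message):
    """Return (http_status, json_body) matching the error format above."""
    return ERROR_STATUS.get(code, 500), {"error": message, "code": code}
```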

Platform-Specific Setup

OpenAI

OpenAI’s API doesn’t accept our payload format directly. Use a proxy or transform the payload.

Option 1: Proxy Service

Deploy a simple proxy that converts our format to OpenAI’s:
// Example proxy (Node.js/Express)
const express = require('express');
const app = express();
app.use(express.json());

app.post('/proxy/openai', async (req, res) => {
  const { messages } = req.body;

  const openaiResponse = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${process.env.OPENAI_API_KEY}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      model: 'gpt-4',
      messages: messages
    })
  });

  const data = await openaiResponse.json();
  res.json({ content: data.choices[0].message.content });
});
Option 2: Use Make.com or Zapier

Configure a webhook workflow that transforms the payload.

Anthropic (Claude)

As with OpenAI, Anthropic’s API requires a proxy:
app.post('/proxy/anthropic', async (req, res) => {
  const { messages } = req.body;

  // Convert to Anthropic format (separate system message)
  const systemMessage = messages.find(m => m.role === 'system');
  const otherMessages = messages.filter(m => m.role !== 'system');

  const anthropicResponse = await fetch('https://api.anthropic.com/v1/messages', {
    method: 'POST',
    headers: {
      'x-api-key': process.env.ANTHROPIC_API_KEY,
      'anthropic-version': '2023-06-01',
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      model: 'claude-3-5-sonnet-20241022',
      max_tokens: 1024,
      system: systemMessage?.content,
      messages: otherMessages
    })
  });

  const data = await anthropicResponse.json();
  res.json({ content: data.content[0].text });
});

Custom Backend

For your own API, simply accept our payload format and return the expected response:
# Example custom backend (Flask)
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/chat', methods=['POST'])
def chat():
    data = request.json
    messages = data['messages']
    user = data['user']

    # Your logic here
    response_text = generate_response(messages, user)

    return jsonify({
        'content': response_text,
        'metadata': {
            'model': 'custom-v1',
            'user_id': user['id']
        }
    })

Testing Webhooks

Manual Testing

Use curl to test your webhook:
curl -X POST https://your-api.com/webhook \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -d '{
    "messages": [{"role": "user", "content": "Test message"}],
    "conversation_id": "test_123",
    "user": {"id": "test_user", "platform": "telegram"},
    "bot": {"id": "test_bot", "name": "Test Bot"}
  }'
Expected response:
{
  "content": "Test response"
}

Platform Test Feature

Use the built-in test in Chatbot Platform:
  1. Go to your integration settings
  2. Click Test Integration
  3. Enter a test message
  4. Review the response and any errors

Security Best Practices

Use HTTPS

Always use HTTPS endpoints to encrypt data in transit

Validate Requests

Verify requests come from Chatbot Platform (check IP or signature)
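Signature checks typically HMAC the raw request body with a shared secret. The sketch below assumes an HMAC-SHA256 hex digest scheme; the actual header name and signing mechanism are assumptions, so check Chatbot Platform's security documentation for the real details:

```python
import hashlib
import hmac

def verify_signature(raw_body: bytes, signature: str, secret: str) -> bool:
    """Compare an HMAC-SHA256 hex digest of the raw request body
    against the signature sent with the request (hypothetical scheme)."""
    expected = hmac.new(secret.encode(), raw_body, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking timing information
    return hmac.compare_digest(expected, signature)
```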

Rotate API Keys

Periodically rotate API keys and tokens

Rate Limiting

Implement rate limiting on your backend
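A minimal in-memory, fixed-window limiter keyed by user id illustrates the idea; production deployments usually use Redis or an API gateway instead:

```python
import time
from collections import defaultdict

class FixedWindowLimiter:
    """Allow at most max_requests per window_s seconds per key."""

    def __init__(self, max_requests=30, window_s=60):
        self.max_requests = max_requests
        self.window_s = window_s
        self.counts = defaultdict(int)

    def allow(self, key, now=None):
        """Record one request for key; return False once over the limit."""
        now = time.time() if now is None else now
        window = (key, int(now // self.window_s))
        self.counts[window] += 1
        return self.counts[window] <= self.max_requests
```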
Store API keys as environment variables, never in code or version control.

Troubleshooting

Connection Errors

Symptoms: Timeout or connection refused errors

Solutions:
  • Verify URL is correct and accessible
  • Check firewall rules allow traffic
  • Ensure SSL certificate is valid
  • Test with curl from a different network

Authentication Errors

Symptoms: 401 or 403 responses

Solutions:
  • Double-check API key is correct
  • Verify header names match requirements
  • Check for extra spaces in header values
  • Ensure API key has required permissions

Response Format Errors

Symptoms: Bot sends error message to user

Solutions:
  • Verify response is valid JSON
  • Check content field is present
  • Look for encoding issues (UTF-8)
  • Test response with JSON validator

Timeout Issues

Symptoms: Requests time out before completion

Solutions:
  • Increase timeout value
  • Optimize backend for faster responses
  • Use streaming responses (if supported)
  • Check for network latency

Advanced Configuration

Custom Headers

Add custom headers for:
  • A/B testing identifiers
  • User tracking
  • Feature flags
  • Custom authentication
Example:
X-Bot-ID: bot_123
X-Feature-Flags: streaming,tools
X-User-Tier: premium

Dynamic Payloads

The platform includes useful metadata in every request:
{
  "user": {
    "id": "unique_id",
    "platform": "telegram|slack|discord",
    "name": "User's name",
    "username": "username"
  },
  "metadata": {
    "channel": "platform_name",
    "chat_id": "chat_identifier",
    "message_id": "message_identifier"
  }
}
Use this for:
  • Per-user personalization
  • Platform-specific responses
  • Analytics and tracking
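For instance, a backend can branch on these fields for platform-specific formatting. A tiny sketch, assuming only the field values shown above (the formatting rules are illustrative):

```python
def personalize(payload, text):
    """Prefix a reply with the user's name and adapt formatting
    to the source platform (illustrative rules only)."""
    user = payload.get("user", {})
    name = user.get("name", "there")
    channel = payload.get("metadata", {}).get("channel", "")
    if channel == "slack":
        return f"*{name}*, {text}"  # Slack-style bold markup
    return f"{name}, {text}"
```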

Next Steps

A/B Testing

Configure multiple integrations

Custom Headers

Advanced header configuration