Insight API

Chat Completions API

Chat Completions Streaming API

OpenAI-Compatible Endpoint

Overview

The Chat Completions API provides an OpenAI-compatible interface for interacting with the Pulse AI Research Agent. This endpoint enables real-time streaming responses using Server-Sent Events (SSE), allowing applications to display AI-generated content as it's being created.
The API follows the OpenAI chat completions format for maximum compatibility with existing tools and libraries designed for OpenAI's API.

Base URL

https://insight.lineupiq.io/v1/chat/completions

Authentication

The API requires authentication using both an API key and an application ID:

| Header       | Description                     | Required |
|--------------|---------------------------------|----------|
| x-api-key    | Your API key for authentication | Yes      |
| x-app-id     | Your application ID             | Yes      |
| Content-Type | Must be application/json        | Yes      |

Authentication errors will return a 401 status code with details about the missing or invalid credentials.
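Assembled in code, the three required headers look like this. A minimal sketch; the key and app-ID values below are placeholders, not real credentials:

```python
# Required headers for every request to the Chat Completions API.
# The credential values are placeholders for illustration only.
API_KEY = "sk-your-api-key"  # placeholder
APP_ID = "your-app-id"       # placeholder

def build_headers(api_key: str, app_id: str) -> dict:
    """Return the three headers the endpoint requires."""
    return {
        "x-api-key": api_key,
        "x-app-id": app_id,
        "Content-Type": "application/json",
    }

headers = build_headers(API_KEY, APP_ID)
```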

Request Format

HTTP Method

POST

Request Body

{
  "model": "control-4v1",
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful assistant specializing in music industry knowledge."
    },
    {
      "role": "user",
      "content": "I'm looking for venues in Edmonton that can host a rock concert."
    }
  ],
  "stream": true
}
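The same request body can be assembled programmatically. A sketch in Python; the helper name `build_chat_request` is hypothetical, not part of any SDK:

```python
import json

def build_chat_request(user_message: str, system_prompt: str,
                       model: str = "control-4v1") -> dict:
    """Build a request body matching the format shown above."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
        "stream": True,
    }

payload = build_chat_request(
    "I'm looking for venues in Edmonton that can host a rock concert.",
    "You are a helpful assistant specializing in music industry knowledge.",
)
body = json.dumps(payload)  # serialized body, sent with Content-Type: application/json
```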

Parameters

| Parameter          | Type    | Description                                                          | Required | Default |
|--------------------|---------|----------------------------------------------------------------------|----------|---------|
| model              | string  | ID of the model to use (e.g., 'control-4v1', 'discovery-pulse-4v1')  | Yes      | -       |
| messages           | array   | A list of messages comprising the conversation so far                | Yes      | -       |
| messages[].role    | string  | The role of the message author (system, user, or assistant)          | Yes      | -       |
| messages[].content | string  | The content of the message                                           | Yes      | -       |
| stream             | boolean | Whether to stream the response (recommended)                         | No       | true    |

Response Format

The API returns a stream of events in the Server-Sent Events (SSE) format with content type text/event-stream. Each event starts with data: followed by a JSON string and two newlines.

Successful Response (200 OK)

The streaming response consists of multiple events:

1. First Content Chunk

   data: {"choices": [{"delta": {"role": "assistant", "content": "Hello"}, "index": 0, "finish_reason": null}]}\n\n

2. Subsequent Content Chunks

   data: {"choices": [{"delta": {"content": ", I am your assistant"}, "index": 0, "finish_reason": null}]}\n\n

3. End Marker with Finish Reason

   data: {"choices":[{"delta":{},"finish_reason":"stop","index":0}]}\n\n

4. Final Done Marker

   data: [DONE]\n\n
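The event sequence above can be consumed with a small accumulator that stops at the finish reason or the [DONE] marker. A sketch; the function name `accumulate_stream` is hypothetical, and the sample events are copied from the examples above:

```python
import json

def accumulate_stream(events):
    """Collect assistant text from SSE 'data:' events until [DONE] or finish_reason."""
    text = []
    for event in events:
        data = event.removeprefix("data: ").strip()
        if data == "[DONE]":
            break
        chunk = json.loads(data)
        choice = chunk["choices"][0]
        text.append(choice["delta"].get("content", ""))
        if choice.get("finish_reason") == "stop":
            break
    return "".join(text)

sample = [
    'data: {"choices": [{"delta": {"role": "assistant", "content": "Hello"}, "index": 0, "finish_reason": null}]}\n\n',
    'data: {"choices": [{"delta": {"content": ", I am your assistant"}, "index": 0, "finish_reason": null}]}\n\n',
    'data: {"choices":[{"delta":{},"finish_reason":"stop","index":0}]}\n\n',
    'data: [DONE]\n\n',
]
```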

Error Responses

Authentication Errors (401 Unauthorized)

Missing API Key:
{
  "detail": "API key missing. Provide it via X-API-Key header or Bearer token"
}
Invalid API Key:
{
  "detail": "Invalid API key"
}
Missing User Email:
{
  "error": {
    "message": "Authentication required",
    "type": "authentication_error",
    "param": null,
    "code": "invalid_request_error"
  }
}

In-Stream Errors

Errors that occur after the stream has started will be sent as events in the stream:
data: {"error": {"message": "An internal server error occurred", "type": "server_error", "param": null, "code": "internal_error"}}\n\n
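Because errors arrive as ordinary stream events, a client should check each decoded payload for an "error" key before reading choices. A sketch; the exception name `StreamError` is hypothetical:

```python
import json

class StreamError(Exception):
    """Raised when an error event arrives mid-stream (hypothetical name)."""

def decode_event(data: str):
    """Decode one SSE payload; raise StreamError on in-stream error events."""
    if data.strip() == "[DONE]":
        return None
    event = json.loads(data)
    if "error" in event:
        err = event["error"]
        raise StreamError(f'{err.get("type", "error")}: {err.get("message", "")}')
    return event
```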

Client Implementation

JavaScript Example
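A minimal browser/Node sketch using fetch and a ReadableStream reader, assuming the endpoint and headers documented above; the credential values and function names are placeholders:

```javascript
// Pull the content fragment out of one "data: ..." SSE line, or return null.
function extractDelta(line) {
  if (!line.startsWith("data: ")) return null;
  const data = line.slice(6).trim();
  if (data === "[DONE]") return null;
  const chunk = JSON.parse(data);
  const delta = chunk.choices?.[0]?.delta ?? {};
  return delta.content ?? null;
}

// Stream a chat completion, invoking onToken for each content fragment.
async function streamChat(messages, apiKey, appId, onToken) {
  const response = await fetch("https://insight.lineupiq.io/v1/chat/completions", {
    method: "POST",
    headers: {
      "x-api-key": apiKey,
      "x-app-id": appId,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ model: "control-4v1", messages, stream: true }),
  });
  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  let buffer = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    const lines = buffer.split("\n");
    buffer = lines.pop(); // keep any partial line for the next read
    for (const line of lines) {
      const token = extractDelta(line);
      if (token !== null) onToken(token);
    }
  }
}
```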

Python Example
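A sketch using the third-party requests library (pip install requests). Credentials are placeholders, and the helper names are hypothetical; the SSE parsing is factored out so it works without a network call:

```python
import json

def iter_deltas(lines):
    """Yield content fragments from an iterable of SSE lines."""
    for line in lines:
        if not line or not line.startswith("data: "):
            continue
        data = line[len("data: "):].strip()
        if data == "[DONE]":
            return
        chunk = json.loads(data)
        content = chunk["choices"][0]["delta"].get("content")
        if content:
            yield content

def stream_chat(messages, api_key, app_id, model="control-4v1"):
    """POST to the endpoint and yield tokens as they arrive."""
    import requests  # third-party dependency

    response = requests.post(
        "https://insight.lineupiq.io/v1/chat/completions",
        headers={
            "x-api-key": api_key,
            "x-app-id": app_id,
            "Content-Type": "application/json",
        },
        json={"model": model, "messages": messages, "stream": True},
        stream=True,
        timeout=300,  # the docs cap streaming at 5 minutes
    )
    response.raise_for_status()
    yield from iter_deltas(response.iter_lines(decode_unicode=True))

if __name__ == "__main__":
    for token in stream_chat(
        [{"role": "user", "content": "Find rock venues in Edmonton."}],
        api_key="sk-your-api-key",  # placeholder
        app_id="your-app-id",       # placeholder
    ):
        print(token, end="", flush=True)
```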

Rate Limits

| Plan       | Requests per minute | Tokens per minute |
|------------|---------------------|-------------------|
| Free       | TBD                 | TBD               |
| Basic      | TBD                 | TBD               |
| Pro        | TBD                 | TBD               |
| Enterprise | Custom              | Custom            |

Exceeding these limits will result in a 429 Too Many Requests response.
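A common way to handle 429 responses is exponential backoff. A generic sketch; the helper name `with_retries` is hypothetical, and `call` stands in for any function that raises on a failed request:

```python
import time

def with_retries(call, max_attempts=3, base_delay=1.0, sleep=time.sleep):
    """Retry `call` with exponential backoff, e.g. after a 429 response.

    `call` should raise an exception on failure; `sleep` is injectable for testing.
    """
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
```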

Models

The following models are available through this endpoint:

| Model ID            | Description                   | Context Window | Use Case                       |
|---------------------|-------------------------------|----------------|--------------------------------|
| discovery-pulse-4v1 | Research-focused AI assistant | 32K tokens     | Deep research, complex queries |

Best Practices

1. Always use streaming: Set stream: true for the best user experience, allowing content to appear incrementally.
2. Include conversation history: Pass the full conversation history in the messages array to maintain context.
3. Use system messages: Set the assistant's behavior and capabilities with a system message at the beginning of your messages array.
4. Handle errors gracefully: Implement proper error handling for both pre-stream and in-stream errors.
5. Implement reconnection logic: Have your client automatically reconnect if the connection is lost during streaming.
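Practices 2 and 3 amount to maintaining the messages array across turns. A sketch of that bookkeeping; both helper names are hypothetical:

```python
def start_conversation(system_prompt):
    """Seed the messages array with a system message (practice 3)."""
    return [{"role": "system", "content": system_prompt}]

def add_turn(messages, user_text, assistant_text):
    """Append a completed user/assistant exchange so the next request
    carries the full conversation history (practice 2)."""
    messages.append({"role": "user", "content": user_text})
    messages.append({"role": "assistant", "content": assistant_text})
    return messages
```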

Limitations

- Maximum request size: 32KB
- Maximum conversation length: 32K tokens (varies by model)
- Maximum streaming duration: 5 minutes

Support

If you encounter any issues or have questions about the Chat Completions API, please contact our support team at api-support@lineupiq.io.