POST /ai/v1/chat/completions
Generate chat completions with Messari AI
curl --request POST \
  --url https://api.messari.io/ai/v1/chat/completions \
  --header 'Content-Type: application/json' \
  --header 'X-Messari-API-Key: <api-key>' \
  --data '{
  "allow_clarification_query": true,
  "generate_related_questions": 123,
  "inline_citations": true,
  "messages": [
    {
      "MultiContent": [
        {
          "image_url": {
            "detail": "<string>",
            "url": "<string>"
          },
          "text": "<string>",
          "type": "<string>"
        }
      ],
      "content": "<string>",
      "function_call": {
        "arguments": "<string>",
        "name": "<string>"
      },
      "name": "<string>",
      "reasoning_content": "<string>",
      "refusal": "<string>",
      "role": "<string>",
      "tool_call_id": "<string>",
      "tool_calls": [
        {
          "function": {
            "arguments": "<string>",
            "name": "<string>"
          },
          "id": "<string>",
          "index": 123,
          "type": "<string>"
        }
      ]
    }
  ],
  "response_format": "<string>",
  "stream": true,
  "verbosity": "<string>"
}'
{
  "data": {
    "messages": [
      {
        "content": "<string>",
        "role": "<string>"
      }
    ]
  },
  "error": "<string>",
  "metadata": {
    "charts": [
      {
        "citationId": 123,
        "dataset": "<string>",
        "end": "2023-11-07T05:31:56Z",
        "entities": [
          {
            "entityId": "<string>",
            "entityType": "<string>"
          }
        ],
        "granularity": "<string>",
        "id": 123,
        "metric": "<string>",
        "metricTimeseries": {
          "point_schema": [
            {
              "attribution": [
                {
                  "link": "<string>",
                  "name": "<string>"
                }
              ],
              "description": "<string>",
              "format": "<string>",
              "group_aggregate_operation": "<string>",
              "id": "<string>",
              "is_timestamp": true,
              "name": "<string>",
              "slug": "<string>",
              "subcategory": "<string>",
              "time_bucket_aggregate_operation": "<string>"
            }
          ],
          "series": [
            {
              "entity": {},
              "key": "<string>",
              "points": [
                [
                  "<any>"
                ]
              ]
            }
          ]
        },
        "start": "2023-11-07T05:31:56Z",
        "tier": "<string>"
      }
    ],
    "cited_sources": [
      {
        "citationId": 123,
        "domain": "<string>",
        "title": "<string>",
        "url": "<string>"
      }
    ],
    "related_questions": [
      "<string>"
    ],
    "status": "<string>",
    "trace_id": "aSDinaTvuI8gbWludGxpZnk="
  }
}
If you’d like to play around with the responses, feel free to try the Messari web app; it uses the same underlying endpoint implementation. The chat completion endpoint leverages a graph architecture with agents to:
  • Access Messari’s real-time quantitative (compute) dataset, which includes but is not limited to: market data, asset metrics, fundraising, token unlocks
  • Access Messari’s qualitative (search) dataset, which includes but is not limited to: news, blogs, YouTube transcriptions, RSS feeds, Twitter, webcrawl documents, and proprietary datasets of research, quarterlies, and diligence reports
  • Generate market insights and analysis
  • Process natural language queries about crypto assets, protocols and projects
For a more interactive experience trying out the API, here is our Replit in TypeScript. Simply populate the API_KEY field with your Enterprise API key, generated in the Messari Account > API page of our web app, and you’re off!
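As a sketch of the same call in TypeScript (the endpoint URL, header name, and body fields are taken from the curl example above; the helper names `buildChatRequest` and `askMessari` are hypothetical, and Node 18+ is assumed for the global `fetch`):

```typescript
type ChatMessage = { role: string; content: string };

interface ChatRequest {
  messages: ChatMessage[];
  verbosity?: string;
  response_format?: string;
  inline_citations?: boolean;
  stream?: boolean;
  generate_related_questions?: number;
}

// Build the JSON body for POST /ai/v1/chat/completions from a single
// user question plus any optional request parameters.
function buildChatRequest(
  question: string,
  options: Omit<ChatRequest, "messages"> = {}
): ChatRequest {
  return { messages: [{ role: "user", content: question }], ...options };
}

// Send the request; `apiKey` is your Enterprise API key from the
// Messari Account > API page.
async function askMessari(apiKey: string, body: ChatRequest): Promise<unknown> {
  const res = await fetch("https://api.messari.io/ai/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "X-Messari-API-Key": apiKey,
    },
    body: JSON.stringify(body),
  });
  if (!res.ok) throw new Error(`Messari AI error: HTTP ${res.status}`);
  return res.json();
}
```

On success, the parsed response carries the assistant reply under `data.messages` and citations, charts, and related questions under `metadata`, as shown in the response example above.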

Request Params

verbosity

The verbosity parameter controls the level of detail in the model’s response. When set to "verbose", the model provides more comprehensive and detailed explanations, including additional context, examples, and supporting information. Other values such as "balanced" or "succinct" produce shorter responses. Example usage:
{
  "verbosity": "verbose"
}

response_format

The response_format parameter specifies the desired formatting style for the model’s response. When set to "markdown", the output will be formatted using Markdown syntax, allowing for structured text with headings, lists, code blocks, and other formatting elements. Another common option is "text". Example usage:
{
  "response_format": "markdown"
}

inline_citations

The inline_citations parameter determines whether the response text should include citations inline. When set to true, the model will reference sources directly in the text where information is being drawn from. This is particularly useful for academic, research, or factual content where attribution is important. Example usage:
{
  "inline_citations": true
}
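Inline citations can be resolved against the `cited_sources` array in the response metadata (see the response example above). One caveat: the docs do not specify the marker format, so the sketch below assumes numeric markers like `[1]` whose number matches a `citationId`; verify against a live response before relying on it.

```typescript
// Field names taken from metadata.cited_sources in the response example.
interface CitedSource {
  citationId: number;
  domain: string;
  title: string;
  url: string;
}

// Replace assumed "[n]" markers in the response text with Markdown links,
// using the cited_sources lookup. Unknown markers are left untouched.
function linkCitations(text: string, sources: CitedSource[]): string {
  const byId = new Map(sources.map((s) => [s.citationId, s]));
  return text.replace(/\[(\d+)\]/g, (marker, id) => {
    const src = byId.get(Number(id));
    return src ? `[${id}](${src.url})` : marker;
  });
}
```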

stream

The stream parameter controls whether the API response is delivered as a complete response or as a stream of partial responses. When set to false, the API waits until the entire response is generated and then delivers it in one piece. When set to true, the API begins sending partial responses as they are generated, which is useful for implementing real-time typing effects or processing responses incrementally. Example usage:
{
  "stream": false
}
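A minimal consumer for the streaming case might look like the sketch below. Note the assumption: the docs above do not describe the wire format, so this assumes server-sent-event style `data: {...}` lines with an optional `[DONE]` terminator; confirm against a live streamed response before using it.

```typescript
// Extract the JSON payload from one assumed SSE line; returns null for
// non-data lines and the "[DONE]" terminator.
function parseStreamLine(line: string): unknown | null {
  if (!line.startsWith("data: ")) return null;
  const payload = line.slice("data: ".length).trim();
  if (payload === "[DONE]") return null;
  return JSON.parse(payload);
}

// Consume a streaming fetch Response chunk by chunk (Node 18+),
// invoking onChunk for each parsed partial payload.
async function readStream(
  res: Response,
  onChunk: (piece: unknown) => void
): Promise<void> {
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  let buffer = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    const lines = buffer.split("\n");
    buffer = lines.pop() ?? ""; // keep any incomplete trailing line
    for (const line of lines) {
      const piece = parseStreamLine(line);
      if (piece !== null) onChunk(piece);
    }
  }
}
```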

generate_related_questions

The generate_related_questions parameter determines whether the response metadata should include AI-generated questions related to the user query. This is useful for receiving follow-up questions as part of the API response payload, so you can prompt the user with potential next questions. Note: a maximum of 5 is allowed. Example usage:
{
  "generate_related_questions": 2
}

Authorizations

X-Messari-API-Key
string
header
required

Body

*/*

Response

200
application/json

Default response

The response is of type object.