
Working with LangFlair APIs

LangFlair APIs are designed with a broad range of product and engineering use cases in mind, offering flexible solutions to integrate Large Language Model (LLM) capabilities into your applications. Here's how to get started:

Endpoint

Base URL: https://www.langflair.com/api

Authentication

LangFlair uses team API keys for authentication, which can be generated from the team management screen. To authenticate your API calls, pass the key as a URL parameter.

Example:

https://www.langflair.com/api/uni-call/execute?key=<API_KEY>
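For example, a small Python helper can assemble authenticated URLs. This is only a sketch using the standard library, not an official client; the "build_url" helper and the key value are illustrative.

```python
from urllib.parse import urlencode

BASE_URL = "https://www.langflair.com/api"

def build_url(endpoint: str, api_key: str, **params: str) -> str:
    """Return a LangFlair API URL with the team key as a query parameter."""
    query = urlencode({"key": api_key, **params})
    return f"{BASE_URL}/{endpoint}?{query}"

url = build_url("uni-call/execute", "MY_API_KEY")
print(url)  # https://www.langflair.com/api/uni-call/execute?key=MY_API_KEY
```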

Core APIs

LangFlair provides three main APIs for handling both self-managed and LangFlair-managed LLM calls:

1. Prompt API

Use this API to retrieve the prompt for a given use case if you plan to handle LLM calls on your own.

CURL Command:

curl --location --request POST 'https://www.langflair.com/api/uni-call/prompt?key=<API_KEY>&use_case=<USE_CASE_INTERNAL_NAME>' \
--header 'Content-Type: application/json' \
--data-raw '{
    "filter_params": {
        "var1": "val2"
    }
}'

Here, "filter_params" is optional. It is required only if you have created prompts for the given use case with filters.

Output Format:

{
    "llm": "llm-name",
    "llm_model": "llm-model-id",
    "prompt_text": "Some user text with variables - {{var1}}",
    "system_text": "Some system prompt text with variables - {{var2}}"
}

You can now use the returned system and user prompt texts to call the specified LLM on your own.
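Since the returned templates contain "{{variable}}" placeholders, you will typically substitute your values before making the LLM call. A minimal Python sketch; the "render" helper and the parameter values are illustrative, not part of any LangFlair library:

```python
import re

# Example response, matching the Prompt API output format above.
response = {
    "llm": "llm-name",
    "llm_model": "llm-model-id",
    "prompt_text": "Some user text with variables - {{var1}}",
    "system_text": "Some system prompt text with variables - {{var2}}",
}

def render(template: str, params: dict) -> str:
    """Replace each {{name}} placeholder with its value from params."""
    return re.sub(r"\{\{(\w+)\}\}", lambda m: str(params[m.group(1)]), template)

params = {"var1": "hello", "var2": "you are a helpful assistant"}
user_prompt = render(response["prompt_text"], params)
system_prompt = render(response["system_text"], params)
print(user_prompt)  # Some user text with variables - hello
```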


2. Execution API (Recommended)

For an end-to-end managed LLM call response, use this API. LangFlair manages the LLM calls, simplifying the process.

CURL Command:

curl --location --request POST 'https://www.langflair.com/api/uni-call/execute?key=<API_KEY>&use_case=<USE_CASE_INTERNAL_NAME>' \
--header 'Content-Type: application/json' \
--data-raw '{
    "template_params": {
        "var1": "val1",
        "var2": "val2"
    },
    "filter_params": {
        "var1": "val1"
    },
    "context_id": "context_id", 
    "previous_messages": [
        {
            "role": "user",
            "content": "Hello"
        },
        {
            "role": "assistant",
            "content": "Hi, how can I help you?"
        }
    ]
}'

Here "template_params" correspond to variables in both system and user prompt texts. "filter_params" are optional for refining prompt selections by additional criteria. "context_id" and "previous_messages," also optional, enhance LLM interactions with prior context. Use a unique "context_id" for LangFlair-managed contexts, or manually provide "previous_messages" for alternative context handling.
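The same request can be made from Python with only the standard library. This is a sketch mirroring the CURL example above; the key, use-case name, and parameter values are placeholders, and the network call is left commented out:

```python
import json
from urllib import request

API_KEY = "MY_API_KEY"        # placeholder team API key
USE_CASE = "my_use_case"      # placeholder use-case internal name
url = (
    "https://www.langflair.com/api/uni-call/execute"
    f"?key={API_KEY}&use_case={USE_CASE}"
)

payload = {
    "template_params": {"var1": "val1", "var2": "val2"},
    # Optional fields; omit them if the use case has no filters or context.
    "filter_params": {"var1": "val1"},
    "context_id": "context_id",
}

req = request.Request(
    url,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# resp = request.urlopen(req)   # uncomment to send the call
# result = json.load(resp)      # e.g. result["prompt_output"]
```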

Output Format:

{
    "latency": 1234, // Milliseconds
    "llm": "llm-name",
    "llm_model": "llm-model-id",
    "prompt_call_id": "alphanumeric id",
    "prompt_id": "alphanumeric id",
    "prompt_output": "prompt output from llm"
}

Make a note of "prompt_call_id", as it is required by the Feedback API below.

3. Feedback API

Capture end-user feedback on the product experience generated from LLM output. This data is invaluable for prompt optimization.

curl --location --request POST 'https://www.langflair.com/api/uni-call/feedback?key=<API_KEY>' \
--header 'Content-Type: application/json' \
--data-raw '{
    "prompt_call_id": "prompt_call_id",
    "rating": 1, // User rating (1 = thumbs up, -1 = thumbs down)
    "feedback": "(optional) Custom feedback"
}'
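A corresponding Python sketch, again using only the standard library; all field values are placeholders, and "prompt_call_id" would come from an earlier Execution API response. Uncomment the final line to actually send the request:

```python
import json
from urllib import request

API_KEY = "MY_API_KEY"  # placeholder team API key
url = f"https://www.langflair.com/api/uni-call/feedback?key={API_KEY}"

feedback = {
    "prompt_call_id": "prompt_call_id",  # from the Execution API response
    "rating": 1,                         # 1 = thumbs up, -1 = thumbs down
    "feedback": "Custom feedback",       # optional free-text field
}

req = request.Request(
    url,
    data=json.dumps(feedback).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# request.urlopen(req)  # uncomment to send
```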

Library Support

Currently, LangFlair does not offer Python or Node libraries for calling various LLMs from user code. We are actively working on developing these resources and will keep our users updated on our progress.

By leveraging these APIs, you can seamlessly integrate sophisticated LLM functionalities into your applications, enhancing them with AI-powered content generation, analysis, and interaction capabilities.


Last updated 1 year ago