Task Testing via API
Last updated
Testing your task through the LangFlair API is a straightforward way to validate the functionality and output of your prompts. To initiate testing, follow these steps to locate and use the appropriate API endpoint:
Navigate to the Task Screen: Access your project within LangFlair and open the specific task you wish to test.
Find the API Endpoint Information: Scroll to the lower section of the task screen, where you'll find the API endpoint details. This information is crucial for setting up your API call.
URL Params: The API URL accepts your team's LangFlair API Key and the task's "internal_name" as required parameters.
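As a minimal sketch, the required URL parameters can be assembled like this. The endpoint path shown is hypothetical, as are the exact query-parameter names; copy the real values from the API endpoint details on your task screen.

```python
from urllib.parse import urlencode

# Hypothetical endpoint -- use the URL shown on your task screen.
base_url = "https://api.langflair.com/v1/tasks/run"

params = {
    "api_key": "YOUR_LANGFLAIR_API_KEY",  # your team's API key
    "internal_name": "my_task",           # the task's internal_name
}

request_url = f"{base_url}?{urlencode(params)}"
print(request_url)
```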
Filter Params (Optional): Use filter_params if you've applied filters to your prompts within this task. These parameters narrow the prompts down to those that meet specific criteria, ensuring your API call tests the correct configurations.
LLM Params (Optional): The llm_params allow you to specify additional options or settings for the Large Language Model being used. Adjust these only if you need to customize the LLM's behavior beyond the default settings.
Template Params: The template_params should be a JSON object containing the key-value pairs for the variables used in the system and user text of your prompts. This ensures the API call accurately reflects the dynamic content within your prompts.
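For example, if the prompt text contains variables such as a customer name and a product, the template_params object maps each variable name to its value. The variable names below are illustrative; use the names that appear in your own prompt templates.

```python
import json

# Each key must match a variable used in the system/user text of the
# prompts; these particular names are examples only.
template_params = {
    "customer_name": "Ada",
    "product": "LangFlair",
}

# The API expects this as a JSON object in the request body.
print(json.dumps(template_params))
```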
Context Params (Optional): To enrich LLM interactions with context for more relevant responses, you have two options: pass context_id, a unique identifier for each user session, or provide a previous_messages JSON. Either method helps the LLM generate responses that are contextually aligned with the ongoing conversation. An ideal use case for this is simulating an interactive interview chat.
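The two context options can be sketched as follows. The exact message shape accepted by previous_messages is an assumption here (role/content pairs); check the endpoint details on the task screen for the expected structure.

```python
import json

# Option 1: let LangFlair track the session by a unique identifier.
with_context_id = {"context_id": "session-42"}

# Option 2: supply the conversation history yourself. The role/content
# message shape below is an assumption, not a confirmed schema.
with_history = {
    "previous_messages": [
        {"role": "user", "content": "Tell me about your experience."},
        {"role": "assistant", "content": "I led a small data team."},
    ]
}

print(json.dumps(with_history))
```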
To test your task via the API, construct your API call using the endpoint provided and include the necessary parameters based on your testing needs. If your prompts utilize template variables, ensure the template_params JSON is correctly formatted with the relevant keys and values.
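Putting the pieces together, a full test call might be constructed as below using only the standard library. The endpoint URL, parameter names, and payload keys are illustrative assumptions; substitute the exact values from your task screen. The send itself is left commented out so the sketch stays side-effect free.

```python
import json
from urllib import request
from urllib.parse import urlencode

# Hypothetical endpoint -- copy the real URL from the task screen.
base_url = "https://api.langflair.com/v1/tasks/run"

# Required URL parameters (names are assumptions).
query = urlencode({
    "api_key": "YOUR_LANGFLAIR_API_KEY",
    "internal_name": "my_task",
})

# Request body: template variables plus optional LLM settings.
payload = json.dumps({
    "template_params": {"customer_name": "Ada"},
    "llm_params": {"temperature": 0.2},  # optional override
}).encode("utf-8")

req = request.Request(
    f"{base_url}?{query}",
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)
print(req.full_url)

# To actually send the request:
# with request.urlopen(req) as resp:
#     print(resp.read().decode())
```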
By carefully setting up your API call with the appropriate parameters, you can effectively test the task and gain insight into the performance and output of your prompts under the specified LLM configuration.