
Test and Refine Prompt

One of the standout features of LangFlair is the ability to dynamically adjust your prompts and immediately test the changes directly from the console. This functionality not only streamlines the development process but also significantly enhances the effectiveness of your prompts. To make the most of this feature, follow these steps:

Before Testing:

Ensure all template variables within your prompt have been accurately filled out. Template variables are placeholders in your prompt that need specific values for the AI to generate relevant and precise responses. Not providing values for these variables might result in less targeted outputs or errors during the test.

How to Test and Refine:

  1. Fill in Template Variables: Review your prompt and ensure every template variable {{variable_name}} has an appropriate value assigned. These values should reflect the real data or scenarios you expect the prompt to handle (see the sketch after this list).

  2. Navigate to the Testing Console: Within the LangFlair platform, locate and access the testing console. This might be a dedicated section or an option available within the prompt editing area.

  3. Enter Test Values (if applicable): If your prompt requires specific inputs for testing, enter these values in the designated fields. These inputs simulate the actual data your prompt will process, providing a realistic preview of its functionality.

  4. Run the Test: Execute the test to see how the AI processes your prompt with the provided inputs. Pay close attention to the output to determine if it aligns with your expectations.

  5. Refine as Needed: Based on the test results, you may need to adjust the wording, structure, or variables within your prompt. After making changes, retest the prompt to evaluate the improvements. Repeat this process until you achieve the desired outcome.
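To make step 1 concrete, here is a minimal sketch of how {{variable_name}} placeholders might be filled in with test values before running a test. The prompt text, variable names, and helper function below are illustrative assumptions, not part of LangFlair's API; the console performs this substitution for you.

```python
# Minimal sketch: filling template variables before a test run.
# The prompt text and variable names are illustrative examples only.
import re

prompt_template = (
    "You are a helpful travel assistant. Suggest a 3-day itinerary "
    "for {{destination}} for a traveler interested in {{interests}}."
)

test_values = {
    "destination": "Lisbon",
    "interests": "food and architecture",
}

def fill_template(template: str, values: dict) -> str:
    """Replace every {{variable_name}} placeholder with its test value."""
    def substitute(match: re.Match) -> str:
        name = match.group(1)
        if name not in values:
            raise ValueError(f"No test value provided for placeholder '{name}'")
        return values[name]
    return re.sub(r"\{\{(\w+)\}\}", substitute, template)

print(fill_template(prompt_template, test_values))
```

Leaving a placeholder unfilled is exactly the situation described above: the test either fails or produces an unfocused output, so verify every variable before running.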

This iterative process of testing and refining is crucial for developing highly effective prompts that deliver value in real-world applications. LangFlair's console facilitates this workflow, empowering you to perfect your prompts with efficiency and ease.

Context Testing

Certain use cases, such as simulating an interview process, require context to be maintained throughout the interaction. LangFlair provides robust support for context management, allowing for a more natural and cohesive exchange between the end-user and the LLM. This can be achieved through two primary methods:

1. LangFlair Managed 'Context'

LangFlair offers a managed solution for context preservation, where the platform dynamically logs end-user inputs and LLM outputs. This information is then utilized as "previous messages" in subsequent interactions to enrich the context provided to the LLM.

  • How to Use:

    • Unique Context ID: When making a prompt call, pass a unique context ID for each user session. LangFlair uses this ID to build and maintain a continuous context trail, so subsequent responses stay contextually relevant (see the sketch below).
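As a rough illustration, a prompt call that reuses a per-session context ID might look like the sketch below. The endpoint path, payload field names (prompt_id, context_id, variables), and authorization header are assumptions made for illustration; refer to the API Integration section for the actual request format.

```python
# Sketch of reusing one context ID per end-user session so LangFlair can
# maintain the conversation trail. Endpoint, fields, and header are
# assumptions for illustration only.
import uuid
import requests

API_KEY = "YOUR_LANGFLAIR_API_KEY"          # placeholder credential
BASE_URL = "https://api.langflair.example"  # hypothetical base URL

# Generate one context ID per end-user session and reuse it on every call.
session_context_id = str(uuid.uuid4())

def call_prompt(user_message: str) -> str:
    payload = {
        "prompt_id": "interview_simulator",   # hypothetical prompt identifier
        "context_id": session_context_id,     # same ID for the whole session
        "variables": {"user_message": user_message},
    }
    response = requests.post(
        f"{BASE_URL}/v1/prompt-calls",        # hypothetical endpoint
        json=payload,
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json().get("output", "")

# Because each call shares the same context_id, earlier exchanges become
# the "previous messages" LangFlair supplies to the LLM automatically.
print(call_prompt("Tell me about yourself."))
print(call_prompt("What is your greatest strength?"))
```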

2. Manual 'previous_messages' Payload

For prompts with "ACTIVE" status, LangFlair's default behavior does not retain end-user inputs or LLM outputs, so you must maintain context manually.

  • How to Use:

    • Previous Messages: Manually include the last few previous messages in your prompt call payload. With this approach, you collect and manage these messages on your end and pass them along with each request so the LLM has the necessary context for its response (see the sketch below).
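A minimal sketch of the manual approach is shown below, assuming a chat-style history of role/content pairs. As before, the endpoint, payload fields, and message format are illustrative assumptions rather than LangFlair's documented schema; see the API Integration section for the real one.

```python
# Sketch of manually passing conversation history with each prompt call.
# Endpoint, payload fields, and message format are assumptions for
# illustration only.
import requests

API_KEY = "YOUR_LANGFLAIR_API_KEY"          # placeholder credential
BASE_URL = "https://api.langflair.example"  # hypothetical base URL

# Your application is responsible for collecting and trimming this history.
previous_messages = [
    {"role": "user", "content": "Tell me about yourself."},
    {"role": "assistant", "content": "I have five years of data engineering experience..."},
]

new_question = "What is your greatest strength?"

payload = {
    "prompt_id": "interview_simulator",           # hypothetical prompt identifier
    "variables": {"user_message": new_question},
    "previous_messages": previous_messages[-6:],  # pass only the last few turns
}

response = requests.post(
    f"{BASE_URL}/v1/prompt-calls",                # hypothetical endpoint
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)
response.raise_for_status()

# Append the new exchange so the next request carries updated context.
reply = response.json().get("output", "")
previous_messages.append({"role": "user", "content": new_question})
previous_messages.append({"role": "assistant", "content": reply})
```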

Choosing Your Context Management Strategy

  • Managed Context: Opt for LangFlair's managed context if you prefer automated, hassle-free context tracking that dynamically updates with each interaction. This is ideal for complex, ongoing conversations where continuity is key.

  • Manual Payload: Select the manual 'previous_messages' method if you need more control over what is passed as context or when using "ACTIVE" status prompts where LangFlair does not store interaction history. This option might be suitable for simpler interactions or when specific, targeted context is required.

Both methods are designed to enhance the conversational quality and relevance of LLM responses, enabling you to tailor the user experience based on your specific needs and the nature of the interaction.

In the upcoming sections of this documentation, we will cover how to manage context through LangFlair APIs in detail.

Handling LLM Key Errors

In LangFlair, the alignment of your Large Language Model (LLM) keys with your team is crucial for seamless operation. If you encounter an error indicating that your LLM key is not associated with your current team, it means the platform cannot access the necessary LLM resources under the current team settings.

Error Message:

If the LLM key you've provided or are trying to use is not configured for the team you're currently working with, LangFlair will display an error message to alert you of this mismatch.

Steps to Resolve:

  1. Verify Your Team Selection: Ensure you're working within the correct team environment where the LLM key was originally configured or is intended to be used.

  2. Check LLM Key Configuration: Review the LLM key settings within your LangFlair account to confirm that it's correctly associated with your intended team. This might involve checking the key's permissions or the team settings where it's applied.

  3. Update or Reconfigure the LLM Key: If the key is misassigned or misconfigured, update the LLM key settings or re-enter the key for the correct team. Ensure you have the proper permissions to do so.

  4. Contact Support: If you're unable to resolve the issue on your own, don't hesitate to reach out to LangFlair's support team. Provide them with the error message details and your account information for more personalized assistance.

By following these steps, you can quickly identify and resolve issues related to LLM key mismatches, ensuring your projects proceed without interruption.
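If your application calls LangFlair programmatically, it also helps to surface this error clearly in your client code so the key/team mismatch is easy to diagnose and to share with support. The sketch below uses a hypothetical endpoint and does not assume any particular error payload format, since the exact message LangFlair returns may differ.

```python
# Sketch of surfacing a prompt-call failure (such as an LLM key/team
# mismatch) in client code. The endpoint is hypothetical and no specific
# error payload format is assumed.
import requests

API_KEY = "YOUR_LANGFLAIR_API_KEY"          # placeholder credential
BASE_URL = "https://api.langflair.example"  # hypothetical base URL

response = requests.post(
    f"{BASE_URL}/v1/prompt-calls",           # hypothetical endpoint
    json={"prompt_id": "interview_simulator", "variables": {}},
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)

if not response.ok:
    # Log the status code and raw error body so a key/team mismatch is
    # easy to spot and to include when contacting support.
    print(f"Prompt call failed with HTTP {response.status_code}: {response.text}")
else:
    print(response.json())
```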
