Prompt Call Logs Overview
Using the "execute" API in LangFlair automatically generates detailed call logs. These logs are instrumental for analyzing the performance and outcomes of your prompts, especially during the testing phase. Here's a breakdown of the fields captured in each prompt call log:
use_case: The specific use case associated with the prompt call.
llm: The Large Language Model used for the call.
llm_model_id: The specific model ID of the LLM.
prompt_id: A unique identifier for the prompt.
prompt_status: The status of the prompt when the call was logged; either 'TESTING' or 'ACTIVE'.
prompt_input: The input provided to the prompt; captured only when the prompt is in 'TESTING' status.
prompt_text: The actual text of the prompt sent to the LLM.
llm_params: Parameters, such as temperature and top_p, that are overridden for this call.
prompt_output: The output generated by the LLM; recorded only when the prompt is in 'TESTING' status.
latency: The time taken for the call, measured in milliseconds.
rating: Numeric feedback from the user, such as -1 or 1.
feedback: Text feedback provided by the user.
status: The status of the prompt call, which can be 'pending', 'success', or 'failed'.
create_date: The date and time when the prompt call was made.
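LangFlair's exact serialization format for these logs is not specified here; as a rough sketch, a single log entry with the fields above might look like the following Python dictionary (all values are illustrative, not real log data):

```python
# Illustrative prompt call log entry. Field names follow the list above;
# every value is made up for demonstration.
log_entry = {
    "use_case": "email_summary",
    "llm": "OpenAI",
    "llm_model_id": "gpt-4",
    "prompt_id": "prm_12345",
    "prompt_status": "TESTING",
    "prompt_input": {"email_body": "..."},       # captured only in TESTING
    "prompt_text": "Summarize the email below: ...",
    "llm_params": {"temperature": 0.2, "top_p": 0.9},
    "prompt_output": "The sender is asking for ...",  # captured only in TESTING
    "latency": 1240,              # milliseconds
    "rating": 1,                  # numeric user feedback, e.g. -1 or 1
    "feedback": "Good summary.",  # text user feedback
    "status": "success",          # 'pending', 'success', or 'failed'
    "create_date": "2024-03-01T12:34:56Z",
}
```

For a prompt in 'ACTIVE' status, the same record would simply omit the `prompt_input` and `prompt_output` keys, as described in the storage rules below.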
Storage and Analysis:
Testing Phase: When a prompt is in "TESTING" status, LangFlair records both the input and output. This data is crucial for analyzing the prompt’s effectiveness and making necessary adjustments.
Active Status: For prompts marked as "ACTIVE", input and output data are not stored to respect data privacy and minimize unnecessary storage. However, latency data and user feedback are still recorded for performance analysis.
LangFlair Managed 'Context': When leveraging LangFlair's managed Context for enhanced LLM interactions, user inputs and LLM outputs are retained for 24 hours.
Log Rollover: Prompt call logs are rolled over daily and only aggregated information is retained for longer-term analysis.
User Feedback: During rollover, any user text feedback is moved to a separate table for future qualitative analysis.
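The storage rules above — dropping input/output for 'ACTIVE' prompts, aggregating logs at rollover, and preserving text feedback separately — can be sketched in Python as follows. The helper names (`redact_for_storage`, `roll_over`) are hypothetical and not part of the LangFlair API; this only illustrates the described behavior:

```python
from statistics import mean

def redact_for_storage(entry: dict) -> dict:
    """Keep prompt_input/prompt_output only while the prompt is in TESTING."""
    if entry["prompt_status"] != "TESTING":
        entry = {k: v for k, v in entry.items()
                 if k not in ("prompt_input", "prompt_output")}
    return entry

def roll_over(entries: list) -> tuple:
    """Daily rollover sketch: reduce a day's logs to aggregates and
    pull out text feedback rows for separate, longer-term storage."""
    summary = {
        "calls": len(entries),
        "avg_latency_ms": mean(e["latency"] for e in entries),
        "success_rate": sum(e["status"] == "success" for e in entries) / len(entries),
    }
    feedback_rows = [
        {"prompt_id": e["prompt_id"], "feedback": e["feedback"]}
        for e in entries
        if e.get("feedback")  # only non-empty text feedback is kept
    ]
    return summary, feedback_rows
```

The point of the split is that per-call detail (inputs, outputs, raw latencies) is short-lived, while cheap aggregates and qualitative feedback survive the daily rollover.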
These call logs serve multiple purposes, from enabling prompt optimization during the testing phase to capturing valuable user feedback on the generated content. By providing a comprehensive view of each prompt call's performance, LangFlair empowers developers to refine their AI-driven features for optimal results.