LLM Completion Request
This action allows you to interact with various Large Language Models (LLMs) to generate text, answer questions, summarize information, or create structured data based on your instructions. You provide a prompt, choose an AI model, and the action returns the AI's response.
Input
- User prompt (Text, Required) A clear and concise instruction or question that you want to send to the AI model. This is the primary input for the AI to generate its response.
- User prompt placeholders (Key-Value Pairs, Optional) If your "User prompt" contains dynamic values enclosed in {{double curly braces}} (e.g., {{product_name}}), you can provide a list of key-value pairs here. The action will automatically replace each placeholder with its corresponding value before sending the prompt to the AI.
- System prompt (Text, Optional) This sets the initial instructions for how the AI should behave throughout the conversation. It helps guide the AI's persona, tone, and overall response style. For example, you might instruct it to "Act as a helpful customer service agent."
- System prompt placeholders (Key-Value Pairs, Optional) Similar to user prompt placeholders, these replace dynamic values within your "System prompt" using key-value pairs.
- Model (Dropdown, Required) Select the specific AI model you want to use for this request. Different models have varying capabilities, costs, and performance characteristics. Examples include `gpt-4o`, `claude-sonnet`, and `llama-3.1-8b-instant`.
- Files (List of Files, Optional) A list of files (such as documents or images) that you want the AI to analyze or answer questions about. Note: this feature is only supported by certain advanced AI models. If you select a model that doesn't support files, this input will not be available or will result in an error.
- API token (Password, Optional) If you have your own API key for a specific AI service, you can provide it here. This allows you to use the AI model directly through your own account, bypassing the platform's AI credits.
- Response format (Data Structure, Optional) Define a specific structure (like a JSON schema) for the AI's response. If you provide a data structure, the AI will be instructed to generate its output in that exact format. If left blank, the AI will return a plain text response.
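Placeholder replacement amounts to simple string substitution before the prompt is sent. The helper below is an illustrative sketch of how {{key}} placeholders are resolved from key-value pairs; it is not the platform's actual implementation:

```python
def fill_placeholders(prompt: str, values: dict[str, str]) -> str:
    """Replace each {{key}} placeholder in the prompt with its value."""
    for key, value in values.items():
        prompt = prompt.replace("{{" + key + "}}", value)
    return prompt

filled = fill_placeholders(
    "Write a short tagline for {{product_name}} aimed at {{audience}}.",
    {"product_name": "Acme CRM", "audience": "small businesses"},
)
print(filled)
# → Write a short tagline for Acme CRM aimed at small businesses.
```

Placeholders with no matching key are left untouched, so a typo in a key name shows up verbatim in the prompt the AI receives.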
Output
- Result (Variable) This variable will store the AI's generated response. If you specified a "Response format," the output will be a structured object (e.g., a JSON object) conforming to that format. Otherwise, it will be a plain text string.
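Downstream steps often need to branch on whether a "Response format" was set, because the Result is a structured object in one case and a plain string in the other. A minimal sketch of that handling (the function and variable names here are illustrative, not part of the action):

```python
import json

def handle_result(result: str, structured: bool):
    """Interpret the action's Result: parse it as a JSON object when a
    Response format was provided, otherwise return the plain text as-is."""
    if structured:
        return json.loads(result)  # structured object conforming to the schema
    return result  # plain text string

data = handle_result('{"name": "Alice Wonderland"}', structured=True)
print(data["name"])  # → Alice Wonderland
```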
Real-Life Examples
Example 1: Summarizing Customer Feedback
Scenario: You have a long piece of customer feedback and want a quick summary.
Inputs:
- User prompt: "Summarize the following customer feedback in three bullet points: 'The new update to the mobile app is very buggy. I experienced frequent crashes when trying to access my account details, and the navigation feels clunky. The previous version was much more stable and user-friendly. I hope these issues are resolved soon, as I rely on this app daily.'"
- Model: `gpt-4o-mini`
- Result: `FeedbackSummary`
Result:
The `FeedbackSummary` variable will contain a text string like:
- The mobile app update is buggy with frequent crashes.
- Users are experiencing issues accessing account details and clunky navigation.
- The previous version was more stable, and users hope for quick resolution of current issues.
Example 2: Extracting Contact Information into a Structured Format
Scenario: You receive an email with contact details and want to automatically extract the name, email, and phone number into a structured format for your CRM.
Inputs:
- User prompt: "Extract the contact information from the following text: 'Hello, my name is Alice Wonderland, you can reach me at [email protected] or call me at +1 (555) 123-4567. I look forward to hearing from you.'"
- System prompt: "You are an expert data extraction assistant. Always provide the output in the specified JSON format."
- Model: `claude-sonnet-4`
- Response format: (Assuming you have a Data Structure named "ContactDetails" defined as follows)

```json
{
  "type": "object",
  "properties": {
    "name": { "type": "string" },
    "email": { "type": "string", "format": "email" },
    "phone": { "type": "string" }
  },
  "required": ["name", "email"]
}
```

- Result: `ExtractedContact`
Result:
The `ExtractedContact` variable will contain a structured object:
```json
{
  "name": "Alice Wonderland",
  "email": "[email protected]",
  "phone": "+1 (555) 123-4567"
}
```
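Because "name" and "email" are marked as required in the schema, a downstream step can sanity-check the extracted object before writing it to the CRM. A minimal check in plain Python (illustrative; the platform itself may already enforce the schema):

```python
def check_required(obj: dict, required: list[str]) -> list[str]:
    """Return the required keys that are missing or empty in obj."""
    return [key for key in required if not obj.get(key)]

extracted = {
    "name": "Alice Wonderland",
    "email": "[email protected]",
    "phone": "+1 (555) 123-4567",
}
missing = check_required(extracted, ["name", "email"])
print(missing)  # → []
```

If `missing` is non-empty, the workflow can retry the extraction or route the record for manual review instead of inserting incomplete data.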
Example 3: Answering Questions about a Document
Scenario: You have a PDF document (e.g., a company policy) and want to quickly find specific information within it.
Inputs:
- User prompt: "What is the company's policy on remote work, specifically regarding eligibility and required equipment?"
- Files: (Upload your "CompanyPolicy.pdf" file here)
- Model: `llama-4-maverick-17b-128e-instruct` (or any other model that supports file input)
- Result: `PolicyAnswer`
Result:
The `PolicyAnswer` variable will contain a text string with the relevant information extracted from the PDF, such as:
"The company's remote work policy states that employees are eligible after 6 months of service, subject to manager approval. Required equipment includes a stable internet connection, a company-issued laptop, and a secure workspace. Employees are responsible for maintaining their home office environment."
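When calling a file-capable model directly with your own API token, file contents are typically transmitted base64-encoded inside the request. A standard-library sketch of that encoding step (the payload field names are assumptions for illustration, not this action's actual wire format):

```python
import base64

def encode_for_upload(data: bytes) -> str:
    """Base64-encode raw file bytes for embedding in an API request body."""
    return base64.b64encode(data).decode("ascii")

pdf_bytes = b"%PDF-1.7 example bytes"  # stand-in for CompanyPolicy.pdf contents
payload = {
    "filename": "CompanyPolicy.pdf",   # hypothetical field names
    "content_base64": encode_for_upload(pdf_bytes),
}
# Round-trip check: decoding recovers the original bytes.
assert base64.b64decode(payload["content_base64"]) == pdf_bytes
```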