Automate Anything with LLMs in NoCode-X - Effortless AI Workflows

Introduction

This guide demonstrates how to use Large Language Models (LLMs) in NoCode-X to automate workflows and create AI-powered applications. The tutorial covers building a simple SVG generator using LLMs, integrating AI tasks, and ensuring observability for debugging.

Video Tutorial

Use Cases

  • Generating SVG icons dynamically for web applications.
  • Automating repetitive tasks using AI workflows.
  • Building AI-powered tools for creative tasks like image generation.
  • Debugging and monitoring AI tasks with built-in observability.

Prerequisites

  • A NoCode-X workspace with access to AI task execution.
  • Basic understanding of NoCode-X page creation and actions.
  • Familiarity with LLMs like GPT-3.5 or GPT-4.

Quick Start Guide

  1. Create a Page:

    • Add a text input, button, and image to the page.
    • Design the layout using vertical and horizontal lists.
  2. Set Up AI Task:

    • Use the "Execute an AI Task" action to call an LLM.
    • Pass the user input as a prompt to the LLM.
  3. Display the Output:

    • Extract the SVG from the LLM response.
    • Set the SVG as the source for the image element.
  4. Test and Debug:

    • Use application logs to monitor prompts and responses.
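The four Quick Start steps can be sketched as a single flow. This is illustrative pseudocode, not the NoCode-X API: `buildPrompt` and `extractSvg` are hypothetical stand-ins for the "Execute an AI Task" action's prompt template and the "Extract XML" function, and the LLM response is mocked so the flow can run without an API call.

```javascript
// Step 2: pass the user's input as the prompt for the AI task.
function buildPrompt(userInput) {
  return `Create an SVG based on the following description: ${userInput}`;
}

// Step 3: pull the <svg>...</svg> fragment out of the raw LLM response,
// which often wraps the markup in extra explanatory text.
function extractSvg(response) {
  const match = response.match(/<svg[\s\S]*?<\/svg>/);
  return match ? match[0] : null;
}

// Simulated run with a mocked LLM response (no real API call).
const prompt = buildPrompt("a red circle");
const mockResponse =
  'Here is your icon: <svg><circle r="10" fill="red"/></svg> Enjoy!';
const svg = extractSvg(mockResponse);

console.log(prompt); // the prompt that would be sent to the LLM
console.log(svg);    // the SVG fragment to set as the image source
```

In NoCode-X itself these steps are configured visually; the sketch only shows the data flow from input to prompt to extracted SVG.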

Detailed Implementation Steps

1. Creating the Page Layout (Timestamp: 0:26-3:13)

  • Add a vertical list to the page and include a title, text input, button, and image.
  • Customize the design with colors, fonts, and alignment.
// Example: Page layout setup
const page = {
  title: "SVG Creator",
  elements: [
    { type: "textInput", id: "promptInput", label: "Prompt" },
    { type: "button", id: "generateButton", label: "Generate SVG" },
    { type: "image", id: "svgImage" }
  ]
};

2. Setting Up the AI Task (Timestamp: 3:32-6:03)

  • Create an action for the button to execute an AI task.
  • Use the "Execute an AI Task" function to call the LLM.
// Example: AI task setup
const aiTask = {
  name: "Generate SVG",
  description: "Create an SVG based on the following description: {{prompt}}",
  model: "GPT-3.5",
  outputFormat: "SVG"
};

3. Extracting and Displaying the SVG (Timestamp: 6:08-7:01)

  • Extract the SVG from the LLM response using the "Extract XML" function.
  • Set the SVG as the source for the image element.
// Example: Extracting and displaying SVG
const svg = extractXML(aiResponse);   // pull the <svg> element out of the raw response
setImageSource("svgImage", svg);      // render it in the image element on the page

4. Testing and Debugging (Timestamp: 9:02-10:23)

  • Use application logs to monitor prompts and responses.
  • Verify that the correct SVG is generated and displayed.
// Example: Logging AI task details
console.log("Prompt sent to LLM:", prompt);
console.log("LLM response:", aiResponse);

Advanced Features

1. Switching Between LLMs (Timestamp: 7:57-8:11)

  • Easily switch between different LLMs like GPT-3.5, GPT-4, or DeepSeek.
// Example: Switching LLMs
aiTask.model = "GPT-4";

2. Observability and Debugging (Timestamp: 9:02-10:23)

  • Use built-in observability tools to debug AI tasks and monitor responses.
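The built-in tools are configured in the NoCode-X interface, but the idea behind them can be sketched in plain JavaScript: record the prompt, a preview of the response, and a timestamp for every AI call. `logAiCall` is a hypothetical helper, not part of the platform.

```javascript
// Minimal observability sketch: capture what was sent to and received
// from the LLM so a failing generation can be traced in the logs.
function logAiCall(taskName, prompt, response) {
  const entry = {
    task: taskName,
    prompt: prompt,
    responsePreview: response.slice(0, 120), // truncate long outputs
    timestamp: new Date().toISOString()
  };
  console.log(JSON.stringify(entry));
  return entry;
}

const entry = logAiCall(
  "Generate SVG",
  "a red circle",
  '<svg><circle r="10" fill="red"/></svg>'
);
```

Logging a truncated response preview rather than the full output keeps log entries readable when the model returns large SVG documents.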

3. Customizing Prompts (Timestamp: 5:05-5:48)

  • Add placeholders and dynamic inputs to create flexible prompts.
// Example: Dynamic prompt
aiTask.description = `Create an SVG based on the following description: ${userInput}`;
