mAIstro
Overview
What is it?
- The mAIstro feature is an open-ended playground for retrieval augmented generation (RAG). It lets users integrate their preferred large language model (LLM), select from a range of data sources (Knowledge Bases, websites, local files, or typed text), and employ the NeuralSeek Template Language (NTL) markup for dynamic content retrieval. mAIstro also enhances data with features like summarization, stopword removal, and keyword extraction, while providing expert guidance on LLM prompt syntax and base weighting. With the ability to output results to an editor or directly to a Word document, mAIstro delivers a powerful, user-friendly experience for content generation and retrieval.
Why is it important?
- Efficient Content Retrieval: mAIstro simplifies the process of accessing and retrieving content from various sources. This efficiency is crucial for anyone who relies on accurate and relevant information.
- Enhanced Data Quality: mAIstro enhances data quality by providing tools for summarization, stopwords removal, and keyword extraction. This ensures that the retrieved content is refined, concise, and tailored to the user's needs, saving time and effort in manual data preprocessing.
- User-Friendly Interface: mAIstro offers a user-friendly interface that makes interacting with Language Models and crafting dynamic prompts accessible to a broader audience. This accessibility is vital for individuals who may not have advanced technical skills but still require the benefits of advanced language models.
- Expert Guidance: mAIstro provides users with expert guidance by pre-configuring LLM prompt parameters and model-specific base weights. This guidance helps users achieve optimal results without the need for in-depth knowledge of language model intricacies.
- Output Flexibility: The ability to output results to an editor or directly to a Word document enhances flexibility and convenience for users, allowing them to seamlessly integrate the generated content into their workflows.
- Semantic Scoring: The incorporation of a Semantic Scoring model allows users to assess the relevance and alignment of generated content with their specific requirements. This feature adds a layer of precision and control to the content generation process.
How does it work?
- mAIstro streamlines the interaction with Language Models, making it accessible and user-friendly while providing powerful tools for content retrieval and enhancement. Users can seamlessly integrate retrieved content into their workflows with precision and control, making it a valuable asset for various professional fields.
Additional Capabilities
- Choice of LLM: (BYOLLM Plans) Select your preferred LLM, and seamlessly integrate it with mAIstro.
- Utilize NeuralSeek Template Language (NTL): Craft dynamic prompts using a combination of regular words and NTL markup to retrieve content from different sources.
- User-Friendly Agent Editor: Create custom prompts with an easy-to-use point-and-click visual agent editor.
- Utilize Other NeuralSeek Features: Extract, Protect, or Seek a query through the mAIstro platform.
- Versatile Content Retrieval: Retrieve data from various sources, including KnowledgeBases, SQL Databases, websites, local files, or your own text.
- Content Enhancement: Improve your data with features like summarization, stopword removal, keyword extraction, and PII removal to ensure your content is refined and valuable.
- Guarded Prompts: mAIstro provides Prompt Injection Protection and Profanity Guardrails, preventing embarrassing moments with Language Generation.
- Table Understanding: Conduct searches and generate answers with natural language queries against structured data.
- Effortless Output: Easily view your generated content within the built-in editor or export it directly to a Word document, offering convenient control over your output.
- Precision Semantic Scoring: All of these operations are assessed using our Semantic Scoring model, giving you insight into how well the generated content aligns with your requirements.
NeuralSeek Template Language (NTL)
NeuralSeek's mAIstro feature is powered by the NeuralSeek Template Language (NTL), which lets users and developers create expressions, or extract and format data from various sources for subsequent processing by an LLM, without traditional coding. This is often faster than writing a custom Python script.
It simplifies many tasks - API connections, data formatting, mathematics - streamlining the process of preparing data for further language model processing.
How does it work?
- Users utilize agent commands within NTL to query databases, websites, uploaded documents, APIs, and more, while specifying parameters for extraction and formatting. The resulting data is then available for use in driving subsequent language generation.
Some general rules
- Given the extensive use of double quotes in NTL, you will typically need to escape double quotes as \"\" when using them inside functions - for example, in SQL / database queries.
- Any blank value (e.g. "") is considered "not present" or null-equivalent.
- Variables used with << >> notation will always expand in-place.
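A minimal sketch of these rules in practice. Only the notation documented above is used here; the surrounding query text is an illustration, not verified agent-command syntax:

```
<< name: status, prompt: true >>

Query: "SELECT name FROM users WHERE status = \"<< status >>\""
```

The variable `status` expands in-place inside the string, and the inner double quotes are escaped as \"\" so they survive within the quoted query.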
Syntax Highlighting
The NeuralSeek Template Language (NTL) brings flexibility to mAIstro by enabling dynamic workflows through functions that support data querying, HTTP requests, calculations, and variable management. Now, with syntax highlighting in the Agent NTL, it’s even easier to write, read, and manage complex code snippets for efficient development.
Important
Any of the example NTL shown here can be copy-pasted into the Agent NTL tab; you can then switch back to the Agent Editor for easier analysis.
Agent Editor
The Agent Editor allows users to create expressions using movable, chain-linked, and customizable blocks that execute commands. It simplifies user interaction through drag-and-drop blocks, making it easy to navigate complex use cases with no code required.
Click to insert
Any element in the left panel can be added to the editor by clicking it.
You can also drag and drop a node from the sidebar into the editor and use it to "chain" nodes together.
Click to edit
Selecting a card will highlight the node blue, and a dialog will appear on the right side to edit the configuration options for the selected node. Depending on the type of the node, there may be several options. See the NTL reference page for a description of all configurable options.
Deleting a node
You may delete a node by clicking the red Delete Node button at the bottom of the options panel.
Hover Menus
Hover menus allow users to easily access and insert secrets, user-defined variables, system-defined variables, or generate new variables while working in the visual builder. This feature enhances the building process by providing quick access to essential elements without disrupting the workflow.
Secrets
This function provides a dropdown of variables defined as "secrets" in the Configure tab. The available secrets will vary depending on your instance.
Variables
This function will provide a dropdown list of previously defined variables that the user can call on with the click of a button.
Dynamic Variables
This function provides a dropdown list of system-defined variables that can be added to the mAIstro flow. The agent must be evaluated before these variables become available.
New
This function will generate a new variable prompt: << name: myVar, prompt: true >>.
Stacking elements
Adding nodes, by default, will connect the elements vertically. We call this Stacking, or building a Flow.
Stacked elements flow from top to bottom, meaning the output produced by the top element will become available as input to the bottom/next element.
Chaining elements
You can also connect elements horizontally. This is called Chaining.
Chaining is useful when you want to direct a node's output. In this example, the output of the LLM is provided as input to the extract-keywords element; the two nodes are chained together.
Example
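Conceptually, a stacked flow behaves like sequential function composition, and chaining routes one node's output into a specific downstream node. A rough Python analogue, where the node functions are stand-ins rather than real NeuralSeek APIs:

```python
def llm_generate(prompt):
    # Stand-in for an LLM node: returns generated text
    return f"Generated answer about {prompt}"

def extract_keywords(text):
    # Stand-in for an extract-keywords node: keeps longer words
    return [word for word in text.split() if len(word) > 6]

# Stacking: the output of the top node feeds the next node down.
draft = llm_generate("renewable energy")

# Chaining: the LLM output is directed into the keyword extractor.
keywords = extract_keywords(draft)
print(keywords)
```

The same top-to-bottom data flow applies in the visual editor: each block's output becomes the next block's input.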
Evaluating
Clicking the evaluate button will run the expression, and generate output.
Saving as user agent
You may frequently use the same expression over and over again. We offer the ability to save the agent for re-use; saved agents can also be triggered via an API call.
Build an expression, then click the Save button along the bottom of the editor. Enter the agent name and an optional description, then click Save in the dialog to save it as a user agent.
Loading the agent
Your saved agent can be loaded into the editor, or called upon later from the API.
Click the Load button along the bottom of the editor, select User Agents, and click the checkbox next to the agent you want to load. Click Load Agent to load the saved agent into the editor.
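A saved agent can also be invoked programmatically over HTTP. The endpoint path, request-body fields, and header name below are assumptions for illustration only; consult your instance's API documentation for the actual contract:

```python
import json
import urllib.request

def build_payload(agent_name, params):
    # Request-body shape is a hypothetical illustration
    return {"agent": agent_name, "params": params}

def call_user_agent(base_url, api_key, agent_name, params):
    """Sketch of invoking a saved user agent via REST.
    The path and header below are assumptions, not a verified API."""
    data = json.dumps(build_payload(agent_name, params)).encode()
    req = urllib.request.Request(
        f"{base_url}/maistro",  # hypothetical endpoint path
        data=data,
        headers={"apikey": api_key, "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

This pattern lets other applications trigger a saved agent by name without opening the editor.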
Output Formats
- Inline - Suitable for displaying rendered output from the flow, supporting charts and HTML.
Example
Display a chart using data retrieved from an API endpoint, rendered with Chart.js.
Preview the HTML generated by the LLM node.
- Raw - Useful for viewing the unprocessed text output from the flow.
Example
Validate raw HTML generated from an HTTP request to a user endpoint, creating a presentational HTML user card.
View the underlying HTML behind the inline format to ensure expected output.
- Word - Quickly download the raw output as a Word document for easy sharing or storage.
- PDF (text) - Generate a text document in PDF format from the raw output.
- PDF (html) - Convert HTML output into a PDF document.
- CSV - Create CSV files with extracted data from various sources.
Example
Extract and preview CSV data, such as medical texts with illnesses and corresponding medications or therapies.
The resulting CSV will look similar to this:
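As an illustration only (the actual columns depend on your extraction prompt), a CSV extracted from medical text might take this shape:

```csv
illness,medication_or_therapy
Type 2 diabetes,Metformin
Hypertension,"Lisinopril, lifestyle changes"
Migraine,Sumatriptan
```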
mAIstro Inspector
- The mAIstro Inspector (the small bug icon near the top-right) allows users to drill down to the details of each step, exposing what was set, when it was set, and how it was processed.
- Expand steps individually to drill down into specific values, calculations, assignments, or generation.
Quick start with auto-builder
Get started by giving a prompt in the auto-builder, for example: Build an agent to send individualized emails to each address listed in an input CSV file. This gives you the ability to start from scratch, use an existing agent, or build one using natural language commands.
This will output a customizable agent that you can test or adapt to your needs.
Automagic Parallel Execution
To optimize performance and reduce the overall execution time, you can run multiple nodes in parallel by assigning their outputs to variables. This allows the total execution time to depend on the longest-running node, rather than the sum of all nodes. This all happens automatically under the hood!
Steps for Parallel Execution
- Define Nodes: Set up multiple nodes to query different data sources (e.g., KnowledgeBase, Websites, Seek).
- Assign Variables: Assign the output of each node to a variable (e.g., kbResult, webResult, seekResult).
- Execute in Parallel: All nodes run simultaneously, and the system will wait only for the slowest node to complete.
- Use Results: After all nodes finish, select the result that best fits your needs by comparing outputs across variables.
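The behavior described above - total runtime bounded by the slowest node rather than the sum of all nodes - is the same effect you get from ordinary thread-based fan-out. A Python sketch with stand-in node functions (not NeuralSeek code):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def query_node(source, delay):
    # Stand-in for a data-source node (KnowledgeBase, website, Seek)
    time.sleep(delay)
    return f"result from {source}"

start = time.monotonic()
with ThreadPoolExecutor() as pool:
    # All three "nodes" start immediately and run concurrently
    kb_future = pool.submit(query_node, "KnowledgeBase", 0.1)
    web_future = pool.submit(query_node, "Website", 0.3)
    seek_future = pool.submit(query_node, "Seek", 0.2)
    results = {
        "kbResult": kb_future.result(),
        "webResult": web_future.result(),
        "seekResult": seek_future.result(),
    }
elapsed = time.monotonic() - start
# elapsed tracks the slowest node (~0.3s), not the sum (0.6s)
```

In mAIstro the equivalent fan-out happens automatically once each node's output is assigned to its own variable.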
Example
In this setup, three nodes are executed in parallel:
- Node 1 retrieves data from a KnowledgeBase.
- Node 2 scrapes content from a Website.
- Node 3 performs a query using Seek.
Final Result
By assigning each node's output to a variable, you ensure that the total runtime is determined by the longest-running node. This strategy improves efficiency by taking advantage of parallelism, ensuring your task completes in the shortest possible time.
Using parallelism in this way significantly reduces execution time and enhances workflow efficiency, without having to write complex code or manage states.
Features
Here is a list of articles relevant to the mAIstro tab.
Guides
Here is a list of guides relevant to the mAIstro tab.