October 2024 LLMz (Beta)

πŸ› οΈ This month includes a large update to LLMz, the engine that powers Autonomous Nodes. You can continue to use previous, stable versions, which we recommend for production environments. If you want to give new functionalities a try, you can swap to October 2024 in the Settings menu found in the Studio. Here are some of the new functionalities and improvements:

πŸ”„ Autonomous Nodes now support multiple message types: text, image, audio, video, button, card, and carousel.
πŸš€ Autonomous Nodes can now process multiple messages in the same turn, so that your agent is more flexible in responding to different types of user queries.
πŸ“Š Agent logging now includes visually distinct message types: error, success, prompt, info.
βš™οΈ Autonomous Nodes can now be adjusted to only respond when needed, such that it can perform background tasks without exposing that to the user.
⬆️ We increased the instruction limit on Autonomous Nodes by 10x. This means you can input larger instructions without running up against a limit.

Here's what else is new:

FEATURES & ENHANCEMENTS

🎨 The webchat CSS editor now contains inline autocomplete, so you can get to coding faster with all of the available classes and styling components.
πŸ”— We moved integrations and shareable workflows into a single, unified Hub, to make it easier to find the functionality you’re looking for.
⚑ General improvements to the performance of the visual flow editor.
πŸ“‹ You can now reorder Table columns.
πŸ–ŒοΈ The Studio and Dashboard have gone through a large visual revamp, giving you more contextual messages and making it easier to find what you’re looking for. No functionalities were changed.
πŸ“‚ You can now see Files uploaded to your bot through the File API in the Studio. These will be greyed out until they belong to a Knowledge Base.
πŸ’» Improved typings in the execute code card.

BUG FIXES

πŸ› Fixed an issue where pasting Knowledge Base URLs would result in empty fields.
πŸ”§ Improved the performance and fixed the behaviour of outgoing hooks.
πŸ›‘οΈ Fixed an issue where the Policy Agent would rewrite messages even if the message conformed to the indicated policy.

MISCELLANEOUS

πŸ—‚οΈ You can now overwrite files uploaded by another Workspace member with the Files API.


Workspace Dashboard Refresh

A revamped Workspace homepage in the dashboard gives you faster, clearer insights into bot performance. The streamlined layout focuses on real-time metrics and trends at a glance.

Webchat Styler

We introduced a native styler in the dashboardβ€”offering both simplicity and deep customization. Control the look and feel of your webchat easily, from basic appearance tweaks to detailed, fine-grained adjustments with custom CSS and an intelligent code editor.

What else is new?

FEATURES & ENHANCEMENTS

🧭 The updated interface for indexing specific web pages gives you better visibility into what's being indexed for your KB.
πŸ“Š The billing dashboard now shows next month’s subscription plan when you modify your current one.
πŸ” You can now use the File API to point your searches to multiple KBs seamlessly.

BUG FIXES

🧠 Resolved an issue where answers would be incorrect when the query was too short.
πŸ‘₯ Fixed a bug where two builders performing the same action would create duplicates instead of merging gracefully.
⏰ We resolved a problem with sending timed events.

HITL

The Human in the Loop (HITL) feature offers seamless control over chatbot interactions. You can now step in and manage conversations manually whenever sensitive queries arise, ensuring greater oversight and brand safety. This feature is ideal for industries where compliance and human judgment are essential, or for any situation where a manual touch is necessary.

Integration Hub Revamp

The Integration Hub has been expanded, allowing you to connect and orchestrate multiple systems and APIs with ease. It's also now easier to find and install pre-built integrations, reducing setup time and simplifying workflows. With this update, you can streamline operations across platforms for improved flexibility and scalability.

What else is new?

FEATURES & ENHANCEMENTS

🌟 Autonomous Nodes now support copy/paste, including copying and pasting multiple nodes at once.
🌟 The logging for LLMz has been improved significantly.
πŸ€– You can now use Cerebras models within Botpress Studio.
πŸ“š When an Autonomous Node uses the Knowledge Base, the start node is no longer required to use the KB agent.
πŸ”§ Autonomous Nodes now have typed inputs (string, number) for functions to execute, along with improvements to slow operations and overall speed.
πŸ”— Integrations are now enabled by default upon installation.
πŸ•·οΈ The web crawler has been updated to fix an issue where certain URLs were being ignored and to handle capitalization correctly.
🌐 You can now manually enter URLs into an indexed website.

BUG FIXES

πŸ› οΈ An issue where variables were not being properly created under specific conditions has been fixed.
πŸ“¨ Queued messages have been improved, and messages are now delivered in the correct order.
🧣 Fixed an issue where you were unable to read/write user variables after receiving a trigger.
πŸ“ A fix was applied to resolve an issue with query KB cards.
πŸ›‘ Fixed an issue where table import wasn't working.
⚠️ The problem causing the "add website" modal to crash has been fixed.
❌ An issue preventing the deletion of integrations under certain circumstances has been resolved.

Bring Your Own LLM

Now you can plug in your own LLMs directly into Botpress using the LLM interface. This gives you more flexibility and control over your bot's AI-powered actions. Whether you're working with custom models or specialized solutions, you can now easily integrate them into your workflows. This update is perfect for those who want to fine-tune their bot's behaviour to fit specific use cases or experiment with different LLMs on the fly.
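
As a rough illustration of the idea, the sketch below shows an action handler that forwards a prompt to a self-hosted model endpoint. The action name, input/output shapes, and endpoint are all hypothetical; consult the LLM interface documentation for the exact contract.

```typescript
// Conceptual sketch only: an integration action that proxies generation requests
// to your own model. Names, shapes, and the endpoint are illustrative assumptions,
// not the exact Botpress LLM interface.
type GenerateInput = { prompt: string }
type GenerateOutput = { text: string }

export const generateContent = async ({ input }: { input: GenerateInput }): Promise<{ output: GenerateOutput }> => {
  const res = await fetch('https://my-llm.internal/v1/generate', { // hypothetical endpoint
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ prompt: input.prompt }),
  })
  const data = (await res.json()) as { text: string }
  return { output: { text: data.text } }
}
```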

Hugging Face Integration

You can now access Hugging Face's extensive library of models right from the Botpress Hub, making it simple to deploy state-of-the-art NLP models without jumping through extra hoops.

Computed Table Columns

Computed table columns are now live, allowing you to create dynamic, calculated data directly within your tables. Columns can be computed programmatically through code or by an LLM. It’s a great way to add more depth to the structured data you have stored in Tables.
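
As a hypothetical illustration of the code-based option, a computed column is essentially a function of the other values in the row, along the lines of the sketch below; the exact signature Botpress expects inside the column editor may differ.

```typescript
// Illustrative sketch only: a "totalPrice" column computed from other columns
// in the same row. The row shape and function signature are assumptions.
type OrderRow = { quantity: number; unitPrice: number }

const computeTotalPrice = (row: OrderRow): number => row.quantity * row.unitPrice

computeTotalPrice({ quantity: 3, unitPrice: 19.99 }) // => 59.97
```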

What else is new?

FEATURES & ENHANCEMENTS

πŸ› οΈ Autonomous Nodes now provide more descriptive error messages when encountering issues.
πŸ“ LLMz’s truncation logic now accounts for response length, ensuring more accurate outputs.
🧩 Improved the Autonomous Node’s handling of variable reading and writing.
⚑ Optimized Autonomous Nodes to reduce redundant model calls when previous results are available.

BUG FIXES

⌨️ Fixed an issue with premature truncation that caused messages to lose content unexpectedly.
πŸ—ΊοΈ Fixed an issue with sitemap discovery for Knowledge Bases (KBs).
πŸ“• Resolved a problem where LLMz would respond with an empty message.
❌ Corrected an error that prevented bot deletion under certain conditions.

UI Enhancements and Stability Improvements

πŸ”§ Tooltip visibility in execute code card: Resolved an issue where tooltips were being cut off in the execute code card, ensuring complete visibility of information.

πŸš€ Studio & bot conversation stability: Implemented extensive stability improvements to enhance the reliability of Studio and bot conversation states.

🚨 Timeout flow override notification: When setting the Inactivity Timeout to 0, a new Toast message now appears alerting you that the Timeout Workflow is disabled.

Integration and Postback Enhancements

πŸ“‚ Grouped integration actions in cards tray: Actions associated with an integration are now neatly grouped together within the cards tray, improving organization and usability.

πŸ”„ Postback functionality in carousel cards: Added the ability to include postback actions in carousel cards. When a user clicks on the button, the associated text can now be sent as a user message.
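
For reference, a carousel card with a postback action might be shaped roughly like the sketch below. Field names follow the common Botpress card payload but may vary by channel, so treat them as assumptions.

```typescript
// Sketch of a carousel card with a postback action. Field names are assumptions
// based on the common card payload and may vary by channel.
const saleCard = {
  title: 'Summer Sale',
  subtitle: 'Up to 40% off selected items',
  imageUrl: 'https://example.com/sale.png', // hypothetical image
  actions: [
    { action: 'postback', label: 'Show deals', value: 'SHOW_DEALS' }, // sent back as a user message
    { action: 'url', label: 'Open store', value: 'https://example.com/store' },
  ],
}
```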

Bug Fixes and Improvements

πŸ“š KB flexibility: Fixed an issue where responses without citations couldn’t be sent.

❗ Integration setup error messages: Error messages are now displayed directly within the integration setup modal, making it easier to diagnose and resolve integration issues.

βš™οΈ Webchat v2 improvements: Various enhancements have been made to webchat v2 to improve user experience and performance.

More robust LLM selection

When selecting which LLM to use, we now provide richer information, such as which models are best suited to which use cases or modalities. We also provide recommendations, like good general-purpose models or which ones to pick if you want to optimize for cost. The expanded menu lets you sort by maximum context window, as well as the price of input and output tokens.

Rate Autonomous Node responses

You can now rate and provide feedback on the answers generated by Autonomous Nodes. In the emulator, when an Autonomous Node generates a response, you'll be prompted to provide feedback like a rating and any qualitative commentary you have about that response.

What else is new?

FEATURES & ENHANCEMENTS

🧠 We now provide a more robust description of LLMs when selecting them for use in the studio, including tags, use cases, and the ability to sort by input/output token price.
⭐ Provide feedback on Autonomous Node responses directly in the emulator by rating generations.
πŸ› οΈ You can now execute integration actions through code, instead of solely through the cards provided by an integration.
πŸšͺ A helpful message is now displayed when trying to enter a Workspace you don’t have access to.
πŸ“‚ You can now fork Workflow Hub cards in a git-like manner.
🎯 We added a small animation to the Inspect button below Autonomous Node responses to make it clear that you can analyze its iteration process.
πŸŽ›οΈ You can now select which Workspace you want to publish Workflows from, for example if you’ve got one you want to keep for production and development.
βš™οΈ We optimized the Table management experience, so you should notice a smoother workflow when working with records.
πŸ“Š We made various improvements to the Logs and Issues tabs of the Dashboard, to make them easier to read and understand.
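
To illustrate the code-based option mentioned above, calling an integration action from an Execute Code card might look roughly like the sketch below. The client and workflow objects are the ones available inside the card, while the action type and input fields are hypothetical and depend on the integration you have installed.

```typescript
// Illustrative sketch only (Execute Code card): invoking an installed integration's
// action through code. The action type and input fields are assumptions; check the
// integration's documentation for the exact names.
const result = await client.callAction({
  type: 'slack:sendMessage', // hypothetical action
  input: { channel: '#support', text: 'A new ticket was created' },
})

workflow.lastActionOutput = result.output // store the result on a workflow variable
```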

BUG FIXES

πŸ•’ Fixed an error where the Timeout flow would ignore choices on capture cards.
πŸ€– Fixed a bug where the Autonomous Node would ignore instructions provided to an Agent.
🌐 Fixed an issue where browser translators would prevent people from entering the Studio under certain conditions.
πŸ—‚οΈ Fixed the behaviour of the File API when encountering files with invalid encoding.
πŸ”„ Fixed an issue where Tables would crash after failing to connect to a database under certain circumstances.
πŸ” Fixed a bug where special characters would cause a search query to fail.

Studio Integrations

The Studio is now the default location to install and manage integrations. You can configure, install, and uninstall them there, which reduces the need to navigate back to the Dashboard when you want to make changes to an integration.

Select LLMs

We've added the ability to select which LLM is used for which task across all actions and cards in the Studio. You can configure them in cards, agents, and other global settings such as Autonomous Nodes. You can also set up custom LLM configurations for different actions.

LLMs are enabled through the integrations menu: install the integrations for the LLM providers you want to use, such as OpenAI and Anthropic.

WhatsApp OAuth

The WhatsApp integration now supports OAuth, which offers a simplified configuration experience if you want to deploy a bot to a Meta-verified business quickly. Manual configuration is still available for complex use cases, but if you're looking for a faster setup, OAuth should work great.

Transcribe Audio

Enabling models that support audio transcription will give you access to the 'Transcribe Audio' card in the Studio. This enables your bot to receive and transcribe audio files, which is currently done through a File URL. We are working to simplify this experience soon.

Policy Agent

The Policy Agent is a global agent that allows you to set system prompts, like 'do not talk about competitor X or Y'. It's a great option to provide global commands to your bot that can be managed at the conversation or the node level.