Llama

Most recent version: v0.1.0

See the changelog of this Action type here.

Overview

This Action enriches events based on the evaluation of the Llama 2 Chat model. The model offers a flexible, advanced prompt system capable of understanding and generating responses across a broad spectrum of use cases for text logs.

By integrating Llama 2, Onum not only enhances its data processing and analysis capabilities but also becomes more adaptable, offering customized and advanced solutions to the specific challenges faced by users across different industries.

Ports

These are the input and output ports of this Action (a conceptual sketch of the routing follows the lists):

Input ports
  • Default port - All the events to be processed by this Action enter through this port.

Output ports
  • Default port - Events are sent through this port if no error occurs while processing them.

  • Error port - Events are sent through this port if an error occurs while processing them.
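Conceptually, the port routing behaves like a try/except around the model call: events that are enriched successfully continue through the default port, and events that raise an error during processing are diverted to the error port. The sketch below is illustrative only; the function names and the stub enrichment step are assumptions, not Onum internals.

```python
# Illustrative sketch of the Action's port routing (not Onum source code).

def enrich_with_llama(event: dict) -> dict:
    """Stand-in for the configured Llama 2 call; see the sketch further below."""
    event["llm_result"] = "...model response..."  # hypothetical Out field
    return event

def route_event(event: dict) -> tuple[str, dict]:
    """Send enriched events to the default port, failures to the error port."""
    try:
        return "default", enrich_with_llama(event)
    except Exception as err:
        return "error", {**event, "error": str(err)}
```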

Configuration

1. Find the Llama Action in the Actions tab (under the Enrichment group) and drag it onto the canvas. Link it to the required Listener and Data sink.

2. To open the configuration, click the Action in the canvas and select Configuration.

3. Enter the required parameters:

  • Token* - The API token of the model you wish to connect to.

  • Model* - The name of the model to connect to. Choose from the three available Llama 2 models: Llama2-7b-Chat, Llama2-13b-Chat, and Llama2-70b-Chat.

  • Prompt - The input field whose content is sent to the model.

  • Temperature - Controls the randomness of the responses. A low temperature produces more specific, condensed answers, whereas a high temperature produces more diverse but less precise ones.

  • System Prompt - Describe in detail the task you wish the AI assistant to carry out.

  • Max Length - The maximum number of characters for the result.

  • Out field* - The name of the output field where the model's response is stored.

Click Save to complete.
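To make the parameters concrete, here is a minimal sketch of the kind of chat-completion request these settings map to. It is not Onum source code: the endpoint URL, payload field names, and event field names are assumptions for illustration only.

```python
import requests

# Hypothetical endpoint; the real Llama 2 host depends on your deployment.
API_URL = "https://example-llama-host.invalid/v1/chat"

def call_llama(event: dict) -> dict:
    """Enrich one event using the Action's configured parameters (illustrative)."""
    payload = {
        "model": "Llama2-13b-Chat",                   # Model*
        "prompt": event["message"],                   # Prompt: the input field
        "system_prompt": "Classify this log line.",   # System Prompt
        "temperature": 0.2,                           # low = focused, high = diverse
        "max_length": 256,                            # Max Length (characters)
    }
    headers = {"Authorization": "Bearer <API token>"}  # Token*
    response = requests.post(API_URL, json=payload, headers=headers, timeout=30)
    response.raise_for_status()
    # Out field*: store the model's answer in a new field on the event.
    event["llm_result"] = response.json()["output"]
    return event
```

A low temperature such as 0.2 suits enrichment tasks like log classification, where consistent output matters more than variety.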
