Llama
Most recent version: v0.1.0
See the changelog of this Action type.
Note that this Action is only available in certain Tenants. Contact us if you need to use it and don't see it in your Tenant.
This Action enriches events based on the evaluation of the Llama 2 Chat model. The model offers a flexible, advanced prompt system capable of understanding and generating responses across a broad spectrum of use cases for text logs.
By integrating LLaMA 2, Onum not only enhances its data processing and analysis capabilities but also becomes more adaptable and capable of offering customized and advanced solutions for the specific challenges faced by users across different industries.
In order to configure this action, you must first link it to a Listener. Go to Building a Pipeline to learn how to link.
These are the input and output ports of this Action:
To open the configuration, click the Action in the canvas and select Configuration.
Enter the required parameters:
Token*
The API token of the model you wish to connect to.
Model*
The name of the model to connect to. Choose between the three available Llama 2 models: Llama2-7b-Chat, Llama2-13b-Chat, and Llama2-70b-Chat.
Prompt
The input field containing the text sent to the model.
Temperature
Controls the randomness of the responses. A low temperature produces more focused and deterministic output, whereas a high temperature produces more diverse but less precise answers.
System Prompt
Describe in detail the task you wish the AI assistant to carry out.
Max Length
The maximum number of characters for the result.
Output
Specify a name for the output event.
Click Save to complete.
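To illustrate how these configuration fields relate to a Llama 2 chat call, the sketch below assembles them into a request payload. This is a minimal, hypothetical sketch: the function name, field names, and defaults are assumptions for illustration, not Onum's actual API or wire format.

```python
# Hypothetical sketch of how the Action's configuration fields might map
# onto a Llama 2 chat request payload. Field names are illustrative only.

def build_llama2_payload(prompt: str,
                         model: str = "Llama2-7b-Chat",
                         temperature: float = 0.7,
                         system_prompt: str = "You are a helpful assistant.",
                         max_length: int = 512) -> dict:
    """Assemble a request body from the Action's configuration fields."""
    if model not in ("Llama2-7b-Chat", "Llama2-13b-Chat", "Llama2-70b-Chat"):
        raise ValueError(f"Unknown Llama 2 model: {model}")
    return {
        "model": model,
        "prompt": prompt,                # input field used to call the model
        "temperature": temperature,      # low = focused, high = more diverse
        "system_prompt": system_prompt,  # task description for the assistant
        "max_length": max_length,        # cap on the length of the result
    }

payload = build_llama2_payload("Summarize this log line in one sentence.")
```

In this sketch, the API token would be sent separately (typically as an authorization header) rather than inside the payload, which is why it does not appear above.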