Amazon GenAI

See the changelog of this Action type here.

Overview

The Amazon GenAI Action allows users to enrich events by generating structured outputs using models hosted on Amazon Bedrock, such as Claude, Titan, or Jurassic.

Ports

These are the input and output ports of this Action:

Input ports
  • Default port - All the events to be processed by this Action enter through this port.

Output ports
  • Default port - Events are sent through this port if no error occurs while processing them.

  • Error port - Events are sent through this port if an error occurs while processing them.
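The two output ports implement a simple success/failure split. A minimal sketch of that routing contract (the function and port names here are illustrative, not part of the product API):

```python
def route(event: dict, process) -> tuple:
    """Apply `process` to an event; send the result to the 'default'
    port on success, or the original event plus error details to the
    'error' port on failure."""
    try:
        return ("default", process(event))
    except Exception as exc:
        return ("error", {**event, "error": str(exc)})
```

Events that fail processing keep their original fields, so a downstream branch attached to the error port can inspect or retry them.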

Configuration

1

Find Amazon GenAI in the Actions tab (under the AI group) and drag it onto the canvas.

2

To open the configuration, click the Action in the canvas and select Configuration.

3

Enter the required parameters:

Parameter
Description

Region*

Choose the AWS region (e.g., eu-central-1). Your region is displayed in the top right-hand corner of your AWS console.

Model*

Enter your Model ID or Model Inference Profile (ARN), e.g., arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-v2. To find these values in the Amazon Bedrock console:

  • Go to Model Access in the left sidebar.

  • You’ll see a list of available foundation models (FMs) like Anthropic Claude, AI21, Amazon Titan, Meta Llama, etc.

  • Click on a model to view its Model ID (e.g., anthropic.claude-v2) and ARN (e.g., arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-v2).

System Instructions

Optional instructions to influence the behavior of the model (e.g., "You are a security analyst...").

Prompt Field*

Select the field in the event containing the prompt to send to the model. The field must be a string and is sent to the model as-is.

Amazon Bedrock models support both English and multilingual prompts, depending on the model selected.

Temperature

Adjusts the randomness of outputs: values greater than 1 are increasingly random, 0 is deterministic, and 0.75 is a good starting value. The default value is 0.1.

Max Tokens

Maximum number of tokens to generate. A word is generally 2-3 tokens. The default value is 128 (min 1, max 8892).

Top P

Top P sets a probability threshold to limit the pool of possible next tokens. Whereas temperature controls how random the selection is, top_p controls how many options are considered. Range: 0–1. Default is 1.0.

JSON credentials*

Provide the secret JSON credentials used to authenticate against Amazon Bedrock.

Output Field*

Give a name to the output field that will return the evaluation.
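Taken together, these parameters map onto a model invocation payload plus an enrichment step. A minimal sketch, assuming the Anthropic Claude Messages request shape on Bedrock (function names and the enrichment helper are illustrative; check the Bedrock model reference for the exact body your chosen model expects):

```python
import json

def build_request_body(prompt: str, system: str = "",
                       max_tokens: int = 128, temperature: float = 0.1,
                       top_p: float = 1.0) -> str:
    """Illustrative request body for an Anthropic Claude model on Bedrock."""
    body = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "temperature": temperature,
        "top_p": top_p,
        "messages": [{"role": "user", "content": prompt}],
    }
    if system:  # System Instructions are optional
        body["system"] = system
    return json.dumps(body)

def enrich_event(event: dict, output_field: str, model_response: str) -> dict:
    """Write the model's answer into the configured Output Field,
    leaving the original event fields untouched."""
    enriched = dict(event)
    enriched[output_field] = model_response
    return enriched
```

For example, `enrich_event({"prompt": "Summarise this alert"}, "genai_summary", answer)` returns the original event with one extra field holding the model's response.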
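The Temperature and Top P parameters described above interact during token selection. A toy sampler (not Bedrock's actual implementation) makes the distinction concrete: temperature reshapes the probability distribution, while top_p trims it to the smallest set of tokens covering that much cumulative probability before sampling.

```python
import math
import random

def sample_next_token(probs, temperature=0.1, top_p=1.0, rng=None):
    """Toy sampler over a token->probability dict, illustrating how
    temperature and top_p (nucleus sampling) differ."""
    if temperature <= 0:
        # Temperature 0: deterministic, always pick the most likely token
        return max(probs, key=probs.get)
    # Apply temperature in log space, then renormalise
    scaled = {t: math.log(p) / temperature for t, p in probs.items()}
    m = max(scaled.values())
    weights = {t: math.exp(s - m) for t, s in scaled.items()}
    total = sum(weights.values())
    dist = {t: w / total for t, w in weights.items()}
    # Nucleus cut: keep the most likely tokens up to cumulative top_p
    kept, cum = {}, 0.0
    for t, p in sorted(dist.items(), key=lambda kv: -kv[1]):
        kept[t] = p
        cum += p
        if cum >= top_p:
            break
    rng = rng or random.Random()
    tokens, ps = zip(*kept.items())
    return rng.choices(tokens, weights=ps, k=1)[0]
```

With `top_p=0.5` and the distribution `{"a": 0.7, "b": 0.2, "c": 0.1}`, only `"a"` survives the nucleus cut, so it is chosen regardless of temperature.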

4

Click Save to complete.

circle-info

Use conditional logic upstream to prevent sending unstructured or non-informative prompts to the model, helping to optimize costs and relevance.
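One way to implement the upstream check suggested above is a small predicate that drops empty or near-empty prompts before they reach the model. This sketch assumes a simple minimum-word-count heuristic; tune the threshold to your data:

```python
def worth_sending(prompt, min_words: int = 5) -> bool:
    """Return True only for prompts likely to produce a useful answer."""
    if not isinstance(prompt, str):
        return False  # Prompt Field must be a string
    # Reject whitespace-only or very short prompts
    return len(prompt.split()) >= min_words
```

Events failing the predicate can be routed around the Action, saving tokens without losing the original event.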

Example

Read our use case to learn how to use this Action in a real cybersecurity scenario.
