# Amazon GenAI

{% hint style="info" %}
See the changelog of this Action type [here](https://app.gitbook.com/s/IXj83By4f20oCrZZ0gyP/).
{% endhint %}

{% hint style="warning" %}
Note that this Action is only available in Tenants with access to Amazon Bedrock. [Get in touch with us](https://app.gitbook.com/s/cSjT21I4EUhzghjc1rER/) if you don't see it and want to access it.
{% endhint %}

## Overview <a href="#overview" id="overview"></a>

The **Amazon GenAI** Action allows users to enrich events by generating structured outputs using models hosted on **Amazon Bedrock**, such as Claude, Titan, or Jurassic.
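As an illustrative sketch only (the field names here are hypothetical and depend on your Listener and configuration), an event enriched by this Action gains one new field, named after the configured **Output Field**:

```python
# Hypothetical event shape before the Action runs; "message" stands in for
# whatever field your Listener produces.
event_in = {"message": "GET /admin.php HTTP/1.1 404"}

# After processing, the model's response is appended under the Output Field
# (here assumed to be named "genai_result"); all original fields are kept.
event_out = dict(event_in, genai_result="suspicious")
```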

{% hint style="warning" %}
In order to configure this Action, you must first link it to a Listener. Go to [Building a Pipeline](https://docs.onum.com/the-workspace/pipelines/building-a-pipeline) to learn how this works.
{% endhint %}

<figure><picture><source srcset="https://965373739-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FkxZeV4nlXcIAjMGZxzLI%2Fuploads%2FOWslHWooegbEYWccbMrJ%2Fdark-large%20(23).png?alt=media&#x26;token=f53cc449-b021-4bdc-86a8-70edc55b990d" media="(prefers-color-scheme: dark)"><img src="https://965373739-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FkxZeV4nlXcIAjMGZxzLI%2Fuploads%2FVZlk3Y3nZ6FIYYGDRyvj%2Flight-large%20(25).png?alt=media&#x26;token=00ca4bcc-e724-4dea-9f17-ebba4ba3e5d9" alt=""></picture><figcaption></figcaption></figure>

## Ports

These are the input and output ports of this Action:

<details>

<summary>Input ports</summary>

* **Default port** - All the events to be processed by this Action enter through this port.

</details>

<details>

<summary>Output ports</summary>

* **Default port** - Events are sent through this port if no error occurs while processing them.
* **Error port** - Events are sent through this port if an error occurs while processing them.

</details>

## Configuration

{% stepper %}
{% step %}
Find **Amazon GenAI** in the **Actions** tab (under the **AI** group) and drag it onto the canvas.
{% endstep %}

{% step %}
To open the configuration, click the Action in the canvas and select **Configuration**.
{% endstep %}

{% step %}
Enter the required parameters:

<table><thead><tr><th width="231">Parameter</th><th>Description</th><th data-hidden></th></tr></thead><tbody><tr><td><strong>Region</strong><mark style="color:red;"><strong>*</strong></mark></td><td>Choose the AWS Region to connect to (e.g., <code>eu-central-1</code>). Your region is displayed in the top right-hand corner of your AWS console.</td><td></td></tr><tr><td><strong>Model</strong><mark style="color:red;"><strong>*</strong></mark></td><td><p>Enter your Model ID or Model Inference Profile (ARN), e.g., <code>arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-v2</code></p><ul><li>Go to the <a href="https://console.aws.amazon.com/bedrock/home"><strong>Amazon Bedrock console</strong></a>.</li><li>Open <strong>Model Access</strong> in the left sidebar.</li><li>You’ll see a list of available foundation models (FMs) such as <strong>Anthropic Claude</strong>, <strong>AI21</strong>, <strong>Amazon Titan</strong>, and <strong>Meta Llama</strong>.</li><li>Click a model to view its <strong>Model ID</strong> (e.g., <code>anthropic.claude-v2</code>) and <strong>ARN</strong> (e.g., <code>arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-v2</code>).</li></ul></td><td></td></tr><tr><td><strong>System Instructions</strong></td><td>Optional instructions to influence the behavior of the model (e.g., "You are a security analyst...").</td><td></td></tr><tr><td><strong>Prompt Field</strong><mark style="color:red;"><strong>*</strong></mark></td><td><p>Select the event field containing the prompt to send to the model. It must be a <code>string</code> and will be sent as-is to the model.</p><p></p><p>Amazon Bedrock models support both English and multilingual prompts, depending on the model selected.</p></td><td></td></tr><tr><td><strong>Temperature</strong></td><td>Adjusts the randomness of outputs: higher values are more random, <code>0</code> is deterministic, and <code>0.75</code> is a good starting value. The default value is <code>0.1</code>.</td><td></td></tr><tr><td><strong>Max Tokens</strong></td><td>Maximum number of tokens to generate. An English word is generally 1-2 tokens. The default value is <code>128</code> (min <code>1</code>, max <code>8892</code>).</td><td></td></tr><tr><td><strong>Top P</strong></td><td><strong>Top P</strong> sets a probability threshold that limits the pool of possible next tokens. Whereas <code>temperature</code> controls <strong>how random</strong> the selection is, <code>top_p</code> controls <strong>how many options</strong> are considered. Range: <code>0–1</code>. Default is <code>1.0</code>.</td><td></td></tr><tr><td><strong>JSON credentials</strong><mark style="color:red;"><strong>*</strong></mark></td><td>Provide the secret JSON credentials used to authenticate against Amazon Bedrock.</td><td></td></tr><tr><td><strong>Output Field</strong><mark style="color:red;"><strong>*</strong></mark></td><td>Give a name to the output field that will contain the model's response.</td><td></td></tr></tbody></table>
{% endstep %}

{% step %}
Click **Save** to complete.
{% endstep %}
{% endstepper %}
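Amazon Bedrock models are invoked with a model-specific JSON request body. As a sketch (not the Action's actual implementation), here is how the parameters above would map onto a request body for an Anthropic Claude model, which uses Anthropic's messages schema on Bedrock; other model families expect different body formats:

```python
import json

# Illustrative sketch: map the Action's configuration parameters onto an
# Amazon Bedrock InvokeModel request body for an Anthropic Claude model.
# Defaults mirror the configuration table above.
def build_bedrock_request(prompt, system_instructions=None,
                          temperature=0.1, max_tokens=128, top_p=1.0):
    body = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,    # "Max Tokens" parameter
        "temperature": temperature,  # "Temperature" parameter
        "top_p": top_p,              # "Top P" parameter
        # The value of the configured "Prompt Field" is sent as-is:
        "messages": [{"role": "user", "content": prompt}],
    }
    if system_instructions:          # "System Instructions" parameter
        body["system"] = system_instructions
    return json.dumps(body)

payload = build_bedrock_request(
    "Classify this HTTP log line as benign or suspicious: GET /admin.php",
    system_instructions="You are a security analyst.",
    temperature=0.75,
)
```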

{% hint style="info" %}
Use conditional logic upstream to prevent sending unstructured or non-informative prompts to the model, helping to optimize costs and relevance.
{% endhint %}
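For example, a pre-filter along these lines (the field name `message` and the length threshold are assumptions for illustration) would drop events whose prompt field is empty, non-string, or too short to be informative before they reach the model:

```python
# Hypothetical pre-filter sketching the hint above: only events with a
# non-empty string prompt of a minimum length are worth sending to the model.
def is_worth_prompting(event, prompt_field="message", min_length=20):
    value = event.get(prompt_field)
    return isinstance(value, str) and len(value.strip()) >= min_length

events = [
    {"message": ""},                                      # filtered out
    {"message": "GET /index.html 200 1043 Mozilla/5.0"},  # kept
]
to_model = [e for e in events if is_worth_prompting(e)]
```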

## Example

Read our use case to learn how to use this Action in a real cybersecurity scenario.

<table data-view="cards"><thead><tr><th></th><th data-hidden data-card-cover data-type="files"></th></tr></thead><tbody><tr><td><a href="https://app.gitbook.com/s/lMswUMhL1LeEvusY1XNC/using-amazon-genai-to-classify-http-logs">Using Amazon GenAI to classify HTTP logs</a></td><td><a href="https://965373739-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FkxZeV4nlXcIAjMGZxzLI%2Fuploads%2F0QReUGFYmY1QhYRznpAV%2F2025-05-06_12-21-31.png?alt=media&#x26;token=ab285af8-b546-49e4-8f5e-24f2b1c42955">2025-05-06_12-21-31.png</a></td></tr></tbody></table>
