# Google GenAI

{% hint style="info" %}
See the changelog of this Action type [here](https://app.gitbook.com/s/IXj83By4f20oCrZZ0gyP/google-genai).
{% endhint %}

{% hint style="warning" %}
Note that this Action is only available in certain Tenants. [Get in touch with us](https://app.gitbook.com/s/cSjT21I4EUhzghjc1rER/) if you don't see it and want to access it.
{% endhint %}

## Overview <a href="#overview" id="overview"></a>

The **Google GenAI** Action allows users to enrich their data using Google Gemini AI models.

{% hint style="warning" %}
In order to configure this action, you must first link it to a Listener. Go to [Building a Pipeline](https://docs.onum.com/the-workspace/pipelines/building-a-pipeline) to learn how this works.
{% endhint %}

## Ports

These are the input and output ports of this Action:

<details>

<summary>Input ports</summary>

* **Default port** - All the events to be processed by this Action enter through this port.

</details>

<details>

<summary>Output ports</summary>

* **Default port** - Events are sent through this port if no error occurs while processing them.
* **Error port** - Events are sent through this port if an error occurs while processing them.

</details>

## Configuration

{% stepper %}
{% step %}
Find **Google GenAI** in the **Actions** tab (under the **AI** group) and drag it onto the canvas.
{% endstep %}

{% step %}
To open the configuration, click the Action in the canvas and select **Configuration**.
{% endstep %}

{% step %}
Enter the required parameters:

<table><thead><tr><th width="231">Parameter</th><th>Description</th><th data-hidden></th></tr></thead><tbody><tr><td><strong>Location</strong><mark style="color:red;"><strong>*</strong></mark></td><td>Enter the Google Cloud location for Vertex AI (e.g., <code>us-central1</code>).</td><td></td></tr><tr><td><strong>Model</strong><mark style="color:red;"><strong>*</strong></mark></td><td>Choose the Vertex AI model version to use from the menu.</td><td></td></tr><tr><td><strong>System Instructions</strong><mark style="color:red;"><strong>*</strong></mark></td><td>Enter the system instructions that guide the model's behavior.</td><td></td></tr><tr><td><strong>Prompt Field</strong><mark style="color:red;"><strong>*</strong></mark></td><td>Enter the prompt you want to send to the model.</td><td></td></tr><tr><td><strong>Temperature</strong></td><td>Adjusts the randomness of outputs: values greater than <code>1</code> are more random, <code>0</code> is deterministic, and <code>0.75</code> is a good starting value. The default value is <code>0.7</code>.</td><td></td></tr><tr><td><strong>MaxLength</strong></td><td>Maximum number of tokens to generate. A word is generally 2-3 tokens. The default value is 128 (min 1, max 8892).</td><td></td></tr><tr><td><strong>Output Format</strong><mark style="color:red;"><strong>*</strong></mark></td><td>Choose the required output format.</td><td></td></tr><tr><td><strong>JSON credentials</strong><mark style="color:red;"><strong>*</strong></mark></td><td>Choose the required JSON credentials.</td><td></td></tr><tr><td><strong>Output Field</strong><mark style="color:red;"><strong>*</strong></mark></td><td>Give a name to the output field that will return the evaluation.</td><td></td></tr></tbody></table>
{% endstep %}

{% step %}
Click **Save** to complete.
{% endstep %}
{% endstepper %}
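To make the parameters above more concrete, the sketch below shows how they could map onto a Gemini `generateContent` request payload. This is a hypothetical illustration, not the Action's actual implementation: the function name and the exact mapping are assumptions, while the field names follow the public Gemini REST API and the defaults and limits come from the table above.

```python
def build_genai_request(prompt: str,
                        system_instructions: str,
                        temperature: float = 0.7,
                        max_length: int = 128) -> dict:
    """Assemble a hypothetical generateContent payload mirroring the
    Action's parameters (assumed mapping, for illustration only)."""
    # MaxLength limits from the parameter table (min 1, max 8892).
    if not 1 <= max_length <= 8892:
        raise ValueError("MaxLength must be between 1 and 8892")
    return {
        # System Instructions guide the model's overall behavior.
        "systemInstruction": {"parts": [{"text": system_instructions}]},
        # Prompt Field supplies the user-facing prompt text.
        "contents": [{"role": "user", "parts": [{"text": prompt}]}],
        "generationConfig": {
            "temperature": temperature,      # 0 = deterministic output
            "maxOutputTokens": max_length,   # cap on generated tokens
        },
    }

request = build_genai_request(
    prompt="Summarize this log line.",
    system_instructions="You are a log-enrichment assistant.",
)
```

The Location, Model, JSON credentials, and Output Field parameters would then determine which Vertex AI endpoint receives this payload and where the model's response is written on each event.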
