BLIP-2

Most recent version: v0.1.0

See the changelog of this Action type here.

Overview

This action integrates with the advanced AI model BLIP-2 (Bootstrapping Language-Image Pre-training). This multi-modal model offers improved performance and versatility for tasks that require simultaneous understanding of images and text.

Integrating BLIP-2 into Onum can transform how you interact with and derive value from your data, particularly by leveraging visual content analysis.

Ports

These are the input and output ports of this Action:

Input ports
  • Default port - All the events to be processed by this Action enter through this port.

Output ports
  • Default port - Events are sent through this port if no error occurs while processing them.

  • Error port - Events are sent through this port if an error occurs while processing them.

Configuration

1

Find MLBLip2 in the Actions tab (under the Enrichment group) and drag it onto the canvas. Link it to the required Listener and Data sink.

2

To open the configuration, click the Action in the canvas and select Configuration.

3

Enter the parameters below (those marked with an asterisk * are mandatory):

Parameter - Description
  • Token* - The API token of the model you wish to use.

  • URL* - Specify the incoming field that contains the URL value.

  • Context - Add an optional description for your event.

  • Question - The question you wish to ask the AI model.

  • Temperature - Controls the randomness of the responses. A low temperature produces more specific and condensed answers, whereas a high temperature yields more diverse but less precise ones.

  • Output - Specify a name for the output event.
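Conceptually, these parameters map to a visual question answering request against a BLIP-2 endpoint. The sketch below is illustrative only: the function name, payload field names, and authorization scheme are assumptions for the example, not Onum's or BLIP-2's actual API.

```python
# Illustrative sketch only: payload field names and the Bearer-token
# header are assumptions, not a documented Onum or BLIP-2 API.

def build_blip2_payload(token, image_url, question,
                        context=None, temperature=0.7):
    """Assemble a hypothetical BLIP-2 visual-question-answering request."""
    headers = {"Authorization": f"Bearer {token}"}  # Token* parameter
    payload = {
        "image_url": image_url,      # value taken from the URL* field
        "question": question,        # Question parameter
        "temperature": temperature,  # Temperature parameter
    }
    if context:                      # optional Context parameter
        payload["context"] = context
    return headers, payload

headers, payload = build_blip2_payload(
    token="YOUR_API_TOKEN",
    image_url="https://example.com/photo.jpg",
    question="What objects are in this image?",
    temperature=0.2,  # low temperature -> more focused answers
)
```

The Action would then attach the model's answer to the event under the name given in the Output parameter.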

4

Click Save to complete.
