BLIP-2
Most recent version: v0.1.0
See the changelog of this Action type.
Note that this Action is only available in certain Tenants. Contact us if you need to use it and don't see it in your Tenant.
This Action integrates with the advanced AI model BLIP-2 (Bootstrapped Language-Image Pre-training). This multimodal model offers improved performance and versatility for tasks that require simultaneous understanding of images and text.
Integrating BLIP-2 into Onum can transform how you interact with and derive value from your data, particularly by leveraging the analysis of visual content.
In order to configure this action, you must first link it to a Listener. Go to Building a Pipeline to learn how to link.
These are the input and output ports of this Action:
To open the configuration, click the Action in the canvas and select Configuration.
Enter the required parameters:
Token*
The API token of the model you wish to use.
URL*
Specify the incoming field that contains the URL value.
Context
Add an optional description for your event.
Question
The question you wish to ask the AI model about the image.
Temperature
Controls the randomness of the responses. A low temperature produces more focused and deterministic answers, whereas a high temperature yields more diverse but less precise ones.
Output
Specify a name for the output event.
Click Save to complete.
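To make the parameters above concrete, here is a minimal sketch of how they might map onto a BLIP-2 inference request. This is an illustration only: the function name, payload field names, and endpoint schema are assumptions, not the actual Onum or BLIP-2 API.

```python
# Hypothetical sketch of assembling a BLIP-2 request from the Action's
# configured parameters. Field names are illustrative; the real API
# schema used by this Action may differ.

def build_blip2_request(token, image_url, question,
                        context=None, temperature=0.7):
    """Assemble headers and payload for a BLIP-2 visual question."""
    if not 0.0 <= temperature <= 1.0:
        raise ValueError("temperature should be between 0.0 and 1.0")

    payload = {
        "image": image_url,          # URL — the incoming field holding the image URL
        "question": question,        # Question — what to ask the model
        "temperature": temperature,  # low = focused answers, high = more diverse
    }
    if context:
        payload["context"] = context  # Context — optional event description

    headers = {"Authorization": f"Bearer {token}"}  # Token — the API token
    return headers, payload


headers, payload = build_blip2_request(
    token="YOUR_API_TOKEN",
    image_url="https://example.com/photo.jpg",
    question="What objects are in this image?",
    temperature=0.2,
)
print(payload["question"])
```

A low temperature such as `0.2` is a reasonable starting point when you want condensed, specific answers; raise it if you need more varied responses.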