
Deployment

Onum installation process

Overview

Once you’ve obtained an Onum account, just a few steps are needed to complete the installation, depending on the type of deployment you require. Onum supports flexible deployment options, including both on-premises and cloud environments.

If you have any questions regarding the deployment and installation process, please contact us.

Supported browsers

Onum supports the following browsers:

  • Google Chrome

Cloud Deployment

For cloud-based installations, either our Customer Success team or a partner will access Onum's internal tenant manager and create the new account. All the necessary infrastructure will be set up based on estimated usage metrics.

The deployment process is fully automated, ensuring quick and streamlined provisioning and configuration.

Cloud Listeners

Note that the Listener configuration process is slightly different if you are using a Cloud deployment. Learn more about Cloud Listeners in this article.

On-Premises Deployment

In on-premises deployments, either our Customer Success team or a partner will set up the new account. Appropriate access permissions are granted to allow Onum to perform the installation.

A validation script is run to confirm all prerequisites are met and connectivity is established, ensuring a smooth installation process. Once installed, you can access your tenant, start ingesting data, invite users, and take full advantage of Onum’s capabilities.

Dependencies:

  • Docker

  • Packages:

    • gpg

    • curl

    • ipvsadm

    • ca-certificates

  • SIEM access

  • Access to sources

Hardware requirements

Hardware (per Virtual Machine):

  • Distribution: Linux (Debian or Red Hat)

  • Server Hardware: 16 GB RAM and 8 CPUs

  • Disk Storage: 500 GB

Access

For upcoming system maintenance, we request permission to access the customer infrastructure so that we can ensure seamless operations and address any potential issues promptly.


Key Terminology

Get to grips with these key concepts to better understand how Onum works and use it to its full potential.

Action

A unit of work performing a given operation on an event.


API

Application Programming Interface. A set of defined methods of communication among various components.


Cluster

Various distributors and workers can be grouped and contained within a cluster. You can have as many clusters as required per Tenant.


Data sink

Where the data is routed after being processed by Onum.


Data source

Where the data is generated before ingesting it into Onum, e.g. application server logs, firewall logs, S3 bucket, Kafka Topic, etc.


Distributor

This service receives and processes the Listener data before sending it on to workers within a cluster.


Event

An event represents semi-structured data such as a log entry. Events can be parsed so that structured data can be generated and eventually processed by the engine. Events are composed of fields, referred to as Field. A new field produced by an Action is referred to as an outputField.


Label

Used to sort events coming from Listeners into categories or sets that meet given filters to be used in a Pipeline.


Listener

A Listener receives events on a given IP address and port, routing the data to the Pipelines so that it can be processed.


Lookup

A lookup refers to searching for and retrieving information from a specific source or dataset, typically based on a key or reference.


Multitenancy

Multitenancy is an architecture in which tenants share the same underlying infrastructure, including databases and application code, but their data and configurations are kept separate to ensure privacy and security.


Pipeline

A sequence of Actions connected through inputs/outputs to process a stream of data. Data comes from the Listener and eventually is routed to a Datasink.


Role

A role is assigned to a user in order to control the access they have to certain or all Onum features. This way, we can personalise the experience for each user.


Tag

Tags can be assigned to Listeners, Pipelines, or Data sinks in order to classify them or make them easier to find. This is particularly useful if you have many resources and want to avoid lengthy searches for the ones you wish to use.


Tenant

A Tenant is a domain that contains a set of data in your organization. You can use one or various tenants and grant access to as many as required.


Worker

This service runs the Pipelines, receiving data from its Distributor. Workers are contained within a Cluster.


The Time Range Selector

Overview

Throughout the entire Onum platform, you can set a period to either narrow down or extend the data shown. You can either select a predefined period or apply a custom time range.

The related graph and resources will be automatically updated to display data from the chosen period. To remove a selected period, simply click the bin icon that appears next to the period to go back to the default time range (1 hour ago).

The intervals will be calculated according to the Timezone of your browser. Keep an eye out for future implementations, where you can manually select a timezone.

Predefined and Custom time ranges

As well as predefined time intervals, you can also define a custom time range. To do it, simply select the required starting and ending dates in the calendar.

Comparisons

The interesting thing about Onum is that you can directly see how much volume you have saved compared to past ingestions, telling you what is going well and what requires further streamlining.

The comparison is direct/equivalent, meaning all data shown is analyzed compared to the previously selected equivalent time range.

For example, if the time range is 1 hour, the differences are calculated against the hour immediately before the current selection:

  • Range selected: 10:00-11:00

  • Comparison: 09:00-10:00

Similarly, if you now wish to view data over the last 7 days, the percentages will be calculated by comparing the selected week against the 7 days before it.
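
As a quick illustration of the comparison logic, here is a minimal Python sketch that derives the previous equivalent window from a selected range. The function name is illustrative only and is not part of the Onum platform.

from datetime import datetime

def previous_equivalent_range(start, end):
    # Return the window of the same length immediately before [start, end).
    length = end - start
    return start - length, start

# Range selected: 10:00-11:00 -> comparison window: 09:00-10:00
start = datetime(2025, 6, 2, 10, 0)
end = datetime(2025, 6, 2, 11, 0)
print(previous_equivalent_range(start, end))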

Welcome

Onum helps security and IT leaders focus on the most important data. Gain control of your data by cutting through the noise for deep insights in real-time.

Quick Links

Most popular

Advanced

About Onum

Observability & Orchestration in real time. Any format. Any source.

Overview

The exponential growth of data ingestion volumes can lead to reduced performance, slow response times, and increased costs. With this comes the need to implement optimization strategies & volume reduction control. We help you cut the noise of large data streams and reduce infrastructure by up to 80%.

Gain deep insights from any type of data, using any format, from any source.

All of this...

@ the Edge

By collecting and observing that data at the edge, as close as possible to where it's being generated, you gain real-time observations and can take decisive action to prevent network downtime, payment system failures, malware infections, and more.

Unlike most tools that provide data observation and orchestration, Onum is not a data analytics space, which is already served well by security information and event management (SIEM) vendors and other analytics tools. Instead, Onum sits as close as possible to where the data is generated, and well in front of your analytics platforms, to collect and observe data across every aspect of your hybrid network.

Start with the basics

Understanding The Essentials

Get to grips with the important concepts & best practices of the Onum application.

These articles contain information on functionalities across the entire platform.

Getting Started with Onum

Welcome to Onum! This guide will help you start working with Onum, a powerful tool designed to enhance your data analysis and processing capabilities.

Accessing Onum

Once you get your Onum credentials, you only have to go to console.onum.com and enter them to access your Tenant.

A Tenant is a domain that contains a set of data in your organization. You can use one or various Tenants and grant access to as many as required. Learn more about working with Tenants.

Logging in

Once in console.onum.com, there are two ways to log in.

Log in with email address and password

Your password must be a minimum of 10 characters and include a combination of uppercase letters, lowercase letters, numbers, and symbols.

An inactive session will be automatically logged out after one hour.

Navigating the Interface

When you access the Onum app, you'll see the Home page, where you can see an overview of the activity in your Tenant.

You can access the rest of the areas in Onum using the left panel.

Create Your First Listener

Onum receives any data through Listeners.

These are logical entities created within a Distributor, acting as the gateway to the Onum system. Configuring a Listener involves defining an IP address, a listening port, and a transport layer protocol, along with additional settings depending on the type of Listener specialized in the data it will receive.

Access the Listeners area to start working with them. Learn how to create your first Listener.

Create Your First Data Sink

Onum outputs data via Data sinks. Use them to define where and how to forward the results of your streamlined data.

Access the Data sinks area to start working with them. Learn how to create your first Data sink.

Build Your First Pipeline

Use Pipelines to start transforming your data and build a data flow. Pipelines are made of the following components: Listeners, Actions, and Data sinks.

Learn more about Pipelines.

Use cases

Do you want to check the essential steps in Onum through specific Pipelines? Explore the most common use cases in this section.

Cloud Listeners

Are you interested in deploying your Onum installation in our Cloud? Contact us, and we will configure a dedicated Cloud Tenant for you and your organization.

Overview

If your Onum installation is deployed in our Cloud, the configuration settings of a Listener are slightly different from Listeners defined in an On-Premise deployment:

  • Cloud Listeners do not have the TLS configuration settings in their creation form, as the connection is already secured.

  • Cloud Listeners have an additional step in their creation process: Network configuration. Use these details to configure your data source to communicate with Onum. Click Download certificate to get the required certificate for the connection. You can also download it from the Listener details once it is created.

Learn more about the configuration steps of each Listener type.

Important Considerations

You must consider the following indications before using Cloud Listeners:

  • Cloud Listener endpoints are created in Onum's DNS. This process is usually fast, and Listeners are normally available immediately. However, note that this may take up to 24-48 hours, depending on your organization's DNS configuration.

  • Cloud Listener endpoints require Mutual TLS (mTLS) authentication, which means that your data input must be able to process a TLS connection and be authorized with a certificate.

  • Your data input must use the Server Name Indication (SNI) method, which means it must send its hostname in the TLS authentication process. If SNI is not used, the certificate routing will fail, and data will not be received, even if the certificate is valid.

If your organization's software cannot meet points 2 and 3, you can use an intermediate piece of software, such as Stunnel, to secure the client-Onum connection.
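
For reference, here is a minimal Python sketch of a client that satisfies points 2 and 3: it presents a client certificate (mTLS) and sends its hostname via SNI. The endpoint, port, and file names are placeholders; replace them with the values and certificate obtained in your Cloud Listener's Network configuration step, and adjust the CA settings if your certificates require it.

import socket
import ssl

# Placeholder values: use the endpoint and certificate files downloaded
# from the Cloud Listener's Network configuration step.
ENDPOINT = "your-listener.example.onum.com"
PORT = 443

context = ssl.create_default_context()
# Mutual TLS: present the client certificate and key to Onum.
context.load_cert_chain(certfile="client.pem", keyfile="client.key")

with socket.create_connection((ENDPOINT, PORT)) as sock:
    # server_hostname enables SNI, which is required for certificate routing.
    with context.wrap_socket(sock, server_hostname=ENDPOINT) as tls:
        tls.sendall(b"<14>Jun  2 08:00:00 host app: test event\n")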

Architecture

Designed for the Edge, created in the Cloud

Deploying Onum components easily and flexibly in any environment, while keeping them as close as possible to where the data is produced, delivers unparalleled speed and efficiency, enabling you to cut the infrastructure you have dedicated to orchestration by up to 80%.

The Onum infrastructure consists of:

  • Distributor: this is the service that hosts the Listeners and forwards the data they receive to Workers.

  • Worker: this is the service that runs the Pipelines, receiving data from its Distributor and contained within a Cluster.

  • Cluster: a container grouping Distributors and Workers. You can have as many clusters as required per Tenant.

Listeners are hosted within Distributors and are placed as close as possible to where data is generated. The Distributor pulls tasks from the data queue passing through the Pipeline and distributes them to the next available Worker in the Cluster. As soon as a Worker completes a task, it becomes available again, and the Distributor assigns it the next task from the queue.

The installation process creates the Distributor and all Workers for each data source in the cluster.
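
The queue-based distribution described above can be illustrated with a minimal Python sketch. This is a conceptual model only, not Onum's actual implementation: a distributor places events on a queue, and each Worker takes the next task as soon as it becomes available again.

import queue
import threading

tasks = queue.Queue()

def worker(worker_id):
    while True:
        event = tasks.get()
        if event is None:          # sentinel: no more work
            tasks.task_done()
            break
        print(f"worker {worker_id} processing {event}")
        tasks.task_done()          # worker becomes available again

workers = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for w in workers:
    w.start()

for event in ["event-1", "event-2", "event-3", "event-4"]:
    tasks.put(event)               # distributor assigns the next task

for _ in workers:
    tasks.put(None)
tasks.join()
for w in workers:
    w.join()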

How it works

Deployment types

The Onum Platform supports any deployment type ― including on-premises, the Onum public cloud, or your own private cloud.

In a typical SaaS-based deployment, most processing activities are conducted in the Cloud.

Client-side components can be deployed on a Linux machine or on a Kubernetes cluster for easy, flexible deployment in any environment. Onum supports all major cloud environments, including Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure.

Learn more about Deployment requirements.

Delivery methods

Onum supports all major standards such as Netflow, Syslog, and Kafka to orchestrate data streams to any desired destination, including popular data analytics tools such as Splunk and Devo, as well as storage environments such as S3.

String to List

Description

This operation converts a string composed of values separated by a specific separator into a list of comma-separated values (data type listString).


Data types

These are the input/output expected data types for this operation:

Input data

- String of separated values to be transformed. You must enter the separator in the parameter below.

Output data

- Resulting values after transforming them into a list of comma-separated values.


Parameters

These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):

Separator*

Enter the character(s) that separate the values in the input strings.


Example

Suppose you want to convert a series of strings representing values separated by / into a list of comma-separated values:

  1. In your Pipeline, open the required configuration and select the input Field.

  2. In the Operation field, choose String to list.

  3. Set Separator to /.

  4. Give your Output field a name and click Save. The values in your input field will be transformed into a comma-separated list. For example:

hello/my/world -> hello,my,world

You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
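
For reference, this is a minimal Python sketch of the equivalent transformation outside Onum, assuming the separator / used in the example above.

# Input string and the separator configured in the operation.
value = "hello/my/world"
separator = "/"
as_list = value.split(separator)      # listString result
print(",".join(as_list))              # hello,my,world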


Any format. Any source.

Collect data from anywhere it’s generated, across every aspect of the network.

All data is aggregated, observed, and seamlessly routed to any destination.


Edge observability

Listeners are placed right on the edge to collect all data as close as possible to where it’s generated.


Centralized management

Onum receives data from Listeners and observes and optimizes the data from all nodes. All data is then sent to the proper data sink.


Listeners

Learn about how to set up and use Listeners

Pipelines

Discover Pipelines to manage and customize your data

Data sinks

Add the final piece of the puzzle for simpler data


Listeners

The default tab that opens when in the Pipeline area is the Listeners tab, which shows all Listeners in your Tenant, as well as their labels.

Use the search bar to find a specific Listener or Label.

Edit a Listener

You can edit a Listener from the list by clicking the ellipsis next to its name and selecting Edit.

This will open the Listener Configuration and Labels for you to modify.

Create Listener

If the Listener you wish to use in the Pipeline does not already exist, you can create it directly from this view using the Create Listener button in the bottom right of the tab. This will open the Listener Configuration window.

Add a Listener to your Pipeline

Go to Building a Pipeline to learn step by step.


Listener Integrations

Onum is compatible with any data source, regardless of technology and architecture. A Listener type is not necessarily limited to one integration and can be used to connect to various integrations.

Although there are only a limited number of types available for use, the integration possibilities are endless. Alternatively, you can contact us to request a Listener type.

Click a Listener to see how to configure it.

Home

A summary of your Tenant activity

Overview

When opening Onum, the Home area is the default view. Here you can see an overview of all the activity in your Tenant.

Use this view to analyze the flow of data and the change from stage to stage of the process. Here you can locate the most important contributions to your workflow at a glance.

All data shown is analyzed compared to the previous equivalent time range. Use the time range selector at the top of this area to specify the periods to examine.

For example, if the time range is 1 hour ago (the default period), the differences are calculated against the hour immediately before the current selection:

  • Range selected: 10:00-11:00

  • Comparison: 09:00-10:00

To learn more about time ranges, go to Selecting a Time Range.

Metrics

The Home view shows various infographics that provide insights into your data flow. Some Listeners or Data Sinks may be excluded from these metrics if they are duplicates or reused.

The Net Saved/Increased and Estimation graphs will show an info tooltip if some Data sinks are excluded from these metrics. You may decide this during the Data sink creation.

In those cases, you can hover over the icon to check the total metrics including all the Data sinks.

Sankey Diagram

Each column of the Sankey diagram provides information and metrics on the key steps of your flow.

You can see how the data flows between:

  1. Listeners: each Listener in your Tenant.

  2. Clusters: the Distributor/Worker group that receives the Listener data and forwards it to the Pipelines.

  3. Labels: the operations and criteria used to filter out the data to be sent on to Pipelines.

  4. Pipelines: the Pipelines used to obtain desired data and results.

  5. Data sinks: the end destination for data having passed through Listener › Cluster › Label › Pipeline.

Hover over a part of the diagram to see specific savings.

Show Metrics

You can narrow down your analysis even further by selecting a specific node and selecting Show metrics.

This option is not available for all columns.

View Details

Click a node and select View details to open a panel with in-depth details of the selected piece.

From here, you can go on to edit the selected element.

This option is not available for all columns.

Hide/Show Columns

You can choose which columns to view or hide using the eye icon next to its name.

Add New Elements

You can add a new Listener, Label, Pipeline or Data sink using the plus button next to its name.

You can also create all of the aforementioned elements using the Create new button at the top-right:

Replicate

Most recent version: v0.1.0

See the changelog of this Action type.

Note that this Action is only available in certain Tenants. Contact us if you need to use it and don't see it in your Tenant.

Overview

This action offers automatic integration with models available on the Replicate platform, whether publicly accessible or privately deployed. This component simplifies accessing and utilizing a wide array of models without manual integration efforts.

Integrating Onum with replicate.com can offer several benefits, enhancing the platform's capabilities and the value it delivers:

  • Access to a Broad Range of Models

  • Ease of Model Deployment

  • Scalability

  • Continuous Model Updates

  • Cost-Effective AI Integration

  • Rapid Prototyping and Experimentation

  • Enhanced Data Privacy and Security

In order to configure this action, you must first link it to a Listener. Go to Building a Pipeline to learn how to link.

Ports

These are the input and output ports of this Action:

Input ports
  • Default port - All the events to be processed by this Action enter through this port.

Output ports
  • Default port - Events are sent through this port if no error occurs while processing them.

  • Error port - Events are sent through this port if an error occurs while processing them.

Configuration

1

Find MLReplicate in the Actions tab (under the Enrichment group) and drag it onto the canvas. Link it to the required Listener and Data sink.

2

To open the configuration, click the Action in the canvas and select Configuration.

3

Enter the required parameters:

Parameter
Description

Token*

The API token of the model you wish to use.

Version*

The version can usually be located by running a command on the machine using the Replicate API.

Input

This will be the input IP.

Output

Specify a name for the output event.
4

Click Save to complete.

Where to find these values

It is possible to use all the public models from https://replicate.com/collections/language-models to process natural language.

To fill in the values, copy the user and model information from the Replicate.com model page and paste it here. The following image illustrates how to locate the required parameters on the Replicate.com model website.

If the version does not appear, choose the Cog tag and copy the version as shown in the following picture.

Every model is identified by a version and requires a set of input parameters.

Convert Distance

Description

This operation converts values between different units of length or distance.


Data types

These are the input/output expected data types for this operation:

Input data

- Values whose unit of length you want to transform. They must be strings representing numbers.

Output data

- Resulting values after transforming them to the selected unit of length.


Parameters

These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):

Input units*

Enter the unit of length of your input data. You must indicate one of the following:

Metric

  • Nanometres (nm)

  • Micrometres (μm)

  • Millimetres (mm)

  • Centimetres (cm)

  • Metres (m)

  • Kilometres (km)

Imperial

  • Thou (th)

  • Inches (in)

  • Feet (ft)

  • Yards (yd)

  • Chains (ch)

  • Furlongs (fur)

  • Miles (mi)

  • Leagues (lea)

Maritime

  • Fathoms (ftm)

  • Cables

  • Nautical miles

Comparisons

  • Cars (4m)

  • Buses (8.4m)

  • American football fields (91m)

  • Football pitches (105m)

Astronomical

  • Earth-to-Moons

  • Earth's equators

  • Astronomical units (au)

  • Light-years (ly)

  • Parsecs (pc)

Output units*

Enter the required unit of length of your output data. You must indicate one of the following:

Metric

  • Nanometres (nm)

  • Micrometres (μm)

  • Millimetres (mm)

  • Centimetres (cm)

  • Metres (m)

  • Kilometres (km)

Imperial

  • Thou (th)

  • Inches (in)

  • Feet (ft)

  • Yards (yd)

  • Chains (ch)

  • Furlongs (fur)

  • Miles (mi)

  • Leagues (lea)

Maritime

  • Fathoms (ftm)

  • Cables

  • Nautical miles

Comparisons

  • Cars (4m)

  • Buses (8.4m)

  • American football fields (91m)

  • Football pitches (105m)

Astronomical

  • Earth-to-Moons

  • Earth's equators

  • Astronomical units (au)

  • Light-years (ly)

  • Parsecs (pc)


Example

Suppose you want to convert a series of events from meters into yards:

  1. In your Pipeline, open the required configuration and select the input Field.

  2. In the Operation field, choose Convert distance.

  3. Set Input units to Metres (m).

  4. Set Output units to Yards (yd).

  5. Give your Output field a name and click Save. The unit of length of the values in your input field will be transformed. For example:

100 Metres (m) -> 109.3613298 Yards (yd)

You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
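
For reference, a minimal Python sketch of the metres-to-yards conversion used in the example above, based on the standard definition 1 yard = 0.9144 metres; the platform's rounding may differ slightly.

METRES_PER_YARD = 0.9144  # exact by definition

def metres_to_yards(value):
    # The input is a string representing a number, as the operation expects.
    return float(value) / METRES_PER_YARD

print(round(metres_to_yards("100"), 7))  # 109.3613298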

Amazon CloudFront

Amazon CloudWatch

Amazon ELB

Amazon Route 53

Amazon S3

Amazon SQS

Apache Flume

Apache Kafka

AWS CloudTrail

AWS Lambda

Azure Event Hubs

Cisco NetFlow

Cisco Umbrella

Cloudflare

Confluent

Cortex

CrowdStrike

Fastly

Fluent Bit

Google Cloud Storage

Google Pub/Sub

HTTP

Juniper

Microsoft 365

Netskope

OKTA

OpenTelemetry

Sophos

Splunk

Syslog

TCP

Zeek/Bro

Zoom


Microsoft

Net Saved/Increased

Here you can see the difference (in %) of volume saved/increased in comparison to the previous period. Hover over the circle icons to see the input/output volumes and the total GB saved.

Listeners

View the total amount of data ingested by the Listeners in the selected time range compared to the previous, as well as the increased/decreased volume (in %).

Data Sink

You can see at a glance the total amount of data sent out of your Tenant, as well as the difference (in %) with the previous time range selected.

Data Volume

This shows the total volume of ingested data for the selected period. Notice it is the same as the input volume shown in the Net saved/increased metric. You can also see the difference (in %) with the previous time range selected.

Estimation

The estimated volumes ingested and sent over the next 24 hours. This is calculated using the data volume of the time period.



Listeners

Everything starts with a good Listener

Overview

Essentially, Onum receives any data through Listeners. These are logical entities created within a Distributor, acting as the gateway to the Onum system. Due to this, configuring a Listener involves defining an IP address, a listening port, and a transport layer protocol, along with additional settings depending on the type of Listener specialized in the data it will receive.

A Push-type Listener passively receives data without explicitly requesting it, whereas a Pull-type Listener actively requests data from an external source.

If you are using more than one Cluster, it is recommended not to use a Pull-type Listener. You can find out the Listener type in the integration-specific articles below.

Click the Listeners tab on the left menu for a general overview of the Listeners configured in your Tenant and the events generated.

  • The graph at the top plots the volume ingested by your listeners. Click Events to see the events in for all your Listeners, or Bytes to see a bar graph representing the bytes in. Learn more about this graph in this article.

    • Use the Stack Listeners toggle to view each individual Listener on your graph and its metrics.

  • Hover over a point on the chart to show a tooltip containing the Events and Bytes OUT for the selected time, as well as a percentage of how much increase/decrease has occurred between the previous lapse of time and the one currently selected.

At the bottom, you have a list of all the Listeners in your Tenant. You can switch between the Cards view, which shows each Listener in a card, and the Table view, which displays Listeners listed in a table. Learn more about the cards and table views in this article.

Narrow Down Your Data

There are various ways to narrow down what you see in this view:

Add Filters

Add filters to narrow down the Listeners you see in the list. Click the + Add filter button and select the required filter type(s). You can filter by:

  • Name: Select a Condition (Contains, Equals, or Matches) and a Value to filter Listeners by their names.

  • Type: Choose the Listener type(s) you want to see in the list.

  • Version: Filter Listeners by their version.

  • Created by: Selecting this option opens a User drop-down where you can filter by creator.

  • Updated by: Selecting this option opens a User drop-down where you can filter by the last user to update a Listener.

The filters applied will appear as tags at the top of the view.

Note that you can only add one filter of each type.

Select a Time Range

If you wish to see data for a specific time period, this is the place to click. Go to this article to dive into the specifics of how the time range works.

Select Tags

You can choose to view only those Listeners that have been assigned the desired tags. You can create these tags in the Listener settings or from the cards view. Press the Enter key to confirm the tag, then Save.

To filter by tags, click the + Tags button, select the required tag(s) and click Save.


Create a Listener

Depending on your permissions, you can create a new Listener from this view.

There are several ways to create a new Listener:

From the Listeners view:


Within a Pipeline:


From the Home Page:


Configure a Listener

Configuring your Listener involves various steps. You can open the configuration pane by creating a new Listener or by clicking a Listener in the Listener tab or the Pipeline view and selecting Edit Listener in the pane that opens.

Alternatively, click the ellipses in the card or table view and select Edit.

01. Type

The first step is to define the Listener Type. Select the desired type in this window and select Configuration.

02. Configuration

The configuration is different for each Listener type. Check the different Listener types and how to configure them in this section.

If your Listener is deployed in the Cloud, you will see an extra step for the network properties. Learn more about Listeners in a Cloud deployment in this article.

03. Labels

Use Onum's labels to cut out the noise with filters and search criteria based on specific metadata. This way, you can categorize events sent on and processed in your Pipelines.

Learn more about labels in this article.

Labels

Overview

Use Onum's labels to cut out the noise with filters and search criteria based on specific metadata. This way, you can categorize the events that Listeners receive before being processed in your Pipelines.

As different log formats are being ingested in real-time, the same Listener may ingest different technologies. Labels are useful for categorizing events based on specific criteria.

When creating or editing a Listener, use Labels to categorize and assign filters to your data.

For most Listeners, you will see two main event categories on this screen:

  • All Data - Events that follow the structure defined by the specified protocol, for example, Syslog events with the standard fields, or most of them.

  • Unparsed - These are events that do not follow the structure defined in the selected protocol.

You can define filters and rules for each of these main categories.

What Are Labels Used For?

Once you've defined your labels to filter specific events, you can use them in your Pipelines.

Instead of using the whole set of events that come into your Listeners, you can use your defined labels to use only specific sets of data filtered by specific rules.

Creating Your First Label

When you create a new Listener, you'll be prompted to the Labels screen after configuring your Listener data.

1

Click the + button under the set of data you want to filter (All Data or Unparsed). You'll see your first label. Click the pencil icon and give it a name that describes the data it will filter out.

In this example, we want to filter only events whose version is 2.x, so we named our label accordingly:

2

Below, see the Add filter button. This is where you add the criteria to categorize the content under that label. Choose the field you want to filter by.

In this example, we're choosing Version.

3

Now, define the filter criteria:

  • Condition - Choose between:

    • Contains - Checks when the indicated value appears anywhere in the log.

    • Equals - Filters for exact matches of the value in the log.

    • Matches - Filters for exact matches of the value in the log, allowing for regular expressions.

  • Value - Enter the value to filter by.

In this example, we are setting the Condition to Contains and Value to 2.
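
The three conditions behave like the following Python predicates. This is an illustrative sketch of the matching logic only, not Onum's internal implementation.

import re

def matches_filter(field_value, condition, value):
    if condition == "Contains":
        # The indicated value appears anywhere in the log field.
        return value in field_value
    if condition == "Equals":
        # Exact match of the value.
        return field_value == value
    if condition == "Matches":
        # Exact or regular-expression match.
        return re.search(value, field_value) is not None
    raise ValueError(f"unknown condition: {condition}")

print(matches_filter("2.1", "Contains", "2"))   # True
print(matches_filter("1.9", "Contains", "2"))   # False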

4

Click Save and see the header appear for your first label.

From here, you have various options:

Create a new label

To create a new subset of data, select the + sign that extends directly from the All data or Unparsed bars. Be aware that if you select the + sign extending from the header bar, you will create a subheader.

Create a sub-label

You can create a branch from your primary header by clicking the plus button that extends from the main header. There is no limit to the amount that you can add.

Notice that the subheader shows a filter icon with a number next to it to indicate the string of filters applied to it already.

Duplicate your label

To duplicate a label, simply select the duplicate button in its row.

Delete a label

To delete a label, simply select the delete button in its row.

If you attempt to delete a Label that is being used in a Pipeline, you will be asked to confirm where to remove it from.

Once you have completed your chain, click Save.


Unlabeled

Any data that has not been assigned a label will be automatically categorized as unlabeled. This allows you to see the data that is not being processed by any Pipeline, but has not been lost.

This label will appear in the list of Labels for use in your Pipeline so that you can process the data in its unfiltered form.

Your Listener is now ready to use and will appear in the list.

OCSF

Most recent version: v0.2.1

See the changelog of this Action type.

Overview

The OCSF Action allows users to build messages in accordance with the Open Cybersecurity Schema Framework.

In order to configure this Action, you must first link it to a Listener. Go to Building a Pipeline to learn how to link.

AI Action Assistant

This Action has an AI-powered chat feature that can help you configure its parameters. Read more about it in this article.

Ports

These are the input and output ports of this Action:

Input ports
  • Default port - All the events to be processed by this Action enter through this port.

Output ports
  • Default port - Events are sent through this port if no error occurs while processing them.

  • Error port - Events are sent through this port if an error occurs while processing them.

Configuration

1

Find OCSF in the Actions tab (under the Schema group) and drag it onto the canvas. Link it to the required Listener and Data sink.

2

To open the configuration, click the Action in the canvas and select Configuration.

3

Enter the required parameters:

Parameter
Description

Fields*

This is where you specify the fields you wish to include in your message, by type.

Fields beginning with _ are internal fields.

OCSF Template*

Choose the blueprint used to create the standardized cybersecurity message within the OCSF model.

Destination Field Name*

Give your message a name to identify it by in the end destination.

Message

The message will be automatically transformed to fit the OCSF template selected above, shown in JSON format. Drag and drop more fields from the fields area and rearrange them here.

4

Click Save to complete.

Example

Let's say you have received drone flight logs in JSON format and wish to transform them into an OCSF-formatted JSON using the Drone Flights Activity [8001] schema.

1

Raw data

{
  "drone_id": "DRONE-XT12",
  "operator": "alice.wong",
  "flight_id": "FL-20250602-0001",
  "start_time": "2025-06-02T08:00:00Z",
  "end_time": "2025-06-02T08:30:00Z",
  "status": "completed",
  "latitude": 40.7128,
  "longitude": -74.0060,
  "altitude_m": 150.0,
  "battery_level": 45,
  "vendor": "AeroFleet"
}
2

Parse the JSON

Add a Parser to the canvas and extract the fields using the automatic parsing.

3

Build the message

Now use the Message Builder to create a template containing these fields as an OCSF-formatted message.

Select the Drone Flights Activity [8001] schema from the list.

See the JSON reformatted in the Message area:

[
{
  "event_class": "drone_activity",
  "event_type_id": 8001,
  "time": "2025-06-02T08:00:00Z",
  "severity_id": 1,
  "message": "Drone flight FL-20250602-0001 completed successfully",
  "actor": {
    "user": {
      "name": "alice.wong"
    }
  },
  "drone_activity": {
    "drone_id": "DRONE-XT12",
    "flight_id": "FL-20250602-0001",
    "status": "completed",
    "start_time": "2025-06-02T08:00:00Z",
    "end_time": "2025-06-02T08:30:00Z",
    "location": {
      "latitude": 40.7128,
      "longitude": -74.0060,
      "altitude_m": 150.0
    },
    "battery_level": 45
  },
  "metadata": {
    "product": {
      "name": "DroneLogSystem",
      "vendor_name": "AeroFleet"
    }
  }
}
]

Drag and drop the fields to fill in the template with the real data.

Your message now matches the OCSF best practices: it normalizes data into structured actor, drone_activity, and metadata fields.
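
The mapping performed by dragging and dropping fields is equivalent to restructuring the parsed JSON into the nested sections shown above. Here is a minimal Python sketch of that restructuring, using the field names from this example; it is an illustration only, not the OCSF Action's actual code, and it maps only part of the template.

import json

# Parsed fields from step 2 (the raw drone flight log).
raw = {
    "drone_id": "DRONE-XT12", "operator": "alice.wong",
    "flight_id": "FL-20250602-0001", "start_time": "2025-06-02T08:00:00Z",
    "end_time": "2025-06-02T08:30:00Z", "status": "completed",
    "latitude": 40.7128, "longitude": -74.0060, "altitude_m": 150.0,
    "battery_level": 45, "vendor": "AeroFleet",
}

# Restructure into the actor / drone_activity / metadata sections of the
# Drone Flights Activity [8001] template shown above (partial mapping).
message = {
    "event_class": "drone_activity",
    "event_type_id": 8001,
    "time": raw["start_time"],
    "actor": {"user": {"name": raw["operator"]}},
    "drone_activity": {
        "drone_id": raw["drone_id"],
        "flight_id": raw["flight_id"],
        "status": raw["status"],
        "start_time": raw["start_time"],
        "end_time": raw["end_time"],
        "location": {
            "latitude": raw["latitude"],
            "longitude": raw["longitude"],
            "altitude_m": raw["altitude_m"],
        },
        "battery_level": raw["battery_level"],
    },
    "metadata": {"product": {"vendor_name": raw["vendor"]}},
}

print(json.dumps(message, indent=2))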

Convert Speed

Description

This operation converts values between different units of speed.


Data types

These are the input/output expected data types for this operation:

Input data

- Values whose unit of speed you want to transform. They must be strings representing numbers.

Output data

- Resulting values after transforming them to the selected unit of speed.


Parameters

These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):

Input units*

Enter the unit of speed of your input data. You must indicate one of the following:

Metric

  • Metres per second (m/s)

  • Kilometres per hour (km/h)

Imperial

  • Miles per hour (mph)

  • Knots (kn)

Comparisons

  • Human hair growth rate

  • Bamboo growth rate

  • World's fastest snail

  • Usain Bolt's top speed

  • Jet airliner cruising speed

  • Concorde

  • SR-71 Blackbird

  • Space Shuttle

  • International Space Station

Scientific

  • Sound in standard atmosphere

  • Sound in water

  • Lunar escape velocity

  • Earth escape velocity

  • Earth's solar orbit

  • Solar system's Milky Way orbit

  • Milky Way relative to the cosmic microwave background

  • Solar escape velocity

  • Neutron star escape velocity (0.3c)

  • Light in a diamond (0.4136c)

  • Signal in an optical fibre (0.667c)

  • Light (c)

Output units*

Enter the required unit of speed of your output data. You must indicate one of the following:

Metric

  • Metres per second (m/s)

  • Kilometres per hour (km/h)

Imperial

  • Miles per hour (mph)

  • Knots (kn)

Comparisons

  • Human hair growth rate

  • Bamboo growth rate

  • World's fastest snail

  • Usain Bolt's top speed

  • Jet airliner cruising speed

  • Concorde

  • SR-71 Blackbird

  • Space Shuttle

  • International Space Station

Scientific

  • Sound in standard atmosphere

  • Sound in water

  • Lunar escape velocity

  • Earth escape velocity

  • Earth's solar orbit

  • Solar system's Milky Way orbit

  • Milky Way relative to the cosmic microwave background

  • Solar escape velocity

  • Neutron star escape velocity (0.3c)

  • Light in a diamond (0.4136c)

  • Signal in an optical fibre (0.667c)

  • Light (c)


Example

Suppose you want to convert a series of events from kilometers per hour into miles per hour:

  1. In your Pipeline, open the required Action configuration and select the input Field.

  2. In the Operation field, choose Convert speed.

  3. Set Input units to Kilometres per hour (km/h).

  4. Set Output units to Miles per hour (mph).

  5. Give your Output field a name and click Save. The unit of speed of the values in your input field will be transformed. For example:

200 Kilometres per hour (km/h) -> 124.2841804 Miles per hour (mph)

You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
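
For reference, a minimal Python sketch of the km/h-to-mph conversion used in the example above, based on the standard factor 1 mile = 1.609344 km; the exact figure shown by the platform may differ slightly depending on the factor and rounding it applies.

KILOMETRES_PER_MILE = 1.609344

def kmh_to_mph(value):
    # The input is a string representing a number, as the operation expects.
    return float(value) / KILOMETRES_PER_MILE

print(round(kmh_to_mph("200"), 7))  # approx. 124.2742384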

Field Transformation

Most recent version: v1.1.1

See the changelog of this Action type.

Overview

The Field Transformation action acts as a container that enables users to perform a wide range of operations on data, including encoding and decoding various types of encryption, format conversion, file compression and decompression, data structure analysis, and much more. The results are stored in new event fields.

In order to configure this action, you must first link it to a Listener or other Action. Go to Building a Pipeline to learn how to link.

AI Action Assistant

This Action has an AI-powered chat feature that can help you configure its parameters. Read more about it in this article.

Ports

These are the input and output ports of this Action:

Input ports
  • Default port - All the events to be processed by this Action enter through this port.

Output ports
  • Default port - Events are sent through this port if no error occurs while processing them.

  • Error port - Events are sent through this port if an error occurs while processing them.

Configuration

1

Find Field Transformation in the Actions tab (under the Transformation group) and drag it onto the canvas.

2

To open the configuration, click the Action in the canvas and select Configuration.

3

Enter the required parameters:

Parameter
Description

Field to transform*

Choose a field from the linked Listener/Action to transform in your Action using the drop-down.

Add as many fields as required using the Add New Field button.

Operations*

See a comprehensive list of all the available operations for this Action.

Please bear in mind that the options available in this window will depend on the field to transform.

Add as many Operations as required using Add Operation. You can also use the arrow keys on your keyboard to navigate up and down the list.

If you have added more than one operation, you can reorder them by dragging and dropping them into position.

Test your operation

Before saving your action, you can test it to see the outcome.

Type a message in the Input field and see it transformed in the Output field after passing through the selected operation(s).

Output field*

Give a name to the transformed field and click Save to complete.
4

Click Save to complete the process.

Example

Here is an example of a data set on the Bytes in/out from IP addresses.

We can use the field transformation operations to reduce the quantity of data sent.

We have a Syslog Listener, connected to a Parser.

Click here if you need help configuring the Parser.

Configure the parser as follows:

Paste input:

This is the data in its raw format:

518;650;192.168.70.224;60045;192.168.70.210;3871;server.example.com

Select Manual in the parser drop-down, go to code mode using the button on the right, and paste this expression:

{fieldName1:csv(separator=";", indices=[0:string(alias="BYTES_IN"),1:string(alias="BYTES_OUT"),2:string(alias="SOURCE_IP_ADDRESS"),3:string(alias="SOURCE_PORT"),4:string(alias="DESTINATION_IP_ADDRESS"),5:string(alias="DESTINATION_PORT"),6:string(alias="DESTINATION_HOST")], totalColumns=7)}

You have now manually parsed the raw data into separate fields. This is reflected in the output field.

Link the Parser to the Field Transformation action and open its configuration.

We will use the To IP Hex and CRC32 operations.

Destination IP to Hex

Transform the Destination IP to hexadecimal to reduce the number of characters.

Original IP
Hexadecimal
  • Field>Parser: DESTINATION_IP_ADDRESS

  • Operation: To IP Hex

  • Output Field: DestinationIPAddressHex

Add a new field for Destination Host to CRC32

Encode the Destination Host as CRC32 to transform the machine name into 8 characters.

Original
CRC32
  • Field>Parser: DESTINATION_HOST

  • Operation: Crc32

  • Output field: DestinationHostCrc32


DESTINATION_IP_ADDRESS: 192.168.70.210 -> DestinationIPAddressHex: c0.a8.46.d2

DESTINATION_HOST: server.example.com -> DestinationHostCRC32: 0876633F
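
For reference, a minimal Python sketch of the two transformations applied in this example (To IP Hex and CRC32), using the standard library. It illustrates the idea only; the exact formatting produced by the platform's operations may differ.

import zlib

ip = "192.168.70.210"
host = "server.example.com"

# To IP Hex: render each octet as two hexadecimal digits.
ip_hex = ".".join(f"{int(octet):02x}" for octet in ip.split("."))

# CRC32: an 8-character checksum of the hostname.
host_crc32 = f"{zlib.crc32(host.encode()):08X}"

print(ip_hex)       # c0.a8.46.d2
print(host_crc32)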


Convert Mass

Description

This operation converts values between different units of mass.


Data types

These are the input/output expected data types for this operation:

Input data

- Values whose unit of mass you want to transform. They must be strings representing numbers.

Output data

- Resulting values after transforming them to the selected unit of mass.


Parameters

These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):

Input units*

Enter the unit of mass of your input data. You must indicate one of the following:

Metric

  • Yoctogram (yg)

  • Zeptogram (zg)

  • Attogram (ag)

  • Femtogram (fg)

  • Picogram (pg)

  • Nanogram (ng)

  • Microgram (μg)

  • Milligram (mg)

  • Centigram (cg)

  • Decigram (dg)

  • Gram (g)

  • Decagram (dag)

  • Hectogram (hg)

  • Kilogram (kg)

  • Megagram (Mg)

  • Tonne (t)

  • Gigagram (Gg)

  • Teragram (Tg)

  • Petagram (Pg)

  • Exagram (Eg)

  • Zettagram (Zg)

  • Yottagram (Yg)

Imperial Avoirdupois

  • Grain (gr)

  • Dram (dr)

  • Ounce (oz)

  • Pound (lb)

  • Nail

  • Stone (st)

  • Quarter (qr)

  • Tod

  • US hundredweight (cwt)

  • Imperial hundredweight (cwt)

  • US ton (t)

  • Imperial ton (t)

Imperial Troy

  • Grain (gr)

  • Pennyweight (dwt)

  • Troy dram (dr t)

  • Troy ounce (oz t)

  • Troy pound (lb t)

  • Mark

Archaic

  • Wey

  • Wool wey

  • Suffolk wey

  • Wool sack

  • Coal sack

  • Load

  • Last

  • Flax or feather last

  • Gunpowder last

  • Picul

  • Rice last

Comparisons

  • Big Ben (14 tonnes)

  • Blue whale (180 tonnes)

  • International Space Station (417 tonnes)

  • Space Shuttle (2,041 tonnes)

  • RMS Titanic (52,000 tonnes)

  • Great Pyramid of Giza (6,000,000 tonnes)

  • Earth's oceans (1.4 yottagrams)

Astronomical

  • A teaspoon of neutron star (5,500 million tonnes)

  • Lunar mass (ML)

  • Earth mass (M⊕)

  • Jupiter mass (MJ)

  • Solar mass (M☉)

  • Sagittarius A* (7.5 x 10^36 kgs-ish)

  • Milky Way galaxy (1.2 x 10^42 kgs)

  • The observable universe (1.45 x 10^53 kgs)

Output units*

Enter the required unit of mass of your output data. You must indicate one of the following:

Metric

  • Yoctogram (yg)

  • Zeptogram (zg)

  • Attogram (ag)

  • Femtogram (fg)

  • Picogram (pg)

  • Nanogram (ng)

  • Microgram (μg)

  • Milligram (mg)

  • Centigram (cg)

  • Decigram (dg)

  • Gram (g)

  • Decagram (dag)

  • Hectogram (hg)

  • Kilogram (kg)

  • Megagram (Mg)

  • Tonne (t)

  • Gigagram (Gg)

  • Teragram (Tg)

  • Petagram (Pg)

  • Exagram (Eg)

  • Zettagram (Zg)

  • Yottagram (Yg)

Imperial Avoirdupois

  • Grain (gr)

  • Dram (dr)

  • Ounce (oz)

  • Pound (lb)

  • Nail

  • Stone (st)

  • Quarter (qr)

  • Tod

  • US hundredweight (cwt)

  • Imperial hundredweight (cwt)

  • US ton (t)

  • Imperial ton (t)

Imperial Troy

  • Grain (gr)

  • Pennyweight (dwt)

  • Troy dram (dr t)

  • Troy ounce (oz t)

  • Troy pound (lb t)

  • Mark

Archaic

  • Wey

  • Wool wey

  • Suffolk wey

  • Wool sack

  • Coal sack

  • Load

  • Last

  • Flax or feather last

  • Gunpowder last

  • Picul

  • Rice last

Comparisons

  • Big Ben (14 tonnes)

  • Blue whale (180 tonnes)

  • International Space Station (417 tonnes)

  • Space Shuttle (2,041 tonnes)

  • RMS Titanic (52,000 tonnes)

  • Great Pyramid of Giza (6,000,000 tonnes)

  • Earth's oceans (1.4 yottagrams)

Astronomical

  • A teaspoon of neutron star (5,500 million tonnes)

  • Lunar mass (ML)

  • Earth mass (M⊕)

  • Jupiter mass (MJ)

  • Solar mass (M☉)

  • Sagittarius A* (7.5 x 10^36 kgs-ish)

  • Milky Way galaxy (1.2 x 10^42 kgs)

  • The observable universe (1.45 x 10^53 kgs)


Example

Suppose you want to convert a series of events from kilograms into pounds:

  1. In your Pipeline, open the required configuration and select the input Field.

  2. In the Operation field, choose Convert mass.

  3. Set Input units to Kilogram (kg).

  4. Set Output units to Pound (lb).

  5. Give your Output field a name and click Save. The unit of mass of the values in your input field will be transformed. For example:

100 Kilogram (kg) -> 220.4622622 Pound (lb)

You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
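
For reference, a minimal Python sketch of the kilogram-to-pound conversion used in the example above, based on the standard definition 1 lb = 0.45359237 kg; the platform's rounding may differ slightly.

KILOGRAMS_PER_POUND = 0.45359237  # exact by definition

def kg_to_lb(value):
    # The input is a string representing a number, as the operation expects.
    return float(value) / KILOGRAMS_PER_POUND

print(round(kg_to_lb("100"), 7))  # 220.4622622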

List to String

Description

This operation converts a list of comma-separated values (data type listString) into a string of values divided by a specific separator.


Data types

These are the input/output expected data types for this operation:

Input data

- List of comma-separated values.

Output data

- Resulting string of values divided by the given separator.


Parameters

These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):

Separator*

Enter the character(s) you want to use to divide the values in the input string lists.


Example

Suppose you want to convert a series of lists of comma-separated values into a single string representing those values separated by /:

  1. In your Pipeline, open the required configuration and select the input Field.

  2. In the Operation field, choose List to string.

  3. Set Separator to /.

  4. Give your Output field a name and click Save. The values in your input field will be transformed into a string with its values separated by the given character. For example:

hello,my,world -> hello/my/world

You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
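
For reference, a minimal Python sketch of the equivalent transformation outside Onum, assuming the separator / used in the example above.

# listString input: a list of comma-separated values.
values = ["hello", "my", "world"]
separator = "/"
print(separator.join(values))  # hello/my/world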

Cortex

Integrate with API Logs from the Cortex Platform using the HTTP Pull Listener and the data Integration API.


Actions

Perform operations on your events

Overview

The Actions tab shows all available actions to be assigned and used in your Pipeline. Use the search bar at the top to find a specific action. Hover over an action in the list to see a tooltip, as well as the option to View details.

To add an action to a Pipeline, drag it onto the canvas.

Onum supports Action versioning, so be aware that the configuration may show either the latest version, if you are adding a new Action, or the current version, if you are editing an existing one.

Action Versioning

We are constantly updating and improving Actions; therefore, you may come across old or even discontinued Actions.

See the complete version history of each Action here.

If there is an updated version of the Action available, it will show update available in its Definition, above the node when added to a Pipeline, and Details pane.

If you have added an Action to a Pipeline that is now discontinued, it will show as deactivated in the Canvas. You'll soon be able to see all the Actions with updates available in the Actions view.

Actions List

See this table to understand what each Action does, when to use it, and how to get the most value from your Pipelines. Click an Action name to see its article.

Action
Description
Example use case

Maintain state across event streams.

Track rolling count of failed logins by IP.

Use models hosted on Amazon Bedrock to enrich log content.

Enrich logs by extracting insights like key entities.

Mask, hash, or redact sensitive fields.

Obfuscate usernames or IPs in real-time.

Extract text from images or diagrams.

OCR screenshots of phishing sites.

Run custom Python in an isolated container.

NLP on messages, custom alert logic.

Execute ML models via hosted APIs.

Classify log severity with ML.

Drop or allow events based on logic.

Filter out successful health check logs.

Add generated fields (timestamp, random, static...)

Tag events with trace ID and pipeline time.

Apply math, encoding, parsing, or string operations to fields.

Hash IPs, defang URLs, convert timestamps.

Flatten nested JSON to dot-notation keys.

Flatten AWS logs for easy indexing in SIEM.

Iterate array fields and emit per-item events.

Split DNS records into individual log lines.

Redact sensitive data via Google API.

Remove SSNs, emails from customer logs.

Use Google’s LLM to enrich log content.

Summarize error logs for dashboards.

Aggregate by key(s) over a time window.

Count logins per user every minute.

Trigger external HTTP(S) calls inline.

Notify PagerDuty, call enrichment APIs.

Remap or rename JSON fields and structure.

Standardize custom app logs to a shared schema.

Convert arrays into individual events.

Split one event with 5 IPs into 5 separate events.

Apply open-source LLMs to event text.

Translate or tag non-English log data.

Add fields from a reference table.

Add business unit or geolocation to IPs.

Compute values using event fields.

Calculate duration = end_time - start_time.

Compose structured output for downstream tools.

Create Slack-friendly JSON alerts.

Convert events to Open Cybersecurity Schema.

Standardize endpoint data for SIEM ingestion.

Parse text using regex or pattern to extract fields.

Convert syslog strings into structured events.

Use Redis for state lookups or caching.

Limit login attempts per user per hour.

Run any hosted model from Replicate.

Enrich logs using anomaly detection models.

Randomly pass only a portion of events.

Keep 10% of debug logs for cost control.

Match events against threat rule patterns.

Detect C2 activity or abnormal auth behavior.

Emit only first-seen values.

Alert on first-time-seen device IDs or IPs.

Pipelines

A Pipeline is Onum's way of streamlining your data

Overview

Use Pipelines to transform your events and build a data flow linking Actions from Listeners and to Data sinks.

Select the Pipelines tab in the left menu to visualize all your Pipelines in one place. Here's what you will find and all the actions you can perform in this area:

  • The graph at the top plots the data volume going through your Pipelines. The purple line graph represents the events in, and the blue one represents the events going out. Use the buttons above the graph to switch between Events/Bytes, and the Frequency slider bar to choose how frequently you want to plot the events/bytes in the chart.

  • At the bottom, you will find a list of all the Pipelines in your tenant. You can switch between the Cards view, which shows each Pipeline in a card, and the Table view, which displays Pipelines listed in a table. Learn more about the cards and table views in this article.

Narrow Down Your Data

There are various ways to narrow down what you see in this view, both the Pipeline list and the informative graphs. To do it, use the options at the top of this view:

Add Filters

Add filters to narrow down the Pipelines you see in the list. Click the + Add filter button and select the required filter type(s). You can filter by:

  • Name: Select a Condition (Contains, Equals, or Matches) and a Value to filter Pipelines by their names.

  • Status: Choose the status(es) you want to filter by: Draft, Running, and/or Stopped. You'll only see Pipelines with the selected status(es).

  • Created by: Filter for the creator of the Pipeline in the window that appears.

  • Updated by: Filter by user to see the Pipelines they last updated.

The filters applied will appear as tags at the top of the view.

Note that you can only add one filter of each type.

Select a Time Range

If you wish to see data for a specific time period, this is the place to click. Go to Selecting a Time Range to dive into the specifics of how the time range works.

Select Tags

You can choose to view only those Pipelines that have been assigned the desired tags. You can create these tags in the Pipeline settings or from the cards view. Press the Enter key to confirm the tag, then Save.

To filter by tags, click the + Tags button and select the required tag(s).

Metrics

Below the filters, you will see 3 metrics informing you about various components in your Pipelines.

Note that these metrics are affected by the time range selected.

Listeners

View the events per second (EPS) ingested by all Listeners in your Pipelines for the selected time range, as well as the difference in percentage compared to the previous lapse.

Data Sink

View the events per second (EPS) sent by all Data Sinks in your Pipelines for the selected time range, as well as the difference in percentage compared to the previous period.

Data Volume

See the overall data volume processed by all Pipelines for the selected time range, and the difference in percentage compared to the previous period.

Visualize Your Data In/Out

Select between In and Out to see the volume received or sent by your Pipelines for the selected time range. The line graph represents the Events and the bar graph represents Bytes.

Hover over a point on the chart to show a tooltip containing the Events and Bytes in/out for the selected time, as well as the percentage increase/decrease compared to the previous lapse of time.

You can also analyze a different time range directly on the graph. To do it, click a starting date on the graph and drag the frame that appears to the required ending date. The time range above will also be updated.

Pipelines List

At the bottom, you have a list of all the Pipelines in your tenant.

Use the Group by drop-down menu at the right area to select a criterion to organize your Pipelines in different groups (Status or None). You can also use the search icon to look for specific Pipelines by name.

Use the buttons at the left of this area to display the Pipelines as Cards or listed in a Table:

Cards View

In this view, Pipelines are displayed as cards that display useful information. Click a card to open the Pipeline detail view, or double-click it to access it.

This is the information you can check on each card:

  • The percentage at the top left corner indicates the amount of data that goes out of the Pipeline compared to the total incoming events, so you can check how data is optimized at a glance. Hover over it to see the in/out data in bytes and the estimation over the next 24 hours.

  • You can also see the status of the Pipeline (Running, Draft, or Stopped).

  • Next to the status, you can check the Pipeline's current version.

  • Click the Add tag button to define tags for the Pipeline. To assign a new tag, simply type the name you wish to assign, press Enter, and then select the Save button. If the Pipeline has tags defined already, you'll see the number of tags next to the tag icon.

  • Click the ellipsis in the right-hand corner of the card to reveal the options to Edit, Copy ID, or Remove it.

Table view

In this view, Pipelines are displayed in a table, where each row represents a Pipeline. Click a row to open the Pipeline detail view, or double-click it to access it.

Click the cog icon at the top left corner to rearrange the column order, hide columns, or pin them. You can click Reset to recover the default configuration.

Pipeline detail view

Click a Pipeline to open its settings in the right-hand pane. Here you can see Pipeline versions and edit the Pipeline. Click the ellipsis in the top right to Copy ID, Duplicate, or Remove it.

The details pane is split into three tabs showing the Pipeline at different statuses:

Tab
Description

Running

This is the main tab, where you can see details of the Pipeline versions that are currently running.

Select the drop-down next to the Pipeline version name to see which clusters the Pipeline is currently running in.

Draft

Check the details of the draft versions of your Pipeline.

Stopped

Check the details of the versions of your Pipeline that are currently stopped.

Once you have located the Pipeline to work with, click Edit Pipeline to open it.

Duplicate a Pipeline

If you wish to use a Pipeline just like the one you are currently working on, click the ellipsis in the Cards or Table view and select Duplicate, or do it from the Configuration pane.

Create a Pipeline

Depending on your permissions, you can create a new Pipeline from this view. There are several ways to create a new Pipeline:

  • From the Pipelines view

  • From the Home page

Either way, this will open the new Pipeline, ready to be built.

Give your Pipeline a name and add optional Tags to identify it. You can also assign a Version in the top-right.

Keep reading to learn how to build a Pipeline from this view.

Build your Pipeline

See Building a Pipeline to learn step by step.

Multiply Operation

Description

This operation allows you to multiply the numbers in a list separated by a specified delimiter. This is useful for scaling data, performing simple arithmetic, or manipulating numerical datasets.


Data types

These are the input/output expected data types for this operation:

Input data

- Input string containing numbers to multiply, separated by a specified delimiter.

Output data

- The result of the multiplication.


Parameters

These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):

Delimiter*

Choose the delimiter that separates the numbers in your input data:

  • Line feed - Select this to have each line of text as a separate value.

  • Space - Your numbers are separated by spaces.

  • Comma - Your numbers are separated by commas (,)

  • Semi-colon - Your numbers are separated by semi-colons (;)

  • Colon - Your numbers are separated by colons (:)

  • CRLF - Carriage return line feed (CRLF, \r\n) originally referred to moving the carriage on typewriters back to the starting position and advancing to a new line. In computing, it is the sequence used by Windows-based systems to mark the end of a line. If your input uses \r\n as the line-ending sequence, select this delimiter to correctly separate values. For example: 100\r\n200\r\n300


Example

Suppose you want to multiply a series of numbers in your input strings. They are separated by commas (,). To do it:

  1. In your Pipeline, open the required Action configuration and select the input Field.

  2. In the Operation field, choose Multiply Operation.

  3. Set Delimiter to Comma.

  4. Give your Output field a name and click Save. You'll get the multiplication of the numbers in your input data. For example:

2, 3, 5 -> 30

You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
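If it helps to picture the behavior, here is a minimal Python sketch of the same split-and-multiply logic (the function name and the float conversion are illustrative assumptions, not the Action's actual implementation):

from functools import reduce

def multiply_values(value: str, delimiter: str = ",") -> float:
    # Split on the configured delimiter, skip empty parts, and multiply everything together.
    numbers = [float(part) for part in value.split(delimiter) if part.strip()]
    return reduce(lambda acc, n: acc * n, numbers, 1.0)

print(multiply_values("2, 3, 5"))  # 30.0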

Subtract Operation

Description

This operation performs arithmetic subtraction between numbers separated by a specified delimiter. This operation is useful for calculations, data manipulation, and analyzing numerical differences.


Data types

These are the input/output expected data types for this operation:

Input data

- Input string containing numbers to subtract, separated by a specified delimiter.

Output data

- The result of the subtraction.


Parameters

These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):

Delimiter*

Choose the delimiter that separates the numbers in your input data. Enter one of the following:

  • Line feed - Select this to have each line of text as a separate value.

  • Space - Your numbers are separated by spaces.

  • Comma - Your numbers are separated by commas (,)

  • Semi-colon - Your numbers are separated by semi-colons (;)

  • Colon - Your numbers are separated by colons (:)

  • CRLF - Carriage return line feed (CRLF, \r\n) originally referred to moving the carriage on typewriters back to the starting position and advancing to a new line. In computing, it is the sequence used by Windows-based systems to mark the end of a line. If your input uses \r\n as the line-ending sequence, select this delimiter to correctly separate values. For example: 100\r\n200\r\n300


Example

Suppose you want to get the subtraction of a series of numbers in your input strings. They are separated by commas (,). To do it:

  1. In your Pipeline, open the required Action configuration and select the input Field.

  2. In the Operation field, choose Subtract Operation.

  3. Set Delimiter to Comma.

  4. Give your Output field a name and click Save. You'll get the subtraction of the numbers in your input data. For example:

10, 5, 2 -> 3

You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.

JSON Minify

Description

This operation compresses JSON data by removing unnecessary whitespace, line breaks, and formatting while retaining the full structure and functionality of the JSON. It is handy for reducing the size of JSON files or strings when storage or transfer efficiency is required.


Data types

These are the input/output expected data types for this operation:

Input data

- Strings representing the JSON data you want to optimize.

Output data

- Optimized versions of the JSON data in your input strings.


Example

Suppose you want to minify the JSON data in your input strings. To do it:

  1. In your Pipeline, open the required Action configuration and select the input Field.

  2. In the Operation field, choose Json Minify.

  3. Give your Output field a name and click Save. Your JSON data will be minified into a single compact line.

For example, the following JSON:

{
    "name": "John Doe",
    "age": 30,
    "isActive": true,
    "address": {
        "city": "New York",
        "zip": "10001"
    }
}

will be minified like this:

{"name":"John Doe","age":30,"isActive":true,"address":{"city":"New York","zip":"10001"}}

You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
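Conceptually, minification is just re-serializing the JSON without whitespace. A minimal Python sketch of the idea (illustrative only, not the Action's implementation):

import json

def minify_json(raw: str) -> str:
    # Parse the JSON and dump it back without indentation or spaces after separators.
    return json.dumps(json.loads(raw), separators=(",", ":"))

print(minify_json('{ "name": "John Doe", "age": 30 }'))
# {"name":"John Doe","age":30}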

Apache Kafka

Most recent version: v2.0.0

See the changelog of this Listener type here.

This is a Pull Listener and therefore should not be used in environments with more than one cluster.

Overview

Onum supports integration with Apache Kafka. Select Apache Kafka from the list of Listener types and click Configuration to start.

Configuration

Now you need to specify how and where to collect the data and how to establish a connection with Apache Kafka.

Metadata

Enter the basic information for the new Listener.

Parameter
Description

Configuration

Now, add the configuration to establish the connection.

Parameter
Description

Authentication

Leave as None or select the authentication type to enable the settings:

Authentication Type
Description

Click Create labels to move on to the next step and define the required Labels if needed.

Graph Calculations

Overview

This article outlines the more complex calculations that go on behind the graphs you see.

In the Listeners, Pipelines, and Data sinks views, you will see detailed metrics on your events and bytes in/out, represented in a graph at the top of these areas.

The line graph represents the events in/out, and the bar graph represents bytes in/out. Hover over a point on the chart to show a tooltip containing the events and bytes in for the selected time, as well as the percentage increase/decrease compared to the previous lapse of time.

The chart in the Pipelines area is slightly different and includes some additional features. Learn more in the Pipelines section.

Events

The values on the left-hand side represent the events in/out for the selected period.

Bytes

The values on the right-hand side represent the bytes in/out for the selected period.

Stacked view

By default, these graphs give an overview calculation of all the Listeners/Sinks in your Tenants. If you wish to see each Listener or Sink individually, use the Stack toggle.

Llama

Most recent version: v0.1.0

See the changelog of this Action type here.

Note that this Action is only available in certain Tenants. Contact us if you need to use it and don't see it in your Tenant.

Overview

This Action enriches events based on the evaluation of the LLaMA 2 Chat model. This model offers a flexible, advanced prompt system capable of understanding and generating responses across a broad spectrum of use cases for text logs.

By integrating LLaMA 2, Onum not only enhances its data processing and analysis capabilities but also becomes more adaptable and capable of offering customized and advanced solutions for the specific challenges faced by users across different industries.

In order to configure this action, you must first link it to a Listener. Go to Building a Pipeline to learn how to link.

Ports

These are the input and output ports of this Action:

Input ports
  • Default port - All the events to be processed by this Action enter through this port.

Output ports
  • Default port - Events are sent through this port if no error occurs while processing them.

  • Error port - Events are sent through this port if an error occurs while processing them.

Configuration

1

Find MLLLaMa2 in the Actions tab (under the Enrichment group) and drag it onto the canvas. Link it to the required Listener and Data sink.

2

To open the configuration, click the Action in the canvas and select Configuration.

3

Enter the required parameters:

Parameter
Description

Click Save to complete.

Lookup

Most recent version: v0.1.2

See the changelog of this Action type here.

Overview

The Lookup action allows you to retrieve information from your uploaded lookups. To learn more about how to upload data, go to Enrichment.

In order to configure this action, you must first link it to a Listener. Go to Building a Pipeline to learn how to link.

Ports

These are the input and output ports of this Action:

Input ports
  • Default port - All the events to be processed by this Action enter through this port.

Output ports
  • Default port - Events are sent through this port if no error occurs while processing them.

  • Error port - Events are sent through this port if an error occurs while processing them.

Configuration

1

Find Lookup in the Actions tab (under the Enrichment group) and drag it onto the canvas. Link it to the required Listener and Data sink.

2

To open the configuration, click the Action in the canvas and select Configuration.

3

Enter the required parameters:

Parameter
Description
4

Click Save to complete.

Unescape String

Description

This operation is used to decode escape sequences in a string back to their original characters. Escaped strings are often used in programming, web development, or data transmission to represent special characters that cannot be directly included in text.


Data types

These are the input/output expected data types for this operation:

Input data

- String with escape characters.

Output data

- Resulting unescaped string.


Example

Suppose you want to unescape characters in a series of input strings. To do it:

  1. In your Pipeline, open the required Action configuration and select the input Field.

  2. In the Operation field, choose Unescape string.

  3. Give your Output field a name and click Save. All the escape characters will be removed. For example:

She said, \"Hello, world!\" -> She said, "Hello, world!"

You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
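As a rough illustration of what unescaping does, here is a Python sketch that decodes common backslash escape sequences; the Action's exact set of supported escapes may differ, and the ASCII-only simplification is an assumption:

def unescape(value: str) -> str:
    # Decode backslash escape sequences such as \", \\, \n, or \t.
    # Simplified: assumes ASCII input, since the bytes round-trip mangles non-ASCII text.
    return value.encode("utf-8").decode("unicode_escape")

print(unescape('She said, \\"Hello, world!\\"'))
# She said, "Hello, world!"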

Divide Operation

Description

This operation divides a list of numbers provided in the input string, separated by a specified delimiter.


Data types

These are the input/output expected data types for this operation:

Input data

- List of numbers you want to divide, separated by a specified delimiter.

Output data

- Result of the division of the numbers in your input string.


Parameters

These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):

Delimiter*

Choose the delimiter that separates the numbers in your input data:

  • Line feed - Select this to have each line of text as a separate value.

  • Space - Your numbers are separated by spaces.

  • Comma - Your numbers are separated by commas (,)

  • Semi-colon - Your numbers are separated by semi-colons (;)

  • Colon - Your numbers are separated by colons (:)

  • CRLF - Carriage return line feed (CRLF, \r\n) originally referred to moving the carriage on typewriters back to the starting position and advancing to a new line. In computing, it is the sequence used by Windows-based systems to mark the end of a line. If your input uses \r\n as the line-ending sequence, select this delimiter to correctly separate values. For example: 100\r\n200\r\n300


Example

Suppose you want to divide a series of numbers in your input strings. They are separated by colons (:). To do it:

  1. In your Pipeline, open the required Action configuration and select the input Field.

  2. In the Operation field, choose Divide Operation.

  3. Set Delimiter to Colon.

  4. Give your Output field a name and click Save. You'll get the division results. For example:

26:2:4 -> 3.25

You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
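Note that the numbers are divided from left to right, so 26:2:4 is evaluated as (26 / 2) / 4 = 3.25. A minimal Python sketch of that folding logic (illustrative only):

def divide_values(value: str, delimiter: str = ":") -> float:
    numbers = [float(part) for part in value.split(delimiter)]
    result = numbers[0]
    # Divide the running result by each remaining number, left to right.
    for n in numbers[1:]:
        result /= n
    return result

print(divide_values("26:2:4"))  # 3.25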

AI Assistant

Just ask, and the assistant helps you build your Pipelines

Onum offers you two different types of AI-powered assistants to help you build powerful Pipelines:

  • Pipeline Assistant - Build your Pipeline structure using this assistant.

  • Action Assistant - Once you've defined your Pipeline structure, you can configure each Action's settings using this assistant.

AI Pipeline Assistant

Note that this feature is only available for certain Tenants. Contact us if you need to use it and don't see it in your Tenant.

Overview

The Pipeline Assistant is an AI-powered chat feature designed to help users design and build their Pipelines. Any configuration requested through the chat will be automatically applied. Simply enter the results you expect from your Pipeline and the AI will generate a Pipeline structure according to your needs.

To start using it, create a new Pipeline, drag a Listener, and just click this icon at the bottom left corner:

Note that this AI Assistant only creates Pipeline structures. The individual Actions in the generated Pipeline won't be configured. You can use our Action Assistant to help you configure your Actions.

Examples

Here are some example use cases where we ask for help from the Pipeline Assistant. Check the prompts we use and the resulting configuration in each example picture.

Filter most common priorities

Send a report of aggregated data to Jira

Median

Description

This operation calculates the median value of a set of numbers separated by a specified delimiter. The median is a statistical measure representing the middle value of a sorted dataset. It divides the data into two halves, with 50% of the data points below and 50% above the median.


Data types

These are the input/output expected data types for this operation:

Input data

- List of numbers separated by a specified delimiter.

Output data

- The result of the median.


Parameters

These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):

Delimiter*

Choose the delimiter that separates the numbers in your input data:

  • Line feed - Select this to have each line of text as a separate value.

  • Space - Your numbers are separated by spaces.

  • Comma - Your numbers are separated by commas (,)

  • Semi-colon - Your numbers are separated by semi-colons (;)

  • Colon - Your numbers are separated by colons (:)

  • CRLF - Carriage return line feed (CRLF, \r\n) originally referred to moving the carriage on typewriters back to the starting position and advancing to a new line. In computing, it is the sequence used by Windows-based systems to mark the end of a line. If your input uses \r\n as the line-ending sequence, select this delimiter to correctly separate values. For example: 100\r\n200\r\n300


Example

Suppose you want to calculate the median of a series of numbers in your input strings. They are separated by commas (,). To do it:

  1. In your Pipeline, open the required Action configuration and select the input Field.

  2. In the Operation field, choose Median.

  3. Set Delimiter to Comma.

  4. Give your Output field a name and click Save. You'll get the median of the numbers in your input data. For example:

10, 5, 20, 15, 25 -> 15

You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
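This is the standard statistical median: the values are sorted and the middle one is returned (for an even count, the mean of the two middle values). A Python sketch of the idea (illustrative only):

import statistics

def median_of(value: str, delimiter: str = ",") -> float:
    numbers = [float(part) for part in value.split(delimiter)]
    # statistics.median sorts the values and returns the middle one,
    # or the mean of the two middle values for an even count.
    return statistics.median(numbers)

print(median_of("10, 5, 20, 15, 25"))  # 15.0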

Sum Operation

Description

This operation calculates the sum of a series of numbers provided as input, separated by a specified delimiter. It is a simple yet powerful tool for numerical data analysis, enabling quick summation of datasets or values.


Data types

These are the input/output expected data types for this operation:

Input data

- Input string containing numbers to sum, separated by a specified delimiter.

Output data

- The result of the total sum.


Parameters

These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):

Delimiter*

Choose the delimiter that separates the numbers in your input data. Enter one of the following:

  • Line feed - Select this to have each line of text as a separate value.

  • Space - Your numbers are separated by spaces.

  • Comma - Your numbers are separated by commas (,)

  • Semi-colon - Your numbers are separated by semi-colons (;)

  • Colon - Your numbers are separated by colons (:)

  • CRLF - Carriage return line feed (CRLF, \r\n) originally referred to moving the carriage on typewriters back to the starting position and advancing to a new line. In computing, it is the sequence used by Windows-based systems to mark the end of a line. If your input uses \r\n as the line-ending sequence, select this delimiter to correctly separate values. For example: 100\r\n200\r\n300


Example

Suppose you want to get the sum of a series of numbers in your input strings. They are separated by commas (,). To do it:

  1. In your Pipeline, open the required Action configuration and select the input Field.

  2. In the Operation field, choose Sum Operation.

  3. Set Delimiter to Comma.

  4. Give your Output field a name and click Save. You'll get the sum of the numbers in your input data. For example:

10, 5, 2 -> 17

You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.

Palo Alto

Where the vendor is paloalto, its product is cortex_xdr. For Cortex XDR, we currently have three different product types/endpoints:

  • alerts

  • alerts_multi

  • incidents

Inside each of those endpoints, we have the YAML file to configure.


Name*

Enter a name for the new Listener.

Description

Optionally, enter a description for the Listener.

Tags

Add tags to easily identify your Listener. Hit the Enter key after you define each tag.

Bootstrap Servers

The initial host-port pair that acts as the starting point to access the full set of alive servers in the cluster. This is a comma-separated list of host and port pairs using : as the separator, e.g. localhost:9092,another.host:9092

Enter your value and click Add element to add the required elements.

Group ID

The group ID is a string that uniquely identifies the group of consumer processes. Find this in your Kafka Cluster at Home > Configuration > Consumer Properties.

Topics

The topic to connect to. Use kafka-topics --bootstrap-server :9092 --describe and write the result here.

Auto offset reset policy*

This policy defines the behavior when there are no committed positions available or when an offset is out of range. Choose between Earliest, Latest, or None.

Plain

  • Username

  • Password

Scram

  • Username

  • Password

  • SCRAM Mechanism* - Either SHA-256 or SHA-512.

TLS

  • CA Certificate - The path containing the CA certificates.

  • Certificate* - This is the predefined TLS certificate.

  • Client key*- The private key of the corresponding certificate.

  • Skip Verify - Select true to skip or false to require verification.

  • Server Name - Enter the name of the server to connect to.

  • Minimum TLS version* - Select the required version from the menu.


AVG EPS

The average events per second ingested or sent by all listeners/Data sinks in your Tenant.

MAX EPS

The maximum number of events per second ingested or sent by all Listeners/Data sinks in your Tenant.

MIN EPS

The minimum number of events per second ingested or sent by all Listeners/Data sinks in your Tenant.

AVG Bytes

The average kilobytes per second ingested or sent by all Listeners/Data sinks in your Tenant.

MAX Bytes

The maximum kilobytes per second ingested or sent by all Listeners/Data sinks in your Tenant.

MIN Bytes

The minimum kilobytes per second ingested or sent by all Listeners/Data sinks in your Tenant.


Token*

The API token of the model you wish to use.

Model*

The name of the model to connect to. It’s possible to select between the three available Llama2 models: Llama2-7b-Chat, Llama2-13b-Chat and Llama2-70b-Chat.

Prompt

This will be the input field to call the model.

Temperature

This is the randomness of the responses. If the temperature is low, the data sampled will be more specific and condensed, whereas setting a high temperature will acquire more diverse but less precise answers.

System Prompt

Describe in detail the task you wish the AI assistant to carry out.

Max Length

The maximum number of characters for the result.

Out field*

Specify a name for the output event.


Select table*

Select the table you wish to retrieve data from. The tables that show here in the list will be those you previously uploaded in the Enrichment view.

Key columns

The Key column you selected during the upload will automatically appear. Select the field to search for this key column.

Outputs

Choose the field that will be injected from the drop-down and assign it a name. Add as many as required.


Azure Event Hubs

Most recent version: v0.0.1

See the changelog of this Listener type here.

This is a Pull Listener and therefore should not be used in environments with more than one cluster.

Overview

The Azure Event Hubs Listener lets you receive messages from an Azure Event Hub for real-time data streaming, providing support for message batching, retries, and secure connection options.

Select Azure Event Hubs from the list of Listener types and click Configuration to start.

Configuration

Now you need to specify how and where to collect the data, and how to establish a connection with Microsoft Azure Event Hubs.

Metadata

Enter the basic information for the new Listener.

Parameter
Description

Name*

Enter a name for the new Listener.

Description

Optionally, enter a description for the Listener.

Tags

Add tags to easily identify your Listener. Hit the Enter key after you define each tag.

Configuration

Now, add the configuration to establish the connection.

Parameter
Description

Connection params*

The URL for your Event Hub. To get it:

  1. Click your Event Hubs namespace to view the Hubs it contains.

  2. Scroll down to the bottom and click the specific event hub to connect to.

  3. In the left menu, go to Shared Access Policies.

  4. If there is no policy created for an event hub, create one with Manage, Send, or Listen access.

  5. Select the policy from the list.

  6. Select the copy button next to the Connection string-primary key field.

Depending on the version of Azure you are using, the corresponding field may have a different name, so to help you find it, look for a string with the following format:

Endpoint=sb://.servicebus.windows.net/; SharedAccessKeyName=RootManageSharedAccessKey; SharedAccessKey=

Click Create labels to move on to the next step and define the required Labels if needed.

Cog

Most recent version: v0.1.0

See the changelog of this Action type here.

Overview

The Cog Action evaluates any AI model built with the Cog library and deployed as an endpoint anywhere (SageMaker, HuggingFace, Replicate…) and adds new values.

This Action integrates with models deployed on any platform, including the client's infrastructure. It enables the utilization of state-of-the-art machine learning models and the execution of code specific to a client's unique use case.

In order to configure this action, you must first link it to a Listener. Go to Building a Pipeline to learn how this works.

Ports

These are the input and output ports of this Action:

Input ports
  • Default port - All the events to be processed by this Action enter through this port.

Output ports
  • Default port - Events are sent through this port if no error occurs while processing them.

  • Error port - Events are sent through this port if an error occurs while processing them.

Configuration

1

Find Cog in the Actions tab (under the AI group) and drag it onto the canvas.

2

To open the configuration, click the Action in the canvas and select Configuration.

3

Enter the required parameters:

Parameter
Description

Endpoint*

Enter the endpoint used to establish a connection to the model.

Token

If the model has a token for API connection, enter it here.

Version

Optionally, enter the model version here.

Input*

Enter the JSON with the input parameters required by the model.

Output*

Enter a name for the output model evaluation.

4

Click Save to complete.
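For context, Cog-style endpoints are typically invoked over HTTP with a JSON body whose input object carries the model parameters. The sketch below is a generic Python illustration using the requests library; the endpoint URL, token header, input fields, and /predictions path are assumptions for the example, not the Action's exact behavior:

import requests

ENDPOINT = "https://example.com/predictions"  # hypothetical Cog endpoint
TOKEN = "your-api-token"                      # hypothetical API token

# "input" carries whatever parameters the deployed model expects.
payload = {"input": {"prompt": "Classify the severity of this log line: ..."}}

response = requests.post(
    ENDPOINT,
    json=payload,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
print(response.json())  # the model evaluation the Action would store in the Output field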

Cards and Table Views

Viewing and modifying elements in the table.

Overview

In the Listeners, Pipelines, and Data sinks areas, you can view all the resources in your Tenant as cards or in a table.

In both views, you can:

  • Click the magnifying glass icon to look for specific elements in the list. You can search by name, status, or tag.

  • Display all the elements individually in a list or grouped by Status or Type. These grouping options vary depending on the area you are in.

Table View

In the Table view, you can click the cog icon to begin customizing the table settings. You can reorder the columns in the table, hide or display the required ones or pin them.

Changes will be automatically applied. Click the Reset button to recover the original configuration.

  • Use the buttons at the top right part of the table to expand or collapse each row in the table. This will change the level of detail of each element.

  • Click the ellipsis button on each row to edit the element, copy its ID, or remove it.

Cards View

In this view, each element is displayed as a card that shows details about it.

  • Click the ellipsis button on each card to edit the element, copy its ID, or remove it.

  • Click the Add tag button and add the required tags to an element. For each tag you enter in the box, hit the Enter key. Click Save to add the tags.

Google Cloud Storage

Most recent version: v1.0.1

See the changelog of this Listener type here.

This is a Pull Listener and therefore should not be used in environments with more than one cluster.

Overview

Source events from a Google Cloud Storage bucket using the HTTP protocol.

Select Google Cloud Storage from the list of Listener types and click Configuration to start.


Configuration

Now you need to specify how and where to collect the data, and establish a connection with your Google account.

Metadata

Enter the basic information for the new Listener.

Parameter
Description

Configuration

Now add the configuration to establish the connection.

Parameter
Description
Parameter
Description
Parameter
Description
Parameter
Description

Click Create labels to move on to the next step and define the required Labels if needed.

Data Types

Easily identify data types using the color legend

Since Onum can process any data type, you may be wondering how to identify which is which. See the color legend below:

Field type
Description
Example

BLIP-2

Most recent version: v0.1.0

See the changelog of this Action type here.

Note that this Action is only available in certain Tenants. Contact us if you don't see it and want to access it.

Overview

This Action integrates with the advanced AI model BLIP-2 (Bootstrapped Language-Image Pre-training). This multi-modal AI offers improved performance and versatility for tasks requiring simultaneous understanding of images and text.

Integrating BLIP-2 into Onum can transform how you interact with and derive value from data, particularly by leveraging the power of visual content and analysis.

In order to configure this action, you must first link it to a Listener. Go to Building a Pipeline to learn how to link.

Ports

These are the input and output ports of this Action:

Input ports
  • Default port - All the events to be processed by this Action enter through this port.

Output ports
  • Default port - Events are sent through this port if no error occurs while processing them.

  • Error port - Events are sent through this port if an error occurs while processing them.

Configuration

1

Find MLBLip2 in the Actions tab (under the Enrichment group) and drag it onto the canvas. Link it to the required Listener and Data sink.

2

To open the configuration, click the Action in the canvas and select Configuration.

3

Enter the required parameters:

Parameter
Description
4

Click Save to complete.

Sigma Rules

Most recent version: v0.0.1

See the changelog of this Action type here.

Note that this Action is only available in certain Tenants. Contact us if you don't see it and want to access it.

Overview

The Sigma Rules Action detects whether an event matches one or several Sigma rules. By evaluating Sigma rules inline on raw events, threats can be detected as logs are created.

This Action allows you to explicitly map these rule fields to the corresponding fields in your log schema.

In order to configure this Action, you must first link it to a Listener. Go to Building a Pipeline to learn how to link.

Ports

These are the input and output ports of this Action:

Input ports
  • Default port - All the events to be processed by this Action enter through this port.

Output ports
  • Default port - All the events processed by the Action without errors will exit through this output, regardless of the result of the evaluation against the Sigma rules activated in the Action.

  • Positive port - Events matched against at least one of the Action's Sigma rules. The events will come out through this port bearing a new field (specified by the user) containing the full information about the match(es).

  • Negative port - Events that did not match against any of the Action's Sigma rules.

  • Error port - Events are sent through this port if an error occurs while processing them.

Configuration

1

Find Sigma Rules in the Actions tab (under the Detection group) and drag it onto the canvas. Link it to the required Listener and Data sink.

2

To open the configuration, click the Action in the canvas and select Configuration.

3

Click Add rule to start configuring the required Sigma rules.

4

You'll see a list of all the available Sigma rules. Choose the one that you need to match your events against.

5

Configure the required rule fields and click Add Rule.

6

You'll see the rule in the Action configuration window. Activate it by switching on the toggle button next to it. Click Add rule if you need to add any other rules.

7

Finally, give a name to the field that will contain the detected threats.

8

Click Save to complete.

Redis

Most recent version: v1.1.0

See the changelog of this Action type here.

Overview

Redis is a powerful in-memory data structure store that can be used as a database, cache, and message broker. It provides high performance, scalability, and versatility, making it a popular choice for real-time applications and data processing.

The Redis Action allows users to set and retrieve data from a Redis server.

In order to configure this action, you must first link it to a Listener. Go to Building a Pipeline to learn how to link.

Ports

These are the input and output ports of this Action:

Input ports
  • Default port - All the events to be processed by this Action enter through this port.

Output ports
  • Default port - Events are sent through this port if no error occurs while processing them.

  • Error port - Events are sent through this port if an error occurs while processing them.

Installing Redis

To use this Action, you must install Redis and Redis CLI.

As installing Redis via Docker is generally preferable, we will brief you on this procedure. To install it locally, check this article.

1

Start your local Redis Docker instance:

2

Now, connect to the Redis container:

3

Use this command to get the IP:

4

Paste this IP in the Redis endpoint field of your Redis Action.

For more help and in-depth detail, see these use cases.

Configuration

1

Find Redis in the Actions tab (under the Advanced group) and drag it onto the canvas. Link it to the required Listener and Data sink.

2

To open the configuration, click the Action in the canvas and select Configuration.

3

Enter the required parameters:

4

Click Save to complete.
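To give an idea of the kind of state this Action can keep, here is an illustrative Python sketch (using the redis-py client) of the classic "limit login attempts per user per hour" pattern; the key naming, limit, and window are made up for the example and are not part of the Action itself:

import redis

r = redis.Redis(host="localhost", port=6379)  # hypothetical Redis endpoint

def allow_login(user: str, limit: int = 5) -> bool:
    key = f"login_attempts:{user}"
    attempts = r.incr(key)       # increment the per-user counter
    if attempts == 1:
        r.expire(key, 3600)      # start a one-hour window on the first attempt
    return attempts <= limit

print(allow_login("alice"))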

Float to String

Description

This operation transforms a float into a string using a Go format string. The provided format is passed to fmt.Sprintf to control how the number is rendered. Common examples include:

  • %.1f → one decimal place (1 becomes "1.0")

  • %.2f → two decimal places (1 becomes "1.00")

  • %e → scientific notation (1 becomes "1.000000e+00")

See the fmt package documentation for a complete list of formatting verbs and flags.


Data types

These are the input/output expected data types for this operation:

Input data

- Input float values.

Output data

- Resulting string after being converted.


Parameters

These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):

Format*

Specify the required verb/flag. See the fmt package documentation for a complete list of formatting verbs and flags.


Example

Suppose you want to convert a series of float values into strings following a specific format:

  1. In your Pipeline, open the required Action configuration and select the input Field.

  2. In the Operation field, choose Float to String.

  3. Set Format to %.1f

  4. Give your Output field a name and click Save. The values in your input field will be transformed into strings according to the given format. For example:

5.218 -> 5.2

You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.


Name*

Enter a name for the new Listener.

Description

Optionally, enter a description for the Listener.

Tags

Add tags to easily identify your Listener. Hit the Enter key after you define each tag.

Credentials File*

The Google Cloud connector uses OAuth 2.0 credentials for authentication and authorization. Create a secret containing these credentials or select one already created.

  • To find the Google Cloud credentials file, go to Settings>Interoperability.

  • Scroll down to the Service Account area.

  • You need to generate and download a service account key from the Google Cloud Console. You will not be able to view this key, so you must have it copied somewhere already. Otherwise, create one here and save it to paste here.

  • To see existing Service Accounts, go to the menu in the top left and select APIs & Services>Credentials.

Delimiter Char Codes

Assign an optional delimiter to simulate a hierarchical directory structure within a flat namespace.

Read Bucket Once

Select true and enter a prefix and start time in the fields that appear.

  • Prefix: The optional string that acts like a folder path or directory structure when organizing objects within a bucket.

Project ID*

This is a unique string with the following format my-project-123456

  • Go to the Google Cloud Console.

  • In the top left corner, click on the project drop-down next to the Google Cloud logo (where your current project name is shown).

  • Each project will have a Project Name and a Project ID.

  • You can also find it in the Settings tab on the left-hand side.

Subscription*

Follow these steps to get the subscription name:

  1. Go to Pub/Sub in the Google Cloud Console.

  2. In the top left corner, click on the menu and select View all Products.

  3. Then go to Analytics and find Pub/Sub and click it to go to Pub/Sub (you can also use the search bar and type "Pub/Sub").

  4. In the Pub/Sub dashboard, select the Subscriptions tab on the left.

  5. The Subscription Name will be displayed in this list.


A sequence of characters that is used primarily for textual data representation.

hello world!

A list of string values separated by commas.

hello, my, name, is, John

Used to represent whole numbers without any fractional or decimal component. Integers can be positive, negative, or zero.

25

A list of integer values separated by commas.

1, 2, 3, 4

Sequence of characters or encoded information that identifies the precise time at which an event occurred.

2024-05-17T14:30:00Z

A list of timestamps separated by commas.

2024-05-17T14:30:00Z, 2022-10-19T14:30:04Z, 1998-04-10T14:49:00Z

Used to represent real numbers with fractional parts, allowing for the representation of a wide range of values, including decimals.

1.2

A list of float values separated by commas.

0.1, -1.0, 2.0

A fundamental data type in computer programming that represents one of two possible values: true or false.

true

A list of boolean values separated by commas.

true, false, true

Characters that separate individual fields or columns of data. The delimiter ensures that each piece of data within a row is correctly identified and separated from the others.

/

In a JSON, fields are represented by keys within objects, and the corresponding values can be of any JSON data type. This flexibility allows a JSON to represent structured data in a concise and readable manner, making it suitable for various applications, especially in web development and API communication.

{"items": [{ "id": 1, "name": "Apple" }, { "id": 2, "name": "Banana" }, { "id": 3, "name": "Cherry" }]}

A simple and widely used file format for storing tabular data, such as a spreadsheet or database. In a CSV file, each line of the file represents a single row of data, and fields within each row are separated by a delimiter, usually a comma.

id,name,price 1,Apple,0.99 2,Banana,0.59 3,Cherry,1.29

A key-value pair is a data structure commonly used in various contexts, including dictionaries, hash tables, and associative arrays. It consists of two components: a key and its corresponding value.

{ "name": "Alice", "age": 30, "city": "Paris" }

A literal data type, often referred to simply as a literal, represents a fixed value written directly into the source code of a program.

Token*

The API token of the model you wish to use.

URL*

Specify the incoming field that contains the URL value.

Context

Add an optional description for your event.

Question

This is the question you wish to ask the AI model.

Temperature

This is the randomness of the responses. If the temperature is low, the data sampled will be more specific and condensed, whereas setting a high temperature will acquire more diverse but less precise answers.

Output

Specify a name for the output event.

FROM redis:latest

EXPOSE 6379

CMD ["redis-server"]

## build

docker build -t my-redis-image

## run

docker run -d --name my-redis my-redis-image
docker run -d --name my-redis -p 6379:6379 redis/redis-stack-server:latest
docker exec -it {{ContainerID}} sh

> redis-cli
# Read Value

127.0.0.1:6379> GET key

# Set Value

SET key value

Bring Your Own Code

Most recent version: v0.0.1

See the changelog of this Action type here.

Note that this Action is only available in certain Tenants. Contact us if you don't see it and want to access it.

Overview

The Bring Your Own Code Action enables dynamic execution of user-provided Python code in isolated environments in an Onum pipeline. This way, you can use your own Python code to enrich or reduce your events directly.

In order to configure this Action, you must first link it to a Listener or another Action. Go to Building a Pipeline to learn how this works.

Ports

These are the input and output ports of this Action:

Input ports
  • Default port - All the events to be processed by this Action enter through this port.

Output ports
  • Default port - Events are sent through this port if no error occurs while processing them.

  • Error port - Events are sent through this port if an error occurs while processing them.

Configuration

1

Find Bring Your Own Code in the Actions tab (under the Advanced group) and drag it onto the canvas. Link it to the required Listener and Data sink.

2

To open the configuration, click the Action in the canvas and select Configuration.

3

Enter the required parameters:

Configuration

To indicate where you want to execute your code, you must either choose a Docker client instance or enter its corresponding IP/port in the configuration options below.

Parameter
Description

Docker client

Choose one of the available Docker instances to execute your code.

IP

Enter the instance IP to execute your code.

Port

Enter the instance port to execute your code.

Timeout connection

Enter the milliseconds to wait for the Docker connection.

Buffer size

Size in bytes to batch events.

Code

In future updates of this Action, you'll be able to upload your code as a .zip file. This option is currently not available.

Paste your Python file in this area. You can include any required Dependencies in the corresponding tab.

AI Assistant

You can use the AI Assistant to generate the Python code you require. Simply click the icon at the bottom of the configuration menu and enter the prompt that indicates the results that you need.

Learn more about our AI Assistant in this article.

4

Finally, give your Output Field a name. Click Add field if you need to add any additional fields.

5

Click Save to complete.
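For reference, the snippet below shows the kind of self-contained Python you might paste into the Code area, for example deriving a new field from an existing one. The entry-point contract (how the event is passed in and returned) is not documented here, so treat the function signature as a hypothetical illustration:

import json

def process(event: dict) -> dict:
    # Hypothetical entry point: enrich the event with a simple derived field.
    message = event.get("message", "")
    event["contains_error"] = "error" in message.lower()
    return event

print(json.dumps(process({"message": "Disk ERROR on /dev/sda"})))
# {"message": "Disk ERROR on /dev/sda", "contains_error": true}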

Accumulator

Most recent version: v0.0.1

See the changelog of this Action type here.

Note that this Action is only available in certain Tenants. Contact us if you don't see it and want to access it.

Overview

The Accumulator Action accumulates events before sending them on.

In order to configure this action, you must first link it to a Listener. Go to Building a Pipeline to learn how to link.

AI Action Assistant

This Action has an AI-powered chat feature that can help you configure its parameters. Read more about it in this article.

Ports

These are the input and output ports of this Action:

Input ports
  • Default port - All the events to be processed by this Action enter through this port.

Output ports
  • Default port - Events are sent through this port if no error occurs while processing them.

  • Error port - Events are sent through this port if an error occurs while processing them.

Configuration

1

Find Accumulator in the Actions tab (under the Aggregation group) and drag it onto the canvas. Link it to the required Listener and Data sink.

2

To open the configuration, click the Action in the canvas and select Configuration.

3

Enter the required parameters:

Parameter
Description

Fields list

Choose the input event fields you would like to accumulate. The values of the selected fields will be included in a new output field according to the rules set in the following parameters.

You can select as many fields as you need using the Add element button.

Accumulate type*

Choose how to accumulate the events:

  • By period - if you select by period, define the number of seconds to accumulate for.

  • By number of events - If you select this option, define how many you want to include in the Number of events parameter.

Accumulate period

The minimum value is 1.

Number of events

Enter the number of events you want to accumulate in the output field. The values of the selected fields will be included in the field as many times as you indicate here. The minimum value is 1.

Output*

Enter a name for the output field that will store the accumulated events.

4

Click Save to complete.

Example

Let's say we want to accumulate the values of a couple of fields (port and method) in a new one.

1

Add the Accumulator Action to your Pipeline and link it to your required Listener.

2

Now, double-click the Accumulator Action to configure it. You need to set the following config:

Parameter
Description

Fields list

We add the fields whose values we want to accumulate: port and method.

Accumulate type

Choose By number of events.

Number of events

We want to add a couple of results to our field so we enter 2.

Output

This is the name of the new field that will store the accumulated event. We'll call it accValues.

3

Now link the Default output port of the Action to the input port of your Data sink.

4

Finally, click Publish and choose which clusters you want to publish the Pipeline in.

5

Click Test pipeline at the top of the area and choose a specific number of events to test if your data is transformed properly. Click Debug to proceed.

This is how the new field will return for these events:

port -> 12500 / method -> GET
accValues -> 12500, GET, 12500, GET
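Conceptually, and consistent with the example above, the Action buffers the selected field values until the configured number of events (or period) is reached and then emits them as one concatenated output value. A rough Python sketch of the by-number-of-events case (illustrative only, not the Action's implementation):

def accumulate(events, fields, count):
    buffer, seen = [], 0
    for event in events:
        # Collect the selected field values from each incoming event.
        buffer.extend(str(event[f]) for f in fields)
        seen += 1
        if seen == count:
            # Emit one accumulated value and start a new batch.
            yield ", ".join(buffer)
            buffer, seen = [], 0

events = [{"port": 12500, "method": "GET"}, {"port": 12500, "method": "GET"}]
print(list(accumulate(events, ["port", "method"], count=2)))
# ['12500, GET, 12500, GET']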

Anonymizer

Most recent version: v0.0.1

See the changelog of this Action type here.

Overview

The Anonymizer Action modifies sensitive data to remove or mask personally identifiable information, ensuring privacy.

In order to configure this action, you must first link it to a Listener or another Action. Go to Building a Pipeline to learn how this works.

Ports

These are the input and output ports of this Action:

Input ports
  • Default port - All the events to be processed by this Action enter through this port.

Output ports
  • Default port - Events are sent through this port if no error occurs while processing them.

  • Error port - Events are sent through this port if an error occurs while processing them.

Configuration

1

Find Anonymizer in the Actions tab (under the Advanced group) and drag it onto the canvas. Link it to the required Listener and Data sink.

2

To open the configuration, click the Action in the canvas and select Configuration.

3

Enter the required parameters:

Parameter
Description

Field to anonymize*

Select an input event field to anonymize.

Anonymize Operation*

  • Hash Anonymizer - Choose this operation to hash any type of data and make it anonymous.

  • IP Anonymizer - Choose this operation if you want to encrypt IP addresses. Note that the input IP addresses must be in IPv4 format.

Salt*

A random value added to the data before it is hashed, typically used to enhance security. Note that the salt length must be 32 characters. Learn more about salt in cryptography here.

4

Click Save to complete.
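To make the Salt parameter more concrete, here is an illustrative Python sketch of salted hashing. The specific algorithm shown (SHA-256) is an assumption for the example; the Action's exact hashing scheme is not documented here:

import hashlib

def hash_anonymize(value: str, salt: str) -> str:
    # The salt is mixed into the value before hashing, so identical inputs
    # cannot be matched against precomputed hashes. (SHA-256 is assumed here.)
    return hashlib.sha256((salt + value).encode("utf-8")).hexdigest()

print(hash_anonymize("192.168.0.10", "D;%yL9TS:5PalS/du874jsb3@o09'?j5"))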

Example

Let's say we have a list of IPs we wish to anonymize in one of our events fields. To do it:

1

Add the Anonymizer Action to your Pipeline and link it to your required Listener.

2

Now, double-click the Anonymizer Action to configure it. You need to set the following config:

Operation
Parameters

Field to anonymize*

We choose the required field with the IPs to be anonymized.

Anonymize Operation*

We need the IP Anonymizer operation.

Salt*

We're adding the following salt value to make decryption more difficult: D;%yL9TS:5PalS/du874jsb3@o09'?j5

3

Now link the Default output port of the Action to the input port of your Data sink.

4

Finally, click Publish and choose which clusters you want to publish the Pipeline in.

5

Click Test pipeline at the top of the area and choose a specific number of events to test if your data is transformed properly. Click Debug to proceed.

This is how your data will be transformed:

Input data
Output data

Incident Management - Incidents

Overview

Get a list of incidents filtered by a list of incident IDs, modification time, or creation time. This includes all incident types and severities, including correlation-generated incidents.

  • The response is concatenated using AND condition (OR is not supported).

  • The maximum result set size is 100.

  • Offset is the zero-based number of incidents from the start of the result set.

Configuration

Parameters

Secrets

After entering the required parameters and secrets, you can choose to manually enter the Cortex incident Management fields, or simply paste the given YAML:

Toggle this ON to enable a free text field where you can paste your Cortex XDR multi alerts YAML.

Temporal Window

Toggle ON to add a temporal window for events. This repeatedly shifts the time window over which data is collected.

Authentication Phase

Off

Enumeration Phase

Off

Collection Phase

  • Pagination Type* - fromTo

  • Zero index* - false

  • Limit* - 100

  • Request

    • Response Type* - JSON

    • Method* - POST

    • URL* - https://${parameters.CortexXdrDomain}/public_api/v1/alerts/get_alerts

    • Headers

      • Name - Accept

      • Value - application/json

      • Name - Content-Type

      • Value - application/json

      • Name - Authorization

      • Value - ${secrets.CortexXdrAuthorization}

      • Name - x-xdr-auth-id

      • Value - ${secrets.CortexXdrAuthId}

    • Body type* - raw

    • Body content* - { "request_data": { "search_from": ${pagination.from}, "search_to": ${pagination.to}, "filters": [ { "field": "creation_time", "operator": "gte", "value": ${temporalWindow.from}000 }, { "field": "creation_time", "operator": "lte", "value": ${temporalWindow.to}000 } ] } }

  • Output

    • Select - .reply.alerts

    • Map - .

    • Output Mode - element

This HTTP Pull Listener now uses the data export API to extract events.

Click Create labels to move on to the next step and define the required Labels if needed.

For Each

Most recent version: v0.0.2

See the changelog of this Action type here.

Overview

The For Each action divides a list field with different entries into different output events, along with the position they occupy in the list (the first position being 0).

For example, an input list containing [a,b,c] will generate three outputs, with these fields added to the event:

  • elementValueOutField: a; elementIndexOutField: 0

  • elementValueOutField: b; elementIndexOutField: 1

  • elementValueOutField: c; elementIndexOutField: 2

In order to configure this action, you must first link it to a Listener. Go to Building a Pipeline to learn how this works.

Ports

These are the input and output ports of this Action:

Input ports
  • Default port - All the events to be processed by this Action enter through this port.

Output ports
  • Default port - Events are sent through this port if no error occurs while processing them.

  • Error port - Events are sent through this port if an error occurs while processing them.

Configuration

1

Find For Each in the Actions tab (under the Advanced group) and drag it onto the canvas.

2

To open the configuration, click the Action in the canvas and select Configuration.

3

Enter the required parameters:

Parameter
Description
4

Click Save to complete the process.

Example

Imagine you receive a list-type field containing a string of five IPs:

127.0.0.1,127.0.0.2,127.0.0.3,127.0.0.4,192.168.0.1

1

Add the For Each Action to your Pipeline and link it to your required Data sink.

2

Now, double-click the For Each Action to configure it. You need to set the following config:

Operation
Parameters
3

Click Save to apply the configuration.

4

Now link the Default output port of the Action to the input port of your Data sink.

5

Finally, click Publish and choose in which clusters you want to publish the Pipeline.

6

Click Test pipeline at the top of the area and choose a specific number of events to test if your data is transformed properly. Click Debug to proceed.

The Action will create a separate event for each element of the string, each event containing two fields (value and index).
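
For example, with the output field named ipValue and the index field named ipIndex (as configured in this example), the resulting events would contain:

  • ipValue: 127.0.0.1; ipIndex: 0

  • ipValue: 127.0.0.2; ipIndex: 1

  • ipValue: 127.0.0.3; ipIndex: 2

  • ipValue: 127.0.0.4; ipIndex: 3

  • ipValue: 192.168.0.1; ipIndex: 4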

Escape String

Description

This operation is used to encode or "escape" characters in a string so that they can be safely used in different contexts, such as URLs, JSON, HTML, or code. This operation is helpful when you need to format text with special characters in a way that won’t break syntax or cause unintended effects in various data formats.


Data types

These are the input/output expected data types for this operation:

Input data

- Strings with the characters you want to escape.

Output data

- Strings with the required escaped characters.


Parameters

These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):

Escape Level*

Choose how to control the extent of character escaping applied to the input. Enter one of the following:

  • Everything - Escapes all characters that could potentially have any impact in the selected format, including less common or optional characters.

  • Special chars - Escapes only a specific set of characters that have special meanings in the chosen format.

    1. JSON – Special Characters Escape Level

    In addition to the minimal set (", \, control characters), it may include:

    • Unicode characters (e.g., \u2028, \u2029)

    • Any non-ASCII characters: é, ©, etc., escaped as \u00E9, \u00A9

    Example:

    2. HTML – Special Characters Escape Level

    In addition to minimal (<, >, &, ", '), it may escape:

    • All punctuation: !, #, $, %, *, =, +, ?, @

    • Spaces: as &#32; or &nbsp;

    • Non-breaking hyphens, dashes, etc.

    Example:

    3. XML – Special Characters Escape Level

    Similar to HTML, with strong emphasis on:

    • All reserved XML chars: &, <, >, ', "

    • Also can escape all non-ASCII and control characters (e.g., &#169; for ©)

    4. URI / URL Encoding

    • Escapes all characters except unreserved (A-Z, a-z, 0-9, -, _, ., ~)

    • Escaped using %HH notation (hex value)

    Example:

  • Minimal - Escapes only the characters that are strictly necessary to make the string safe for the specific format.

    1. JSON (Minimal Escape)

    • " (double quote) because it delimits strings

    • \ (backslash) because it’s used as an escape character

    • Control characters: \b, \f, \n, \r, \t

    Example:

    2. HTML (Minimal Escape)

    • < – to prevent tag injection

    • > – to prevent broken tags

    • & – to prevent character entity confusion

    • " – if used inside attribute values wrapped in double quotes

    • ' – if used inside attribute values wrapped in single quotes

    Example:

    3. XML (Minimal Escape)

    • <, >, &, ', " – for both content and attribute values

Escape Quote*

This parameter lets you specify how to handle quote characters (" and ') within the input text. Enter one of the following:

  • Single - Escapes only single quotes (') in the input.

  • Double - Escapes only double quotes (") in the input.

  • Backtick - Escapes only backticks (`) in the input.

JSON compatible

Set this parameter to true if you need to use the output string in a JSON. This ensures that any characters with special meanings in JSON are escaped properly, allowing the resulting string to be safely embedded in JSON objects or arrays.


Example

Suppose you want to escape the double quote characters (") in a series of input strings. To do it:

  1. In your Pipeline, open the required configuration and select the input Field.

  2. In the Operation field, choose Escape String.

  3. Set Escape Level to Special chars.

  4. Set Escape Quote to Double.

  5. Set JSON compatible to false.

  6. Give your Output field a name and click Save. Matching characters will be escaped. For example:

You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.

Tick

Most recent version: v0.0.1

See the changelog of this Listener type .

Note that this Listener is only available in certain Tenants. Get in touch with us if you don't see it and want to access it.

Overview

The Tick listener allows you to emit events on a defined schedule.

Select Tick from the list of Listener types and click Configuration to start.


Configuration

Now you need to specify the schedule for emitting events and what those events will contain.

Metadata

Enter the basic information for the new Listener.

Parameter
Description

Configuration

Ticks

Add as many tick events as required to emit.

Parameter
Description

Click Create labels to move on to the next step and define the required Labels if needed.

Syslog

Most recent version: v1.1.1

See the changelog of this Listener type .

Overview

Onum receives data from Syslog, supporting TCP and UDP protocols.

Select Syslog from the list of Listener types and click Configuration to start.


Configuration

Now you need to specify how and where to collect the data, and how to establish a connection with Syslog.

Metadata

Enter the basic information for the new Listener.

Parameter
Description

Configuration

Note that you won't see the Port and Protocol settings in the creation form if you're defining this Listener in a Cloud instance, as these are already provided by Onum.

Parameter
Description

TLS configuration

  • Note that the parameters in this section are only mandatory if you decide to include TLS authentication in this Listener. Otherwise, leave it blank.

  • Note that you won't see this section in the creation form if you're defining this Listener in a Cloud instance, as these are already provided by Onum. Learn more about Cloud Listeners in this article.

Parameter
Description

Click Create labels to move on to the next step and define the required Labels if needed.

TCP

Most recent version: v0.1.1

See the changelog of this Listener type .

Overview

Onum supports integration with Transmission Control Protocol.

Select TCP from the list of Listener types and click Configuration to start.


Configuration

Now you need to specify how and where to collect the data, and how to establish a connection with TCP.

Metadata

Enter the basic information for the new Listener.

Parameter
Description

Configuration

Note that you won't see the Port setting in the creation form if you're defining this Listener in a Cloud instance, as these are already provided by Onum. Learn more about Cloud Listeners in this article.

Parameter
Description

TLS configuration

  • Note the parameters in this section are only mandatory if you decide to include TLS authentication in this Listener. Otherwise, leave it blank.

  • Note that you won't see this section in the creation form if you're defining this Listener in a Cloud instance, as these are already provided by Onum. Learn more about Cloud Listeners in this article.

Parameter
Description

Click Create labels to move on to the next step and define the required Labels if needed.

Amazon GenAI

See the changelog of this Action type .

Note that this Action is only available in Tenants with access to Amazon Bedrock. Get in touch with us if you don't see it and want to access it.

Overview

The Amazon GenAI Action allows users to enrich events by generating structured outputs using models hosted on Amazon Bedrock, such as Claude, Titan, or Jurassic.

In order to configure this Action, you must first link it to a Listener. Go to Building a Pipeline to learn how this works.

Ports

These are the input and output ports of this Action:

Input ports
  • Default port - All the events to be processed by this Action enter through this port.

Output ports
  • Default port - Events are sent through this port if no error occurs while processing them.

  • Error port - Events are sent through this port if an error occurs while processing them.

Configuration

1

Find Amazon GenAI in the Actions tab (under the AI group) and drag it onto the canvas.

2

To open the configuration, click the Action in the canvas and select Configuration.

3

Enter the required parameters:

Parameter
Description
4

Click Save to complete.

Use conditional logic upstream to prevent sending unstructured or non-informative prompts to the model, helping to optimize costs and relevance.

Example

Read our use case to learn how to use this Action in a real cybersecurity scenario.

Sampling

Most recent version: v0.0.1

See the changelog of this Action type .

Overview

The Sampling Action allows only a specific number of allowed events out of every set of n events to go through it.

In order to configure this Action, you must first link it to a Listener. Go to Building a Pipeline to learn how to link.

Ports

These are the input and output ports of this Action:

Input ports
  • Default port - All the events to be processed by this Action enter through this port.

Output ports
  • Default port - Events are sent through this port if no error occurs while processing them.

  • Error port - Events are sent through this port if an error occurs while processing them.

Configuration

1

Find Sampling in the Actions tab (under the Filtering group) and drag it onto the canvas. Link it to the required Listener and Data sink.

2

To open the configuration, click the Action in the canvas and select Configuration.

3

Enter the required parameters:

Parameter
Description
4

Click Save to complete.

Example

Let's say we want to filter the first event out of each set of 3 of our input events. To do it:

1

Add the Sampling Action to your Pipeline and link it to your required Listener.

2

Now, double-click the Sampling Action to configure it. You need to set the following config:

Parameter
Description
3

Now link the Default output port of the Action to the input port of your Data sink.

4

Finally, click Publish and choose which clusters you want to publish the Pipeline in.

5

Click Test pipeline at the top of the area and choose a specific number of events to test if your data is transformed properly. Click Debug to proceed.

From this set of events:

We will get the following (the first event for each set of 3 events):

withTemporalWindow: true
temporalWindow:
  duration: 5m
  offset: 5m
  tz: UTC
  format: Epoch
withAuthentication: false
withEnumerationPhase: false
collectionPhase:
  paginationType: "fromTo"
  limit: 100
  request:
    method: "POST"
    url: "https://${parameters.CortexXdrDomain}/public_api/v1/incidents/get_incidents"
    headers:
      - name: Accept
        value: "application/json"
      - name: Content-Type
        value: "application/json"
      - name: Authorization
        value: "${secrets.CortexXdrAuthorization}"
      - name: x-xdr-auth-id
        value: ${secrets.CortexXdrAuthId}
    bodyType: raw
    bodyRaw: |
      {
        "request_data": {
          "search_from": ${pagination.from},
          "search_to": ${pagination.to},
          "filters": [
            {
              "field": "creation_time",
              "operator": "gte",
              "value": ${temporalWindow.from}000
            },
            {
              "field": "creation_time",
              "operator": "lte",
              "value": ${temporalWindow.to}000
            }
          ]
        }
      }
  output:
    select: ".reply.incidents"
    map: "."
    outputMode: "element"

Input*

Choose the field that contains the list you want to divide.

Output field*

Name of the new field where each iterated element will be stored. This will be the same type as the input field list.

Index field*

Name of the new field that will show the position of each element in the list.

Input

In this case, we choose the field ipsList that contains our IP list.

Output field

We're naming the new field ipValue.

Index field

We're naming the new field ipIndex.


Allowed Events*

The number of events to let through. The first n events of each set of total events will pass through the Action.

Total Events*

The size of each set of incoming events. Once the allowed events have been let through, a new set of this many events begins.

Allowed Events

Enter 1.

Total Events

Enter 3.
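
With this configuration, the Action lets through the first event of every set of three incoming events (events 1, 4, 7, and so on), and the remaining events do not pass through.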

{
  {
    "IP_Address": "192.168.1.1",
    "GET_Count": 2,
    "POST_Count": 1
  },
  {
    "IP_Address": "192.168.1.2",
    "GET_Count": 5,
    "POST_Count": 7
  },
  {
    "IP_Address": "192.168.1.3",
    "GET_Count": 9,
    "POST_Count": 2
  },
  {
    "IP_Address": "192.168.1.4",
    "GET_Count": 29,
    "POST_Count": 6
  },
  {
    "IP_Address": "192.168.1.5",
    "GET_Count": 98,
    "POST_Count": 6
  },
  {
    "IP_Address": "192.168.1.6",
    "GET_Count": 12,
    "POST_Count": 16
  },
  ...
}
{
  {
    "IP_Address": "192.168.1.1",
    "GET_Count": 2,
    "POST_Count": 1
  },
  {
    "IP_Address": "192.168.1.4",
    "GET_Count": 29,
    "POST_Count": 6
  },
  ...
}
She said, "Hello, world!" -> She said, \"Hello, world!\"
Hello World! → Hello%20World%21
jsonCopyEdit"message": "Caf\u00e9 \u00a9 2025"
jsonCopyEdit{ "message": "He said: \"Hello!\"" }

Name*

Enter a name for the new Listener.

Description

Optionally, enter a description for the Listener.

Tags

Add tags to easily identify your Listener. Hit the Enter key after you define each tag.

Schedule Type Interval*

Select the scheduling type to use for the intervals between sending events.

Interval value*

Enter the number of interval units to wait between emissions.

Interval Unit*

Enter what the number corresponds to: seconds, minutes, hours.

Number of events*

Enter how many events to emit.

Event body*

Enter what the event will contain, e.g. the fields.
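
For example, a simple heartbeat-style configuration (hypothetical values shown only as an illustration) would emit one event with the given body every 30 seconds:

Interval value: 30
Interval Unit: seconds
Number of events: 1
Event body: {"type": "tick", "source": "heartbeat"}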


Name*

Enter a name for the new Listener.

Description

Optionally, enter a description for the Listener.

Tags

Add tags to easily identify your Listener. Hit the Enter key after you define each tag.

Port*

Enter the IP port number. While UDP 514 is the standard, some implementations may use TCP 514 or other ports, depending on specific configurations or security requirements. To determine the syslog port value, check the configuration settings of your syslog server or consult the documentation for your specific device or application.

Protocol*

Onum supports TCP and UDP protocols.

Framing Method*

The Framing Method refers to how characters are handled in log messages sent via the Syslog protocol. Choose between:

  • Auto-Detect - automatically detect the framing method using the information provided.

  • Non-Transparent Framing (newline) - the newline characters (\n) within a log message are preserved as part of the message content and are not treated as delimiters or boundaries between separate messages.

  • Non-Transparent Framing (zero) - refers to the way zero-byte characters are handled. Any null byte (\0) characters that appear within the message body are preserved as part of the message and are not treated as delimiters or boundaries between separate messages.

  • Octet Counting (message length) - the Syslog message is preceded by a count of the length of the message in octets (bytes).
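
As a hypothetical illustration, the same message framed with each method would look like the following, where \n denotes a newline character and 40 is the byte length of the message that follows it:

Non-Transparent Framing (newline): <13>Jan 1 00:00:00 host app: message one\n
Octet Counting (message length): 40 <13>Jan 1 00:00:00 host app: message one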

Certificate*

This is the predefined TLS certificate.

Private key for this listener*

The private key of the corresponding certificate.

CA chain

The path containing the CA certificates.

Client authentication method*

Choose between No, Request, Require, Verify, and Require & Verify.

Minimum TLS version*

Select the required version from the menu.


Name*

Enter a name for the new Listener.

Description

Optionally, enter a description for the Listener.

Tags

Add tags to easily identify your Listener. Hit the Enter key after you define each tag.

Port*

Enter the IP port number.

Trailer Character*

In this context, the trailer is the character that marks the end of each message in the incoming TCP stream, telling the Listener where one event ends and the next begins. Choose between:

  • LF - Line Feed character is a control character used to signify the end of a line of text or the start of a new line.

  • CR+LF - Carriage Return (CR) followed by a Line Feed (LF) character pair, which is commonly used to signify the end of a line in text-based communication.

  • NULL - a null byte (\0) character used to mark the end of each message.
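
For instance, with LF selected, a hypothetical incoming stream such as event one\nevent two\n is split into two separate events: event one and event two.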

Certificate*

This is the predefined TLS certificate.

Private Key*

The private key of the corresponding certificate.

CA chain

The path containing the CA certificates.

Client Authentication Method*

Choose between No, Request, Require, Verify, and Require & Verify.

Minimum TLS version*

Select a version from the menu.


Region*

Choose the AWS region (e.g., eu-central-1). Your region is displayed in the top right-hand corner of your AWS console.

Model*

Enter your Model ID or Model Inference Profile (ARN), e.g. arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-v2

  • Go to the Amazon Bedrock console.

  • Go to Model Access in the left sidebar.

  • You’ll see a list of available foundation models (FMs) like Anthropic Claude, AI21, Amazon Titan, Meta Llama, etc.

  • Click on a model to view its Model ID (e.g., anthropic.claude-v2) and ARN (e.g., arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-v2).

System Instructions

Optional instructions to influence the behavior of the model (e.g., "You are a security analyst...").

Prompt Field*

Select the field in the event containing the prompt to send to the model. Must be string. This field will be sent as-is to the model.

Amazon Bedrock models support both English and multilingual prompts, depending on the model selected.

Temperature

Adjusts randomness of outputs: greater than 1 is random, 0 is deterministic, and 0.75 is a good starting value. Default value is 0.1

Max Tokens

Maximum number of tokens to generate. A word is generally 2-3 tokens. The default value is 128 (min 1, max 8892).

Top P

Top P sets a probability threshold to limit the pool of possible next words. Whereas temperature controls how random the selection is, top_p controls how many options are considered. Range: 0–1. Default is 1.0.

JSON credentials*

Provide the secret JSON credentials used to authenticate against Amazon Bedrock.

Output Field*

Give a name to the output field that will return the evaluation.
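
As a minimal sketch, a configuration that summarizes alerts might look like this (the field names message and genai_summary are hypothetical, the ARN is the example shown above, and the numeric values are the defaults described above):

Region: us-east-1
Model: arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-v2
System Instructions: You are a security analyst. Summarize the alert in one sentence.
Prompt Field: message
Temperature: 0.1
Max Tokens: 128
Top P: 1.0
Output Field: genai_summary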


OpenTelemetry

Most recent version: v0.0.1

See the changelog of this Listener type .

Overview

Onum supports integration with the OpenTelemetry protocol. Select OpenTelemetry from the list of Listener types and click Configuration to start.

Configuration

Now you need to specify how and where to collect the data, and how to establish a connection with the OpenTelemetry protocol.

Metadata

Enter the basic information for the new Listener.

Parameter
Description

Name*

Enter a name for the new Listener.

Description

Optionally, enter a description for the Listener.

Tags

Add tags to easily identify your Listener. Hit the Enter key after you define each tag.

Configuration

Configure your OTLP/gRPC or OTLP/HTTP endpoint. Set Allow gRPC protocol or Allow HTTP protocol to true to display the corresponding configuration options.

gRPC Configuration

Set Allow gRPC protocol as true if you want to configure OTLP/gRPC:

Parameter
Description

gRPC port*

The port to establish the connection with the protocol.

HTTP Configuration

Set Allow HTTP protocol as true if you want to configure OTLP/HTTP:

Parameter
Description

HTTP Port*

The port to establish the connection with the protocol.

Traces path

The traces path for the endpoint URL e.g. http://collector:port/v1/traces

Metrics path

The metrics path for the endpoint URL e.g. http://collector:port/v1/metrics

Logs path

The log path for the endpoint URL e.g. http://collector:port/v1/logs

Authentication Configuration

Choose your required authentication method in the Authentication Type parameter:

Parameter
Description

None

Choose this if you don't need any authentication method.

Basic

Enter your Username and Password for basic authentication.

Bearer Token

Enter your Token Name and choose the required Token for authentication.

TLS Configuration

Set Allow TLS configuration as true if you decide to include TLS authentication in this Listener:

Parameter
Description

Certificate*

The SSL certificate content.

Private Key*

The private key of the corresponding certificate.

CA Chain*

The path containing the CA certificates.

Minimum TLS Version*

Select the required version from the menu.

Click Create labels to move on to the next step and define the required Labels.

Cisco NetFlow

Most recent version: v0.1.0

See the changelog of this Listener type .

Note that this Listener type is not available in Cloud tenants. Learn more about Listeners in Cloud deployments in this article.

Overview

Onum supports integration with Cisco NetFlow. Select Cisco NetFlow from the list of Listener types and click Configuration to start.

Configuration

Now you need to specify how and where to collect the data, and how to establish a connection with Cisco NetFlow.

Metadata

Enter the basic information for the new Listener.

Parameter
Description

Name*

Enter a name for the new Listener.

Description

Optionally, enter a description for the Listener.

Tags

Add tags to easily identify your Listener. Hit the Enter key after you define each tag.

Configuration

Now, add the configuration to establish the connection.

Socket

Parameter
Description

Transport protocol*

Currently, Onum supports the UDP protocol.

Port*

Enter the required IP port number. By default, Cisco NetFlow typically uses UDP port 2055 for exporting flow data.

Flow

Parameter
Description

Protocols to process*

Select the required protocol(s) from the list.

  • NetFlow v5 is the most widely used version.

  • NetFlow v9 is more customizable than v5.

  • IPFIX is based on the IPFIX standard (IP Flow Information Export).

  • Sflowv5 is another flow monitoring protocol that is typically used in high-speed networks.

Fields to include*

Select all the fields you wish to include in the output data.

Access control

Parameter
Description

Access control type*

Selectively monitor traffic based on specific IPs:

  • None - allows all IPs.

  • Whitelist - allows certain IPs through.

  • Blacklist - blocks certain IPs from being captured or exported.

IPs

Enter the IPs you wish to apply the access control to. Click Add element to add as many as required.
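
For example, setting Access control type to Whitelist and adding the hypothetical addresses 192.168.1.10 and 192.168.1.20 means only flow data received from those two exporters is processed; with Blacklist, traffic from those two addresses would be the only traffic blocked.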

Click Create labels to move on to the next step and define the required Labels if needed.

Incident Management - Alerts

Overview

Get a list of all or filtered alerts. The alerts listed are what remains after alert exclusions are applied by Cortex XDR.

  • Response is concatenated using AND condition (OR is not supported).

  • Maximum result set size is 100.

  • Offset is the zero-based number of alerts from the start of the result set. The response indicates whether a PAN NGFW type alert contains a PCAP triggering packet.

Use the Retrieve PCAP Packet API to retrieve a list of alert IDs and their associated PCAP data. Required license: Cortex XDR Prevent, Cortex XDR Pro per Endpoint, or Cortex XDR Pro per GB.

Configuration

Parameters

Secrets

After entering the required parameters and secrets, you can choose to manually enter the Cortex Incident Management fields, or simply paste the given YAML:

Toggle this ON to enable a free text field where you can paste your Cortex XDR API YAML.

withTemporalWindow: true
temporalWindow:
  duration: 5m
  offset: 5m
  tz: UTC
  format: Epoch
withAuthentication: false
withEnumerationPhase: false
collectionPhase:
  paginationType: "fromTo"
  limit: 100
  request:
    method: "POST"
    url: "https://${parameters.CortexXdrDomain}/public_api/v1/alerts/get_alerts"
    headers:
      - name: Accept
        value: "application/json"
      - name: Content-Type
        value: "application/json"
      - name: Authorization
        value: "${secrets.CortexXdrAuthorization}"
      - name: x-xdr-auth-id
        value: ${secrets.CortexXdrAuthId}
    bodyType: raw
    bodyRaw: |
      {
        "request_data": {
          "search_from": ${pagination.from},
          "search_to": ${pagination.to},
          "filters": [
            {
              "field": "creation_time",
              "operator": "gte",
              "value": ${temporalWindow.from}
            },
            {
              "field": "creation_time",
              "operator": "lte",
              "value": ${temporalWindow.to}
            }
          ]
        }
      }
  output:
    select: ".reply.alerts"
    map: "."
    outputMode: "element"
        

Temporal Window

Toggle ON to add a temporal window for events. This repeatedly shifts the time window over which data is collected.

Authentication Phase

Off

Enumeration Phase

Off

Collection Phase

  • Pagination Type* - fromTo

  • Zero index* - false

  • Limit* - 100

  • Request

    • Response Type* - JSON

    • Method* - POST

    • URL* - https://${parameters.CortexXdrDomain}/public_api/v1/alerts/get_alerts

    • Headers

      • Name - Accept

      • Value - application/json

      • Name - Content-Type

      • Value - application/json

      • Name - Authorization

      • Value - ${secrets.CortexXdrAuthorization}

      • Name - x-xdr-auth-id

      • Value - ${secrets.CortexXdrAuthId}

    • Body type* - raw

    • Body content* - { "request_data": { "search_from": ${pagination.from}, "search_to": ${pagination.to}, "filters": [ { "field": "creation_time", "operator": "gte", "value": ${temporalWindow.from} }, { "field": "creation_time", "operator": "lte", "value": ${temporalWindow.to} } ] } }

  • Output

    • Select - .reply.alerts

    • Map - .

    • Output Mode - element

This HTTP Pull Listener now uses the data export API to extract events.

Click Create labels to move on to the next step and define the required Labels if needed.

Amazon S3

Most recent version: v1.0.0

See the changelog of this Listener type .

This is a Pull Listener and therefore should not be used in environments with more than one cluster.

Overview

Onum supports integration with Amazon S3. Select Amazon S3 from the list of Listener types and click Configuration to start.

Minimum requirements

Before configuring and starting to send data with the Amazon S3 Listener, you need to take into consideration the following requirements:

  • Your Amazon user needs at least permission to use the GetObject operation (S3) and the ReceiveMessage and DeleteMessageBatch operations (SQS Bucket) to make this Listener work.

  • You need to configure your Amazon S3 bucket to send notifications to an Amazon Simple Queue Service (SQS) queue when new files are added. Learn how to do it below:

Configure your Amazon S3 bucket to send notifications to an Amazon SQS when new files are added

1. Create an Amazon SQS Queue

  1. Sign in to the AWS Management Console and open the Amazon SQS console.

  2. Choose Create Queue and configure the queue settings as needed.

  3. After creating the queue, note its Amazon Resource Name (ARN), which follows this format: arn:aws:sqs:<region>:<account-id>:<queue-name>.

2. Modify the SQS Queue Policy to Allow S3 to Send Messages

  1. In the Amazon SQS console, select your queue.

  2. Navigate to the Access Policy tab and choose Edit.

  3. Replace the existing policy with the following, ensuring you update the placeholders with your specific details:

Save the changes. This policy grants your S3 bucket permission to send messages to your SQS queue.

3. Configure S3 Event Notifications

  1. Open the Amazon S3 console and select the bucket you want to configure.

  2. Go to the Properties tab and find the "Event notifications" section.

  3. Click on Create event notification.

  4. Provide a descriptive name for the event notification.

  5. In the Event types section, select All object create events or specify particular events that should trigger notifications.

  6. In the Destination section, choose SQS Queue and select the queue you configured earlier.

  7. Save the configuration.

4. Test the Configuration

  1. Upload a new file to your S3 bucket.

  2. Check your SQS queue to verify that a message has been received, indicating that the notification setup is functioning correctly.

Additional Considerations

  • Cross-Region Configurations: Ensure that your S3 bucket and SQS queue are in the same AWS Region, as S3 event notifications do not support cross-region targets.

  • Permissions: Confirm that the AWS Identity and Access Management (IAM) roles associated with your S3 bucket and SQS queue have the necessary permissions.

  • Object Key Name Filtering: If you use special characters in your prefix or suffix filters for event notifications, ensure they are URL-encoded.

Configuration

Now you need to specify how and where to collect the data, and how to establish a connection with AWS S3.

Metadata

Enter the basic information for the new Listener.

Parameter
Description

Configuration

Now, add the configuration to establish the connection.

Objects

Parameter
Description

Bucket

Parameter
Description

Bucket Advanced

Proceed with caution when modifying these advanced options. Default values should be enough in most cases.

Parameter
Description

Queue

Parameter
Description

Queue Advanced

Proceed with caution when modifying these advanced options. Default values should be enough in most cases.

Parameter
Description

General Advanced

Proceed with caution when modifying these advanced options. Default values should be enough in most cases.

Parameter
Description

Click Create labels to move on to the next step and define the required Labels if needed.

Convert Area

Description

This operation converts values from one unit of measurement to another, such as square feet, acres, square meters, and even smaller or less common units used in physics (like barns or nanobarns).


Data types

These are the input/output expected data types for this operation:

Input data

- Values whose unit of measurement you want to transform. They must be strings representing numbers.

Output data

- Resulting values after transforming them to the selected unit of measurement.


Parameters

These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):

Input units*

Enter the unit of measurement of your input data. You must indicate one of the following:

Metric

  • Square metre (sq m)

  • Square kilometre (sq km)

  • Centiare (ca)

  • Deciare (da)

  • Are (a)

  • Decare (daa)

  • Hectare (ha)

Imperial

  • Square inch (sq in)

  • Square foot (sq ft)

  • Square yard (sq yd)

  • Square mile (sq mi)

  • Perch (sq per)

  • Rood (ro)

  • International acre (ac)

US customary units

  • US survey acre (ac)

  • US survey square mile (sq mi)

  • US survey township

Nuclear physics

  • Yoctobarn (yb)

  • Zeptobarn (zb)

  • Attobarn (ab)

  • Femtobarn (fb)

  • Picobarn (pb)

  • Nanobarn (nb)

  • Microbarn (μb)

  • Millibarn (mb)

  • Barn (b)

  • Kilobarn (kb)

  • Megabarn (Mb)

  • Outhouse

  • Shed

  • Planck area

Comparisons

  • Washington D.C.

  • Isle of Wight

  • Wales

  • Texas

Output units*

Enter the required unit of measurement of your output data. You must indicate one of the following:

Metric

  • Square metre (sq m)

  • Square kilometre (sq km)

  • Centiare (ca)

  • Deciare (da)

  • Are (a)

  • Decare (daa)

  • Hectare (ha)

Imperial

  • Square inch (sq in)

  • Square foot (sq ft)

  • Square yard (sq yd)

  • Square mile (sq mi)

  • Perch (sq per)

  • Rood (ro)

  • International acre (ac)

US customary units

  • US survey acre (ac)

  • US survey square mile (sq mi)

  • US survey township

Nuclear physics

  • Yoctobarn (yb)

  • Zeptobarn (zb)

  • Attobarn (ab)

  • Femtobarn (fb)

  • Picobarn (pb)

  • Nanobarn (nb)

  • Microbarn (μb)

  • Millibarn (mb)

  • Barn (b)

  • Kilobarn (kb)

  • Megabarn (Mb)

  • Outhouse

  • Shed

  • Planck area

Comparisons

  • Washington D.C.

  • Isle of Wight

  • Wales

  • Texas


Example

Suppose you want to convert a series of events from square feet into square meters:

  1. In your Pipeline, open the required configuration and select the input Field.

  2. In the Operation field, choose Convert area.

  3. Set Input units to Square foot (sq ft).

  4. Set Output units to Square metre (sq m).

  5. Give your Output field a name and click Save. The values in your input field will be transformed. For example:

You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.

Message Builder

Most recent version: v1.0.0

See the changelog of this Action type .

Overview

The Message Builder Action allows users to define new messages by combining different input fields.

In order to configure this Action, you must first link it to a Listener. Go to Building a Pipeline to learn how to link.

AI Action Assistant

This Action has an AI-powered chat feature that can help you configure its parameters. Read more about it in this article.

Ports

These are the input and output ports of this Action:

Input ports
  • Default port - All the events to be processed by this Action enter through this port.

Output ports
  • Default port - Events are sent through this port if no error occurs while processing them.

  • Error port - Events are sent through this port if an error occurs while processing them.

Configuration

1

Find Message Builder in the Actions tab (under the Formatting group) and drag it onto the canvas. Link it to the required Listener and Data sink.

2

To open the configuration, click the Action in the canvas and select Configuration.

3

Enter the required parameters:

Parameter
Description
4

Save

Click save when you have composed your message.

To include a field in your message, drag it from the Fields area and drop it into the Message area.

You can add a Field Delimiter to separate the fields in your message string. Choose between colon (:), comma (,), pipe (|), and semicolon (;).

This will generate an output CSV.

You can generate a JSON file.

To include a field in your message, drag it from the Fields area and drop it into the Message area.

This will automatically add the field value separated by : followed by the source action and field. A comma separates each JSON value.

Click New Register to manually type the values and fields.

This will generate a JSON file.

Create a key-value file.

To include a field in your message, drag it from the Fields area and drop it into the Message area.

This will automatically add the field value separated by : followed by the source action and field. A : separates each key-value pair.

To change the Value and Pair separators, use the drop-down menus and choose between :, ;, and |.

Click New Register to manually type the values and fields.

To include a field in your message, drag it from the Fields area and drop it into the Message area.

The expressions should be strings that, optionally, may contain field names. For example:

where ${myField} will be replaced with the actual value in the event.

The action provides the following features depending on the argument delimiter behavior and the given delimiter and replacement values:

  • REPLACE: replaces delimiter with replacement on each event field.

  • DELETE: deletes delimiter on each event field.

  • QUOTE: adds double quotes surrounding an event field if it contains delimiter.

  • ESCAPE: adds a backslash (\) before each delimiter on each event field.
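
As a quick illustration, assuming the delimiter is , and the replacement is ; (hypothetical values), a field containing a,b would become:

  • REPLACE: a;b

  • DELETE: ab

  • QUOTE: "a,b"

  • ESCAPE: a\,b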

To select more than one at once, click a field in the Fields area and select the checkboxes next to the name, then select Add fields.

Example

Let's say you have received raw data in JSON format and wish to extract the fields and format them as a CSV.

1

Raw data

2

Parse the JSON

Add a Parser to the canvas and extract the fields using the automatic parsing.

You have extracted the endpoint, ip, method, status and username into separate fields.

3

Build the message

Now use the Message Builder to create a CSV containing these fields as one message.

Drag the following fields to the Message area:

  • method

  • description

  • object

  • endpoint

  • ip

  • status

  • username

  • port

Fields delimiter: ,

If delimiter matches: Put in quotes ("), so any field containing the delimiter is wrapped in double quotes.
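
With this configuration, the first event from the example raw data might produce a CSV message like the following (a hypothetical row; fields missing from the event are left empty):

POST,[Role] User performed an action on breach log,,breach log,10.XXX.XX.XX,,user_1,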

AI Action Assistant

Just ask, and the assistant helps you

Note that this feature is only available for certain Tenants. Contact us if you need to use it and don't see it in your Tenant.

Overview

The Action Assistant is an AI-powered chat feature designed to help users configure their Actions within a Pipeline. Any configuration requested through the chat will be automatically applied. This is especially useful for requesting specific use cases, as the AI will automatically apply the necessary fields and settings to achieve the desired result.

To start using it, open the Action configuration and just click this icon at the bottom left corner:

The Action Assistant is only available for a specific set of Actions, but it will soon be expanded to cover more. These are the Actions where you can currently use it:

Examples

Here are some example use cases where we ask for help from the Action Assistant. Check the prompts we use and the resulting configuration in each example picture.

Conditional

Prompt: Please could you identify common windows logs event ids and create a condition for each value?

  • In this example, we request a condition for each of the most common Windows event IDs:

  • In this case, we request conditions for each of the most common FortiGate log IDs:

  • Here, we are filtering events with Success status only:

Group By

Prompt: Group events every 5 minutes by host_ip and count the occurrences.

  • In this example, we need to identify each unique IP address for every 10 minutes:

  • In this case, we need all the unique app name values every 5 seconds, grouped by source ports and IP addresses:

Math Expression

Prompt: Convert the priority field to an integer, convert the source and destination ips to hex format, identify the appnames starting with windows

  • In this case, we ask the assistant to transform a series of amounts from bytes to megabytes:

  • Here we are transforming our epoch dates in milliseconds into seconds:

  • In this example, we want to calculate the time difference between a series of from and to dates:

Message Builder

Prompt: Please build me a message in json format with the most important fields.

  • In this example, we ask for the most relevant fields but in key-value format:

  • Here we are requesting the most relevant fields as a message in JSON format:

  • In this case, we want to order all our fields in alphabetical order:

  • Here we want to filter only string-type fields:

Unique

Prompt: Please identify the unique message IDs and codify them in 8 bits.

  • In this example, we want to identify the unique message IDs and codify them in 8 bits.

Microsoft 365

Most recent version: v0.0.1

See the changelog of this Listener type .

This is a Pull Listener and therefore should not be used in environments with more than one cluster.

Overview

Onum supports integration with Microsoft 365.

Select Microsoft 365 from the list of Listener types and click Configuration to start.


Configuration

Now you need to specify how and where to collect the data, and how to establish a connection with Office365.

Metadata

Enter the basic information for the new Listener.

Parameter
Description

Configuration

Parameter
Description

Content Type

Assign your data a Content Type in the form of reusable columns, document templates, workflows, or behaviors.

Content Type values:

  • Audit.AzureActiveDirectory

  • Audit.Exchange

  • Audit.SharePoint

  • Audit.General

  • DLP.All

Content type example (this will subscribe you to active directory and exchange):

Parameter
Description
Parameter
Description

Click Create labels to move on to the next step and define the required Labels.

5000 Square foot (sq ft) -> 464.5152 Square metre (sq m)
  {
    "Version": "2012-10-17",
    "Id": "S3ToSQSPolicy",
    "Statement": [
      {
        "Sid": "AllowS3Bucket",
        "Effect": "Allow",
        "Principal": {
          "Service": "s3.amazonaws.com"
        },
        "Action": "SQS:SendMessage",
        "Resource": "arn:aws:sqs:<region>:<account-id>:<queue-name>",
        "Condition": {
          "ArnLike": {
            "aws:SourceArn": "arn:aws:s3:::<bucket-name>"
          },
          "StringEquals": {
            "aws:SourceAccount": "<account-id>"
          }
        }
      }
    ]
  }

Name*

Enter a name for the new Listener.

Description

Optionally, enter a description for the Listener.

Tags

Add tags to easily identify your Listener. Hit the Enter key after you define each tag.

Compression*

Select the compression method used in the ingested S3 files. This accepts the standard compression codecs (gzip, zlib, bzip2), none for no compression, and auto to autodetect the compression type from the file extension.

Format*

Select the format of the ingested S3 files. This currently accepts JSON array (a big JSON array containing a JSON object for each event), JSON lines (a JSON object representing an event on each line), and auto to autodetect the format from the file extension (.json or .jsonl, respectively).
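
As a minimal illustration, the same two events in each format would look like this:

JSON array: [{"event": "one"}, {"event": "two"}]
JSON lines: {"event": "one"}
            {"event": "two"}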

Region*

Choose the region the bucket is found in, also found in your Buckets area, next to the name.

Name

The AWS bucket your data is stored in. This is the bucket name found in your Buckets area. You can fill this if you want to check that notifications come from that bucket, or leave it empty to avoid such checks.

Authentication Type*

Choose manual to enter your access key ID and secret access key manually in the parameters below, or auto to authenticate automatically. The default value is manual.

Access key ID*

Select the access key ID from your Secrets or click New secret to generate a new one.

The Access Key ID is found in the IAM Dashboard of the AWS Management Console.

  1. In the left panel, click on Users.

  2. Select your IAM user.

  3. Under the Security Credentials tab, scroll to Access Keys, and you will find existing Access Key IDs (but not the secret access key).

Secret access key*

Select the secret access key from your Secrets or click New secret to generate a new one.

Under Access keys, you can see your Access Key IDs, but AWS will not show the Secret Access Key. You must have it saved somewhere. If you don't have the secret key saved, you need to create a new one.

Service endpoint

Optionally, Amazon S3 provides different types of service endpoints based on the region and access type.

  1. Select your bucket.

  2. Go to the Properties tab.

  3. Under Bucket ARN & URL, find the S3 endpoint URL.

Amazon Service Endpoint will usually be chosen automatically, so you should not normally have to fill this up. However, in case you need to override the default access point, you can do it here.

Region

Choose the region your queue is created in from the drop-down provided.

URL*

The URL of your existing Amazon SQS queue to send the data to.

  1. Go to the AWS Management Console.

  2. In the Search Bar, type SQS and click on Simple Queue Service (SQS).

  3. Click on Queues in the left panel.

  4. Locate your queue from the list and click it.

  5. The Queue URL will be displayed in the table under URL.

This is the correct URL format: https://sqs.<region>.amazonaws.com/<account-id>/<queue-name>

Authentication Type*

Choose manual to enter your access key ID and secret access key manually in the parameters below, or auto to authenticate automatically.

Access key ID

Select the access key ID from your Secrets or click New secret to generate a new one.

The Access Key ID is found in the IAM Dashboard of the AWS Management Console.

  1. In the left panel, click on Users.

  2. Select your IAM user.

  3. Under the Security Credentials tab, scroll to Access Keys, and you will find existing Access Key IDs (but not the secret access key).

Note that this can be the same as in the bucket, in which case you don't need to repeat it here, or it can be different, depending on how you have configured your bucket & queue

Secret access key

Select the secret access key from your Secrets or click New secret to generate a new one.

This can be the same as for the bucket, in which case you don't need to repeat it here, or it can be different, depending on how you have configured your bucket & queue.

Under Access keys, you can see your Access Key IDs, but AWS will not show the Secret Access Key. You must have it saved somewhere. If you don't have the secret key saved, you need to create a new one. Note that this can be the same as in the bucket, in which case you don't need to repeat it here, or it can be different, depending on how you have configured your bucket & queue.

Event name

When you configure your bucket to send notifications to your SQS queue, you choose a name for those notification events. You can provide that name here to check the notifications to match that name when they are read by the Listener, or leave this empty to avoid such checks.

Service endpoint

If you have a custom endpoint, enter it here. The default SQS regional service endpoint will be used by default.

Maximum number of messages*

Set a limit for the maximum number of messages to receive in the notifications queue for each request. The minimum value is 1, and the maximum and default value is 10.

Visibility timeout*

Set how many seconds to leave a message as hidden in the queue after being delivered, before redelivering it to another consumer if not acknowledged. The minimum value is 30s, and the maximum value is 12h. The default value is 1h.

Wait time*

When the queue is empty, set how long to wait for messages before deeming the request as timed out. The minimum value is 5s, and the maximum and default value is 20s.

Event batch size*

Enter a limit for the number of events allowed through per batch. The minimum value is 1, and the maximum and default value is 1000000.

Minimum retry time*

Set the minimum amount of time to wait before retrying. The default and minimum value is 1s, and the maximum value is 10m.

Maximum retry time*

Set the maximum amount of time to wait before retrying. The default value is 5m, and the maximum value is 10m. The minimum value is the one set in the parameter above.


Fields*

This is where you specify the fields you wish to include in your message, color coded by type.

Fields beginning with _ are internal fields.

Destination Field Name*

Give your message a name to identify it by in the end destination.

Output format*

Choose how to send your message from the following formats: CSV, JSON, Key Value, Free Mode. See the tabs below for the settings specific to each one.

this is an example with the value: ${myField}
[
  {
    "username": "user_1",
    "method": "POST",
    "endpoint": "breach log",
    "ip": "10.XXX.XX.XX",
    "description": "[Role] User performed an action on breach log",
    "viewport": [1920, 955],
    "usage": true
  },
  {
    "username": "user_1",
    "method": "POST",
    "endpoint": "event log",
    "ip": "10.XXX.XX.XX",
    "description": "[Role] User performed an action on event log from breach log",
    "viewport": [1920, 955],
    "usage": true
  },
  {
    "username": "service_user",
    "method": "POST",
    "endpoint": "/admin/age",
    "ip": "127.0.0.1",
    "status": 400
  },
  {
    "username": "user_2",
    "method": "POST",
    "endpoint": "/sso/login",
    "ip": "10.XXX.XX.XX",
    "status": 302
  }
]

Name*

Enter a name for the new Listener.

Description

Optionally, enter a description for the Listener.

Tags

Add tags to easily identify your Listener. Hit the Enter key after you define each tag.

Tenant ID*

This gives access to your Microsoft 365 tenant.

Find this in the Azure Active Directory>Overview, or in the Properties pane.

Client ID*

Needed when accessing Microsoft 365 through APIs or applications. For applications registered in other directories, the Application (Client) ID is located in the application credentials.

  • Go to the Azure Portal.

  • Find Microsoft Entra ID in the left menu.

  • Click App registrations under the Manage section.

  • Select the application you registered (or search for it).

  • Under Essentials, find Application (client) ID.

  • Click "Copy to clipboard" to save it.

contentType=Audit.AzureActiveDirectory,Audit.Exchange

Client Secret*

The Client Secret (also called an Application Secret) is used for authentication in Microsoft Entra ID (formerly Azure AD) when accessing APIs.

  • Click App registrations under the Manage section.

  • Select your registered application.

  • In the left menu, click Certificates & secrets.

  • Under Client secrets, check if an existing secret is available. You cannot view it so you must have it saved somewhere.

  • If you need a new one, create one and copy the value immediately.

Subscription Plan*

Choose your plan from the list.

  • Find this in the Microsoft Account Portal under Billing>Your Products.

Polling Interval*

Enter the frequency in minutes with which to grab events, e.g. every 5 minutes.

Accumulator
Conditional
Field Transformation
Group By
Math Expression
Message Builder
Unique

Convert Data Units

Description

This operation converts values between different units of digital data, such as bits, bytes, kilobytes, megabytes, and so on. It’s especially useful when you’re dealing with data storage or transfer rates and you need to switch between binary (base 2) and decimal (base 10) units.


Data types

These are the input/output expected data types for this operation:

Input data

- Values whose unit of data you want to transform. They must be strings representing numbers.

Output data

- Resulting values after transforming them to the selected unit of data.


Parameters

These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):

Input units*

Enter the unit of your input data. You must indicate one of the following:

  • Bits (b)

  • Nibbles

  • Octets

  • Bytes (B)

Binary bits (2^n)

  • Kibibits (Kib)

  • Mebibits (Mib)

  • Gibibits (Gib)

  • Tebibits (Tib)

  • Pebibits (Pib)

  • Exbibits (Eib)

  • Zebibits (Zib)

  • Yobibits (Yib)

Decimal bits (10^n)

  • Decabits

  • Hectobits

  • Kilobits (Kb)

  • Megabits (Mb)

  • Gigabits (Gb)

  • Terabits (Tb)

  • Petabits (Pb)

  • Exabits (Eb)

  • Zettabits (Zb)

  • Yottabits (Yb)

Binary bytes (8 x 2^n)

  • Kibibytes (KiB)

  • Mebibytes (MiB)

  • Gibibytes (GiB)

  • Tebibytes (TiB)

  • Pebibytes (PiB)

  • Exbibytes (EiB)

  • Zebibytes (ZiB)

  • Yobibytes (YiB)

Decimal bytes (8 x 10^n)

  • Kilobytes (KB)

  • Megabytes (MB)

  • Gigabytes (GB)

  • Terabytes (TB)

  • Petabytes (PB)

  • Exabytes (EB)

  • Zettabytes (ZB)

  • Yottabytes (YB)

Output units*

Enter the required unit of your output data. You must indicate one of the following:

  • Bits (b)

  • Nibbles

  • Octets

  • Bytes (B)

Binary bits (2^n)

  • Kibibits (Kib)

  • Mebibits (Mib)

  • Gibibits (Gib)

  • Tebibits (Tib)

  • Pebibits (Pib)

  • Exbibits (Eib)

  • Zebibits (Zib)

  • Yobibits (Yib)

Decimal bits (10^n)

  • Decabits

  • Hectobits

  • Kilobits (Kb)

  • Megabits (Mb)

  • Gigabits (Gb)

  • Terabits (Tb)

  • Petabits (Pb)

  • Exabits (Eb)

  • Zettabits (Zb)

  • Yottabits (Yb)

Binary bytes (8 x 2^n)

  • Kibibytes (KiB)

  • Mebibytes (MiB)

  • Gibibytes (GiB)

  • Tebibytes (TiB)

  • Pebibytes (PiB)

  • Exbibytes (EiB)

  • Zebibytes (ZiB)

  • Yobibytes (YiB)

Decimal bytes (8 x 10^n)

  • Kilobytes (KB)

  • Megabytes (MB)

  • Gigabytes (GB)

  • Terabytes (TB)

  • Petabytes (PB)

  • Exabytes (EB)

  • Zettabytes (ZB)

  • Yottabytes (YB)


Example

Suppose you want to convert a series of events from megabits into kilobytes:

  1. In your Pipeline, open the required Action configuration and select the input Field.

  2. In the Operation field, choose Convert data units.

  3. Set Input units to Megabits (Mb).

  4. Set Output units to Kilobytes (KB).

  5. Give your Output field a name and click Save. The data type of the values in your input field will be transformed. For example:

2 Megabits (Mb) -> 250 Kilobytes (KB)

You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.

Google Pub/Sub

Most recent version: v0.0.2

See the changelog of this Listener type .

This is a Pull Listener and therefore should not be used in environments with more than one cluster.

Overview

Onum supports integration with Google Pub/Sub. Select Google Pub/Sub from the list of Listener types and click Configuration to start.

Configuration

Now you need to specify how and where to collect the data, and establish a connection with your Google account.

Metadata

Enter the basic information for the new Listener.

Parameter
Description

Name*

Enter a name for the new Listener.

Description

Optionally, enter a description for the Listener.

Tags

Add tags to easily identify your Listener. Hit the Enter key after you define each tag.

Configuration

Now, add the configuration to establish the connection.

Parameter
Description

Project ID*

This is a unique string with the following format my-project-123456

  1. Go to the Google Cloud Console.

  2. In the top left corner, click on the project drop-down next to the Google Cloud logo (where your current project name is shown).

  3. Each project will have a Project Name and a Project ID.

  4. You can also find it in the Settings tab on the left-hand side.

Subscription Name*

Follow these steps to get the subscription name:

  1. Go to Pub/Sub in the Google Cloud Console.

  2. In the top left corner, click on the menu and select View all Products.

  3. Then go to Analytics and find Pub/Sub and click it to go to Pub/Sub (you can also use the search bar and type "Pub/Sub").

  4. In the Pub/Sub dashboard, select the Subscriptions tab on the left.

  5. The Subscription Name will be displayed in this list.

Credentials File*

The Google Cloud connector uses OAuth 2.0 credentials for authentication and authorization. Select the credentials from your Secrets or click New secret to generate a new one.

  1. To find the Google Cloud credentials file, go to Settings>Interoperability.

  2. Scroll down to the Service Account area.

  3. You need to generate and download a service account key from the Google Cloud Console. You will not be able to view this key, so you must have it copied somewhere already. Otherwise, create one here and save it to paste here.

  4. To see existing Service Accounts, go to the menu in the top left and select APIs & Services>Credentials.

Bulk Messages Configuration

Parameter
Description

Enabled*

Decide whether or not to activate the bulk message option.

Message Format

Choose the required message format.

Delimiter Character Codes

Enter the characters you want to use as delimiters, if required. A delimiter character code refers to the numerical representation (usually in ASCII or Unicode) of a delimiter.
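
For example, the ASCII code 10 corresponds to a line feed (\n) and 44 to a comma (,); entering 10 would typically be used to split bulk messages on newlines.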

Click Create labels to move on to the next step and define the required Labels if needed.

Transformation

Building a Pipeline

Overview

The Pipeline canvas provides infinite possibilities to use your data.


1. General settings

This pane shows the general properties of your Pipeline. Click the ellipses next to its name to Copy ID.

Depending on your permissions, you can view or modify:

  • Name: When you create a Pipeline, it is given a default name, and the first recommendation is to change it. You can modify the name at any time by clicking the pencil icon next to the Pipeline's name.

  • Tags: Click the tag icon to open the menu.

  • Clusters: Here you can see how many clusters your Pipeline is running in, as well as update them.

  • Versions: View and run multiple versions of the Pipeline.

  • Stop/Start Pipeline: Stop and start the Pipeline in some or all of the clusters it is running in.

  • Publish

When you modify your Pipeline, you will be creating a new version. When your modifications are complete, you can Publish this new version using this button in the top right.

Go to Managing versions to learn more.

You can carry out all these actions in bulk if you wish to modify more than one Pipeline at a time.


2. The metrics bar

If the Pipeline is running, the Metrics bar provides a visual, graphical overview of the data being processed in your Pipeline.

  • Events In: View the total events in per second for the selected period, compared to the previous range (in %).

  • Bytes In: The total bytes in per second for the selected time range, compared to the previous (in %).

  • Events Out: View the total events out per second for the selected period, compared to the previous range (in %).

  • Bytes Out: The total bytes out per second for the selected time range, compared to the previous (in %).

  • Latency: The time (in nanoseconds) it takes for data to travel from one point to another, compared to the previous (in %).

Set a time range

You can set a time range to view the metrics for a specific period of time. This will be used to calculate the percentages, compared to the previous time of the same period selected.

Go to Selecting a Time Range to learn more about the specifics of how this works.

Hide/Show metrics

Use the Hide metrics/Show metrics button to hide/show the metrics pane.


3. Add to the Pipeline

Simply drag and drop an element from the left-hand side onto the canvas to add it to your Pipeline.

For Listeners, you can drag the specific Label down to the required level. Once in the Pipeline, you can see which Listener the label belongs to by hovering over it, or in the Metrics area of the configuration pane.


4. Canvas

The canvas is where you will build your Pipeline. Drag and drop an element from the left pane to add it to your Pipeline.

Click it in the canvas to open its Properties.

Delete a node

If you have enough permissions to modify this Pipeline, click the node in the canvas and select the Remove icon.

Create links between your nodes to create a flow of data between them. Learn more about links below.


5. Navigation options

Zoom in/out, Center, undo, and redo changes using the buttons on the right.

Use the window in the bottom-right to move around the Canvas.

Connect the separate nodes of the canvas to form a Pipeline from start to finish.

Simply click the port you wish to link from and drag to the port you wish to link to. When you let go, you will see a link form between the two.

To unlink, click anywhere on the link and select Unlink in the menu.


Ports

Notice the ports of each element in the canvas. Ports are used as connectors to other nodes of the Pipeline, linking either incoming or outgoing data.

Listener: As a Listener is used to send information on, there are no in ports, and one out port.

Action: Actions generally have one in port that feeds them with data. When information is output, it is sent via the default port. If there are problems sending on the data, it will not be lost, but rather output via the error port.

Datasink: A datasink is the end stop for our data, so there is only one in port that receives your processed data.

Click one to read more about how to configure them:

Google GenAI

Most recent version: v0.0.1

See the changelog of this Action type here.

Note that this Action is only available in certain Tenants. Get in touch with us if you don't see it and want to access it.

Overview

The Google GenAI Action allows users to enrich their data using Google Gemini AI models.

In order to configure this Action, you must first link it to a Listener. Go to Building a Pipeline to learn how this works.

Ports

These are the input and output ports of this Action:

Input ports
  • Default port - All the events to be processed by this Action enter through this port.

Output ports
  • Default port - Events are sent through this port if no error occurs while processing them.

  • Error port - Events are sent through this port if an error occurs while processing them.

Configuration

1

Find Google GenAI in the Actions tab (under the AI group) and drag it onto the canvas.

2

To open the configuration, click the Action in the canvas and select Configuration.

3

Enter the required parameters:

Parameter
Description
4

Click Save to complete.

Incident Management - Multi Alerts

Overview

Get a list of alerts with multiple events.

  • The response is concatenated using AND condition (OR is not supported).

  • The maximum result set size is 100.

  • Offset is the zero-based number of alerts from the start of the result set.

Cortex XDR displays in the API response whether a PAN NGFW type alert contains a PCAP triggering packet. Use the Retrieve PCAP Packet API to retrieve a list of alert IDs and their associated PCAP data.

Required license: Cortex XDR Prevent, Cortex XDR Pro per Endpoint, or Cortex XDR Pro per GB.

Configuration

Parameters

Secrets

After entering the required parameters and secrets, you can choose to manually enter the Cortex Incident Management fields, or simply paste the given YAML:

Toggle this ON to enable a free text field where you can paste your Cortex XDR multi alerts YAML.

Temporal Window

Toggle ON to add a temporal window for events. This repeatedly shifts the time window over which data is collected.

Authentication Phase

Off

Enumeration Phase

Off

Collection Phase

  • Pagination Type* - fromTo

  • Zero index* - false

  • Limit* - 100

  • Request

    • Response Type* - JSON

    • Method* - POST

    • URL* - https://${parameters.CortexXdrDomain}/public_api/v1/alerts/get_alerts

    • Headers

      • Name - Accept

      • Value - application/json

      • Name - Content-Type

      • Value - application/json

      • Name - Authorization

      • Value - ${secrets.CortexXdrAuthorization}

      • Name - x-xdr-auth-id

      • Value - ${secrets.CortexXdrAuthId}

    • Body type* - raw

    • Body content* - { "request_data": { "search_from": ${pagination.from}, "search_to": ${pagination.to}, "filters": [ { "field": "creation_time", "operator": "lte", "value": ${temporalWindow.to} } ] } }

  • Output

    • Select - .reply.alerts

    • Map - .

    • Output Mode - element

This HTTP Pull Listener now uses the data export API to extract events.

Click Create labels to move on to the next step and define the required Labels if needed.

Parameter
Description

Redis endpoint*

Enter the endpoint used to establish the connection to the Redis server.

Read Timeout*

Enter the maximum number of milliseconds to wait to receive data after the connection has been established and the request has been sent.

Write Timeout*

Enter the maximum number of milliseconds to wait while trying to send data to the server.

Commands*

The command to read or write data from the Redis server.

  • SET

    • Redis Key* - Choose the input field that contains the Redis key.

    • Event in field* - Choose the field that contains the events you want to input to Redis.

    • Expiration - Optionally, enter how long the key will be available in the Redis server. The minimum value is 0.

  • HSET

    • Redis Key* - Choose the input field that contains the Redis key.

    • Field/Value pairs - Add as many fields and pipeline values as required.

  • GET

    • Redis Key* - Choose the input field that contains the Redis key.

    • Event out field* - Enter a name for the output field that will store the output data.

  • HGET

    • Redis Key* - Choose the input field that contains the Redis key.

    • Event out field* - Enter a name for the output field that will store the output data.

    • HGET field* - Select the field from the Listener or Action that serves as the HGET field.

withTemporalWindow: true
temporalWindow:
  duration: 5m
  offset: 5m
  tz: UTC
  format: Epoch
withAuthentication: false
withEnumerationPhase: false
collectionPhase:
  paginationType: "fromTo"
  limit: 100
  request:
    responseType: json
    method: "POST"
    url: "https://${parameters.CortexXdrDomain}/public_api/v2/alerts/get_alerts_multi_events"
    headers:
      - name: Accept
        value: "application/json"
      - name: Content-Type
        value: "application/json"
      - name: Authorization
        value: "${secrets.CortexXdrAuthorization}"
      - name: x-xdr-auth-id
        value: ${secrets.CortexXdrAuthId}
    bodyType: raw
    bodyRaw: |
      {
        "request_data": {
          "search_from": ${pagination.from},
          "search_to": ${pagination.to},
          "filters": [
            {
              "field": "creation_time",
              "operator": "lte",
              "value": ${temporalWindow.to}
            }
          ]
        }
      }
  output:
    select: ".reply.alerts"
    map: "."
    outputMode: "element"
        

Location*

Enter the Google Cloud location for Vertex AI (e.g., us-central1).

Model*

Choose the Vertex AI model version to use from the menu.

System Instructions*

Enter the required system instructions.

Prompt Field*

Enter the prompt you want to send to the model.

Temperature

Adjusts the randomness of outputs: greater than 1 is random, 0 is deterministic, and 0.75 is a good starting value. The default value is 0.7.

MaxLength

Maximum number of tokens to generate. A word is generally 2-3 tokens. The default value is 128 (min 1, max 8892).

Output Format*

Choose the required output format.

JSON credentials*

Choose the required JSON credentials.

Output Field*

Give a name to the output field that will return the evaluation.


  • Listeners - Learn about how to set up and use Listeners.

  • Actions - Discover Actions to manage and customize your data.

  • Data sinks - Add the final piece of the puzzle for simpler data.

Group By

Most recent version: v1.1.0

See the changelog of this Action type here.

Overview

The Group By Action summarizes data by performing aggregations using keys and temporal keys (min, hour, or day).

In order to configure this Action, you must first link it to a Listener. Go to Building a Pipeline to learn how to link.

AI Action Assistant

This Action has an AI-powered chat feature that can help you configure its parameters. Read more about it in this article.

Ports

These are the input and output ports of this Action:

Input ports
  • Default port - All the events to be processed by this Action enter through this port.

Output ports
  • Default port - Events are sent through this port if no error occurs while processing them.

  • Error port - Events are sent through this port if an error occurs while processing them.

Configuration

1

Find Group By in the Actions tab (under the Aggregation group) and drag it onto the canvas. Link it to the required Listener and Data sink.

2

To open the configuration, click the Action in the canvas and select Configuration.

3

Enter the required parameters:

Grouping configuration

Parameter
Description

Fields to group*

Lists the fields from the linked Listener or Action for you to choose from. Choose one or more fields to group by.

Grouping time*

Having defined which fields to group by, choose or create a Grouping time. You can write the amount and unit (seconds, minutes, hours, days), or select a common amount.

Aggregations

Parameter
Description

Aggregations*

Now you can add aggregation(s) to your grouping using the following operations:

  • average - calculates the average of the values of each grouping.

  • count - calculates the total occurrences for each grouping.

  • countNotNull - calculates the total occurrences for each grouping, excluding null values.

  • first - finds the first value found for each grouping. The first value will be the first in the workers' queue.

  • firstNotNull - finds the first not null value found for each grouping. The first value will be the first in the workers' queue.

  • ifthenelse - the operation will only be executed if the given conditions are met.

  • last - finds the last value found for each grouping. The last value will be the last in the workers' queue.

  • lastNotNull - finds the last not null value found for each grouping. The last value will be the last in the workers' queue.

  • max - finds the highest value found.

  • min - finds the lowest value found.

  • sum - calculates the total of the values for each grouping.

To add another aggregation, use the Add item option.

You can also use the arrow keys on your keyboard to navigate up and down the list.

Conditions

You can also carry out an advanced configuration by Grouping By Conditionals.

Use the Add Condition option to add conditions to your Aggregation.

4

Click Save to complete.

Example

In this example, we will use the Group By Action to summarize a large amount of data, grouping by IP address every 5 minutes and aggregating the number of requests by type per IP address.

1

Raw data

Consider events with the following fields:

  • IP_Address

  • Request_Type

  • Timestamp

[
  {"IP_Address": "192.168.1.1", "Request_Type": "GET", "Timestamp": "2025-01-09T08:00:00Z"},
  {"IP_Address": "192.168.1.2", "Request_Type": "POST", "Timestamp": "2025-01-09T08:05:00Z"},
  {"IP_Address": "192.168.1.1", "Request_Type": "POST", "Timestamp": "2025-01-09T08:10:00Z"},
  {"IP_Address": "192.168.1.3", "Request_Type": "GET", "Timestamp": "2025-01-09T08:15:00Z"},
  {"IP_Address": "192.168.1.2", "Request_Type": "GET", "Timestamp": "2025-01-09T08:20:00Z"},
  {"IP_Address": "192.168.1.1", "Request_Type": "GET", "Timestamp": "2025-01-09T08:25:00Z"},
  {"IP_Address": "192.168.1.3", "Request_Type": "POST", "Timestamp": "2025-01-09T08:30:00Z"}
]
2

Group by

We add the Group By Action to the canvas and link it to the incoming data.

Group the logs by IP_Address over a period of five minutes by selecting the field containing the addresses in Fields to group and five minutes as the Grouping time.

3

Aggregate

Aggregate the number of requests per IP address, broken down by request type (e.g., GET vs POST).

  • Operation: count

  • Field: Request_Type

  • Output field: count

4

Output

The Group By Action will emit the following results via the default output port:

{
  "aggregated_requests": [
    {
      "IP_Address": "192.168.1.1",
      "GET_Count": 2,
      "POST_Count": 1,
      "Total_Requests": 3
    },
    {
      "IP_Address": "192.168.1.2",
      "GET_Count": 1,
      "POST_Count": 1,
      "Total_Requests": 2
    },
    {
      "IP_Address": "192.168.1.3",
      "GET_Count": 1,
      "POST_Count": 1,
      "Total_Requests": 2
    }
  ]
}

You now have one event per grouping and aggregation match.

HTTP

Most recent version: v1.3.0

See the changelog of this Listener type here.

Overview

Onum supports integration with HTTP. Select HTTP from the list of Listener types and click Configuration to start.

Configuration

Now you need to specify how and where to collect the data and how to establish an HTTP connection.

Metadata

Enter the basic information for the new Listener.

Parameter
Description

Configuration

Cloud Listeners

Note that you won't see the Socket and TLS configuration sections in the creation form if you're defining this Listener in a Cloud instance, as these are already provided by Onum. Learn more about Cloud Listeners in this article.

Socket

Parameter
Description

TLS configuration

Note that the parameters in this section are only mandatory if you decide to include TLS authentication in this Listener. Otherwise, leave blank.

Parameter
Description

Authentication

Parameter
Description

Authentication credentials

The options provided will vary depending on the type chosen to authenticate your API. This must match the type configured on the API side, so it can recognize the request.

Choose between the options below, or select None if you don't need any authentication.

Basic
  • Username - the user sending the request.

  • Password - choose the basic auth password from your list of secrets or create a new one.

Bearer

Bearer Token Authentication

Enter your Token Secret for the API request using an existing Secret, or create a new one if you haven't stored it in Onum yet.

This grants access without needing to send credentials (like username and password) in every request.

API Key

Enter the following:

  • API key name - a label assigned to the API key for identification. You can find it depending on where the API key was created.

  • API Key - API keys are usually stored in developer portals, cloud dashboards, or authentication settings. Choose the existing Secret, or create a new one if you haven't stored this key within Onum.

Note that the HTTP Listener expects the API Key to be included in the URL, as a query parameter. For example:

Endpoint

Parameter
Description

Message extraction

Parameter
Description

General behavior

Parameter
Description

Click Create labels to move on to the next step and define the required Labels if needed.


Name*

Enter a name for the new Listener.

Description

Optionally, enter a description for the Listener.

Tags

Add tags to easily identify your Listener. Hit the Enter key after you define each tag.

Port*

Enter the port number used by the server or client to establish an HTTP connection.

Certificate*

This is the predefined TLS certificate.

Private key for this listener*

The private key of the corresponding certificate.

CA chain

The path containing the CA certificates.

Client authentication method*

Choose between No, Request, Require, Verify, and Require & Verify.

Minimum TLS version*

Select the required version from the menu.

Authentication Type*

If your connection does not require authentication, leave as None. Otherwise, choose the authentication type and enter the details.

curl --location 'http://customer.in.prod.onum.com:2250/test?My-Token=1234567890qwerty' \
--header 'Content-Type: application/json' \
--data '{"message": "hello, how are you doing? :)"}'

HTTP Method*

Choose GET, POST, or PUT method.

Request path*

Path to the resource being requested from the server.

Strategy*

The strategy defines how data extraction should be performed. It is the overall methodology or approach used to extract relevant information from HTTP messages. Choose between:

  • Single event with the whole request - Choose this option if you want to include the whole request in each event.

  • Single event from request path - Choose this option if you want to include the request paths in each event.

  • Single event as query string - Choose this option if you want to include the requests with their whole query strings.

  • Single event as query parameter - Choose this option if you want to include a specific request parameter in your events. Specify the required parameter name in the Extraction info option (for example: msg)

  • Single event as header - Choose this option if you want to include a specific header in your events. Specify the required header in the Extraction info option (for example: Message)

  • Single event as body (partially) - Choose this option if you want to include a part of the request body in your events. Specify the required RegEx rule to match the required part in the Extraction info option (for example: \\[BODY: (.+)\\])

  • Single event as body (full) - Choose this option if you want to include the whole request body in your events. Specify the required RegEx rule to match the required part in the Extraction info option (for example: \\[BODY: (.+)\\])

  • Multiple events at body with delimiter - Choose this option if you want to include several messages in the same event separated by a delimiter. You must specify the delimiter in the Extraction info option.

  • Multiple events at body as JSON array - Choose this option if you want to include several messages formatted as a JSON array in your events.

  • Multiple events at body as stacked JSON - Choose this option if you want to include several messages formatted as a stacked JSON in your events.

Extraction info

The extraction info defines what specific data elements should be extracted based on the selected strategy. Check the strategy descriptions above for more details.
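As an illustration (the payloads below are hypothetical), the three multiple-event strategies expect request bodies along these lines:

Multiple events at body with delimiter (Extraction info: \n):
event one
event two

Multiple events at body as JSON array:
[{"msg": "event one"}, {"msg": "event two"}]

Multiple events at body as stacked JSON:
{"msg": "event one"}
{"msg": "event two"}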

Propagate headers strategy

Choose between None (default option), Allow (enter the required header keys below), or All (all headers will be retrieved in the headers field).

Header keys

Enter the required header keys in this field. Click Add element for each one.

Exported headers format

Choose the required format for your headers.

Maximum message length

Maximum characters of the message. The default value is 4096.

Response code

Specify the response code to show when successful.

Response Content-Type

The Content-Type: xxx/xxx lets the server know the expected format of the incoming message or request (application/json by default):

  • Application/XML: the message body is formatted as XML.

  • Application/Json: the message body is formatted as JSON.

  • Text/Plain: the message body contains plain text.

  • Text/HTML: the message body contains HTML.

Response Text

The text that will show in case of success.


Google DLP

Most recent version: v0.0.1

See the changelog of this Action type here.

Overview

The Google DLP Action is designed to integrate with Google's Data Loss Prevention (DLP) API. This Action allows detecting and classifying sensitive information, enabling workflows to comply with data protection requirements.

This Action does not generate new events. Instead, it processes incoming events to detect sensitive information based on the configured Info Types and returns the corresponding findings.

In order to configure this action, you must first link it to a Listener. Go to Building a Pipeline to learn how this works.

Ports

These are the input and output ports of this Action:

Input ports
  • Default port - All the events to be processed by this Action enter through this port.

Output ports
  • Error port - Events are sent through this port if an error occurs while processing them.

Configuration

1

Find Google DLP in the Actions tab (under the Advanced group) and drag it onto the canvas.

2

To open the configuration, click the Action in the canvas and select Configuration.

3

Enter the required parameters:

Parameter
Description

Info Types*

Type(s) of sensitive data to detect. You can choose as many types as needed.

Data to Inspect*

Choose the input field that contains the data to be inspected by the DLP API.

JSON credentials*

JSON object containing the credentials required to authenticate with the Google DLP API.

Output Field*

Name of the new field where the results of the DLP evaluation will be stored.

Minimum Likelihood

For each potential finding that is detected during the scan, the DLP API assigns a likelihood level. The likelihood level of a finding describes how likely it is that the finding matches an Info Type that you're scanning for. For example, it might assign a likelihood of Likely to a finding that looks like an email address.

The API will filter out any findings that have a lower likelihood than the minimum level that you set here.

The available values are:

  • Very Unlikely

  • Unlikely

  • Possible (This is the default value)

  • Likely

  • Very Likely

For example, if you set the minimum likelihood to Possible, you get only the findings that were evaluated as Possible, Likely, and Very likely. If you set the minimum likelihood to Very likely, you get the smallest number of findings.

Include Quote

If true, includes a contextual quote from the data that triggered a finding. The default value is true.

Exclude Info Types

If true, excludes type information of the findings. The default value is false.

4

Click Save to complete the process.

Example

Imagine you want to ensure that logs sent to a third-party service do not contain sensitive information such as credit card numbers, personal identification numbers, or passwords. To do it:

1

Add the Google DLP Action to your Pipeline and link it to your required Data sink.

2

Now, double-click the Google DLP Action to configure it. You need to set the following config:

Parameter
Description

Info Types

Choose the following info types:

  • Credit Card Number

  • Email Address

  • Password

Data to Inspect

Choose the input field that contains the data to be inspected by the DLP API.

JSON credentials

JSON object containing the credentials required to authenticate with the Google DLP API.

Output Field

Name of the new field where the results of the DLP evaluation will be stored.

Minimum Likelihood

We set the likelihood to Possible, as we want the right balance between recall and precision.

Include Quote

We want contextual info of the findings, so we set this to true.

Exclude Info Types

Set this to false, as we want to include type information of the findings.

3

Click Save to apply the configuration.

4

Now link the Default output port of the Action to the input port of your Data sink.

5

Finally, click Publish and choose in which clusters you want to publish the Pipeline.

6

Click Test pipeline at the top of the area and choose a specific number of events to test if your data is transformed properly. Click Debug to proceed.

This is the input data field we chose for our analysis:

{
  "Info": "My credit card number is 4111-1111-1111-1111"
}

And this is a sample output data with the corresponding results of the DLP API:

{
  "dlpFindings": {
    "findings": [
      {
        "infoType": "CREDIT_CARD_NUMBER",
        "likelihood": "VERY_LIKELY",
        "quote": "4111-1111-1111-1111"
      }
    ]
  }
}

Field Generator

Most recent version: v0.0.4

See the changelog of this Action type here.

Overview

The Field Generator action allows you to add new fields to your events using a given operation. You can select one or more operations to execute, and their resulting values will be set in user-defined event fields.

In order to configure this action, you must first link it to a Listener or another Action. Go to Building a Pipeline to learn how this works.

Ports

These are the input and output ports of this Action:

Input ports
  • Default port - All the events to be processed by this Action enter through this port.

Output ports
  • Default port - Events are sent through this port if no error occurs while processing them.

  • Error port - Events are sent through this port if an error occurs while processing them.

Configuration

1

Find Field Generator in the Actions tab (under the Advanced group) and drag it onto the canvas. Link it to the required Listener and Data sink.

2

To open the configuration, click the Action in the canvas and select Configuration.

3

Choose which operations you want to use to define the new fields in your events:

Operation
Parameters

Now

  • Now - Select true to create a new field with the current Epoch time in the selected time unit.

  • Now output field* - Give a name to the new field.

  • Now time unit* - Choose the required time unit. The available time units are nanoseconds, microseconds, milliseconds & seconds.

Today

  • Today - Select true to create a new field with the Epoch time corresponding to the current day at 00:00:00h in the selected time unit.

  • Today output field* - Give a name to the new field.

  • Today time unit* - Choose the required time unit. The available time units are nanoseconds, microseconds, milliseconds & seconds.

Yesterday

  • Yesterday - Select true to create a new field with the Epoch time corresponding to the previous day at 00:00:00h in the selected time unit.

  • Yesterday output field* - Give a name to the new field.

  • Yesterday time unit* - Choose the required time unit. The available time units are nanoseconds, microseconds, milliseconds & seconds.

Random number

  • Random number - Select true to create a new field with a random value.

  • Random output field* - Give a name to the new field.

Custom field

  • Allow custom field - Select true to create a new field with a custom value.

  • New custom field name* - Give a name to the new field.

  • Custom field value* - Set the value you want to add in the new field.

  • Custom field data type* - Choose the data type of the new field between integer, boolean, float or string.

4

Click Save to complete the process.

Example

Imagine we want to add a couple of new fields to our events: one that indicates the current Epoch time and another that adds the string Test to each event. To do it:

1

Add the Field Generator Action to your Pipeline and link it to your required Data sink.

2

Now, double-click the Field Generator Action to configure it. You need to set the following config:

Operation
Parameters

Now

  • Now - Set it to true.

  • Now output field - We're naming the new field Now.

  • Now time unit - Choose seconds.

Custom field

  • Allow custom field - Set it to true.

  • New custom field name - We're naming the new field Custom.

  • Custom field value - Enter Test.

  • Custom field data type - Choose string.

3

Leave the rest of the parameters as default and click Save to apply the configuration.

4

Now link the Default output port of the Action to the input port of your Data sink.

5

Finally, click Publish and choose in which clusters you want to publish the Pipeline.

6

Click Test pipeline at the top of the area and choose a specific number of events to test if your data is transformed properly. Click Debug to proceed.

This is how your data will be transformed with the new fields:
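A minimal sketch of the result, assuming an input event with a single message field (the Now value is illustrative):

Input event:
{"message": "user logged in"}

Output event:
{"message": "user logged in", "Now": 1736410800, "Custom": "Test"}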

Amazon SQS

Most recent version: v0.0.1

See the changelog of this Listener type here.

This is a Pull Listener and therefore should not be used in environments with more than one cluster.

Overview

Onum supports integration with Amazon SQS.

Amazon Simple Queue Service (AWS SQS) is a fully managed message queuing service. Among its many features, the following ones are of special interest to our use case:

  • It supports both standard queues (with at-least-once, occasionally unordered delivery semantics) and FIFO queues (exactly-once and fully ordered delivery semantics).

  • It supports scaling through the concept of visibility timeout (a period after a consumer reads a message during which it becomes invisible to other consumers). This allows a consumer group to read from the same queue and distribute messages without duplication.

So, what we want is a Listener that we can configure to read from an existing SQS queue and inject queue messages as events into our platform. Please note that because of the nature of the API offered to access SQS messages (HTTP-based, max 10 messages each time), this is not a high-throughput Listener.

Select Amazon SQS from the list of Listener types and click Configuration to start.

Configuration

Now you need to specify how and where to collect the data, and how to establish a connection with Amazon SQS.

Metadata

Enter the basic information for the new Listener.

Parameter
Description

Name*

Enter a name for the new Listener.

Description

Optionally, enter a description for the Listener.

Tags

Add tags to easily identify your Listener. Hit the Enter key after you define each tag.

Configuration

Now, add the configuration to establish the connection.

Queue

Parameter
Description

Region

The region of your AWS data center. Your region is displayed in the top right-hand corner of your AWS console.

Queue URL*

The URL of your existing Amazon SQS queue, acting as the endpoint to interact with the desired queue. Use the GetQueueUrl command or:

  1. Go to the AWS Management Console.

  2. In the Search Bar, type "SQS" and click on Simple Queue Service (SQS).

  3. Click on Queues in the left panel.

  4. Locate your queue from the list and click it.

  5. The Queue URL will be displayed in the table under URL.

This is the queue URL format: https://sqs.<region>.amazonaws.com/<account-id>/<queue-name> (for example, https://sqs.us-east-1.amazonaws.com/123456789012/my-queue).

Auth

Authentication is not specific to SQS but rather AWS IAM (Identity and Access Management). If you are connecting from an IAM console, enter the authentication credentials here.

Parameter
Description

Access key ID*

Add the access key from your secrets or create one. The Access Key ID is found in the IAM Dashboard of the AWS Management Console.

  1. In the left panel, click on Users.

  2. Select your IAM user.

  3. Under the Security Credentials tab, scroll to Access Keys and you will find existing Access Key IDs (but not the secret access key).

Secret access key*

Add the secret access key from your secrets or create one.

Under Access keys, you can see your Access Key IDs, but AWS will not show the Secret Access Key. You must have it saved somewhere. If you don't have the secret key saved, you need to create a new one.

Response

Parameter
Description

Message system attributes

Optionally, specify which system attributes are wanted in the response. The set of system attributes chosen by the user corresponds to attributes inlined in the message/event.

  1. In the Queues area, click on More or scroll down and go to the Monitoring tab.

  2. You will see some system attributes (like deduplication and group ID). However, detailed system attributes are typically accessed via the CLI or SDKs.

Advanced

Proceed with caution when modifying these advanced options. Default values should be enough in most cases.

Parameters
Description

Maximum number of messages*

Set a limit for the maximum number of messages to receive in the notifications queue for each request. The minimum value is 1, and the maximum and default value is 10.

Visibility timeout*

Set the visibility timeout, that is, the period during which a message received by this Listener remains invisible to other consumers of the queue.

Wait time*

Set the maximum time to wait for messages to become available in each request. The minimum value is 5, and the maximum and default value is 10.

Minimum retry time*

Set the minimum amount of time to wait before retrying. The default and minimum value is 1s, and the maximum value is 10m.

Maximum retry time*

Set the maximum amount of time to wait before retrying. The default and minimum value is 1s, and the maximum value is 10m.

Click Create labels to move on to the next step and define the required Labels if needed.


HTTP Request

Most recent version: v0.0.2

See the changelog of this Action type here.

Overview

The HTTP Request action allows you to configure and execute HTTP requests with custom settings for methods, headers, authentication, TLS, and more.

In order to configure this action, you must first link it to a Listener. Go to Building a Pipeline to learn how to link.

Ports

These are the input and output ports of this Action:

Input ports
  • Default port - All the events to be processed by this Action enter through this port.

Output ports
  • Default port - Events are sent through this port if no error occurs while processing them.

  • Error port - Events are sent through this port if an error occurs while processing them.

Configuration

1

Find HTTP Request in the Actions tab (under the Advanced group) and drag it onto the canvas. Link it to the required Listener and Data sink.

2

To open the configuration, click the Action in the canvas and select Configuration.

3

Enter the required parameters:

Parameter
Description

HTTP Method*

The HTTP method for the request. Choose between GET, POST, PUT, DELETE, or PATCH.

Server URL*

The target URL for the HTTP request.

Field that holds the request body

Enter the name of the field that includes the request body.

Field where the response will be stored

Enter the name of the field that will store the HTTP response.

HTTP Headers

Optionally, you can enter a map of header key-value pairs to include in the request.

Timeout (seconds)

Enter the timeout for the HTTP request in seconds.

Disable Redirects

Select true to disable HTTP redirects or false to ignore.

Content-Type

Set the request content-type:

  • text/plain - Plain text with no formatting.

  • application/json - Data in JSON format. This is the default value.

  • application-xml - Data in XML format.

  • text/html - Data in HTML format.

Authentication Configuration

Choose the type of authentication for the request.

Parameter
Description

Authentication Type*

Choose between None, Basic, Bearer, or API Key.

Authentication Credentials

Depending on the option you chose above, you must enter the required authentication information in this section:

Parameter
Description

Basic Authentication

Username and Password for basic authentication. For the password, choose one of the secrets defined in your Tenant or create a new one by clicking New secret. Learn more about secrets in this section.

Bearer Token

Token for Bearer authentication. Choose one of the secrets defined in your Tenant or create a new one by clicking New secret. Learn more about secrets in this section.

API Key

Define the API Key Name and API Key for API Key configuration. For the API key, choose one of the secrets defined in your Tenant or create a new one by clicking New secret. Learn more about secrets in this section.

Bulk Configuration

Parameter
Description

Bulk allow*

Set this to true and configure the options below if you want to set bulk sending in your HTTP requests. Otherwise, set it to false.

Store as*

Decide how to store events in your responses. Choose between:

  • Delimited - Events in a batch are stored separated by a delimiter. Set the required delimiter in the option below. The default option is newline (\n).

  • Without Delimiter - Events are concatenated without any separator.

  • JSON Array - Events are structured in a JSON array.
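For illustration (the events below are hypothetical), a batch of two events would be stored as follows in each mode:

Delimited (\n):
{"a": 1}
{"a": 2}

Without Delimiter:
{"a": 1}{"a": 2}

JSON Array:
[{"a": 1}, {"a": 2}]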

Events per batch*

Set the number of individual events per bulk request.

Maximum number of buffers per server URL

Set the maximum number of buffers per server URL. The default value is 25, and the maximum value is 50.

Event time limit

Time in seconds to send the events.

Rate Limiter Configuration

Establish a limit for the number of HTTP requests permitted per second.

Parameter
Description

Number of requests per second

Enter the maximum number of requests that can be sent per second. The minimum is 1.

TLS Configuration

Parameter
Description

Allow TLS configuration*

Set this option to true if you need to configure TLS for the requests sent by this Action. Otherwise, set it to false.

Certificate*

Choose the predefined TLS certificate.

Private Key*

The private key of the corresponding certificate.

CA Chain*

The path containing the CA certificates.

Minimum TLS version*

Minimum TLS version required for incoming connections. The default version is v1.2

Proxy Configuration

If your organization uses proxy servers, set it using these options:

Parameter
Description

URL

Enter the required proxy URL.

Username

Enter the username used in the proxy.

Password

Enter the password used in the proxy.

Retry Configuration

Set how you want to manage retry attempts in case of errors in the requests:

Parameter
Description

Max attempts

Set the maximum number of attempts before returning an error. The minimum value is 1.

Wait between attempts

Choose the milliseconds to wait between attempts in case of an error. The minimum value is 100.

Backoff interval

Define how the wait time should increase between attempts, in seconds. The minimum value is 1.

4

Click Save to complete.

Example

A sample configuration for this Action using API Key authentication, with event field values interpolated into the server URL:

{
    "payloadField": "correlationIDKey",
    "outField": "outputField",
    "serverUrl": "http://localhost:8080/${path_from_event}?${impactKey}=${correlationIDKey}",
    "method": "POST",
    "authentication": {
        "authType": "apiKey",
        "credentials": {
            "apiKey": {
                "apiKeyName": "x-api-key",
                "apiKeyValue": {
                    "id": "apiKey",
                    "value": "ad1dewfwef2321323"
                }
            }
        }
    }
}


Conditional

Most recent version: v1.1.0

See the changelog of this Action type here.

Overview

The Conditional Action evaluates a list of conditions for an event. If an event meets a given condition, it will be sent through an output port specific to that condition. The event will be sent through the default output if it does not meet any conditions.

Set any number of conditions on your data for filtering and alerting.

In order to configure this Action, you must first link it to a Listener or other Action. Go to Building a Pipeline to learn how to link.

AI Action Assistant

This Action has an AI-powered chat feature that can help you configure its parameters. Read more about it in this article.

Ports

These are the input and output ports of this Action:

Input ports
  • Default port - All the events to be processed by this Action enter through this port.

Output ports
  • Default port - Events are sent through this port if no error occurs while processing them.

  • Error port - Events are sent through this port if an error occurs while processing them.

  • Condition port - Each condition you add will have its own port. There is currently a limit of 8 conditions per Action; however, if you link another Conditional Action to the Default port, you can use the events it receives to continue creating more conditions.

Configuration

1

Find Conditional in the Actions tab (under the Filtering group) and drag it onto the canvas. Link it to the required Listener and Data sink.

2

To open the configuration, click the Action in the canvas and select Configuration.

3

Choose how to start adding conditions using the View mode buttons. Select your conditions using the buttons available in the Visual view (default mode), or write them in Code mode.

4

Now, start adding your conditions. Each of the conditions you define will create a new output port in the Action. Give a name to the Port.

5

Choose the Field with the input data you want to use in the condition. This allows you to choose not only the field to filter by, but also the specific Action to take it from, if there are multiple options.

6

Choose a Condition for the filter. The options you see here will differ depending on the data type of the field you have selected:

Condition
Data types
Description

Contains

This condition checks if your input data strings contain certain keywords (either matching the data with another field or entering a specific literal).

In code mode, this condition is represented like this:

  • ${field1} contains ${field2}

  • ${field1} contains "test"

Doesn't contain

This condition checks if your input data strings do not contain certain keywords (either matching the data with another field or entering a specific literal).

In code mode, this condition is represented like this:

  • ${field1} does not contain ${field2}

  • ${field1} does not contain "test"

Equal / Equal to

This condition checks if your input data values are the same as other values (either matching the data with another field or entering a specific literal).

In code mode, this condition is represented like this:

  • ${field1} == ${field2}

  • ${field1} == "test"

  • ${field1} == 5

Not equal / Not equal to

This condition checks if your input data values are not the same as other values (either matching the data with another field or entering a specific literal).

In code mode, this condition is represented like this:

  • ${field1} != ${field2}

  • ${field1} != "test"

  • ${field1} != 5

Is null

This condition checks if your input data values are null.

In code mode, this condition is represented like this:

  • ${field1} is null

Is not null

This condition checks if your input data values are not null.

In code mode, this condition is represented like this:

  • ${field1} is not null

Matches

This condition checks if your input data strings match a given RegEx.

Enter your RegEx in the Regular expression field that appears, or type it directly in the code mode. Click the flag icon in the editor to add additional conditions (you can combine as many as required):

  • multiline - This flag affects the behavior of ^ and $. In multiline mode, matches occur not only at the beginning and the end of the string, but also at the start/end of each line. In code mode, add m at the end of your RegEx to include this condition.

  • insensitive - Add this flag if you want to make the matches case insensitive. In code mode, add i at the end of your RegEx to include this condition.

  • single - Add this flag if you want the dot (.) to also match newline characters (single-line mode). In code mode, add s at the end of your RegEx to include this condition.

  • ungreedy - Add this flag if you want to apply an ungreedy (lazy) matching, that is to say, you want to get as few characters as needed to complete the pattern in a single match. In code mode, add U at the end of your RegEx to include this condition.

In code mode, this condition is represented like this:

  • ${field1} matches `\d{3}`

  • ${field1} matches `\d{3}`i

  • ${field1} matches `\d{3}`misU

Does not match

This condition checks if your input data strings do not match a given RegEx.

Enter your RegEx in the Regular expression field that appears, or type it directly in the code mode. Click the flag icon in the editor to add additional conditions (check their description in the Matches condition above).

In code mode, this condition is represented like this:

  • ${field1} does not match `\d{3}`

  • ${field1} does not match `\d{3}`m

  • ${field1} does not match `\d{3}`misU

Less than

This condition checks if your input data numbers are less than other values (either matching the data with another field or entering a specific literal).

In code mode, this condition is represented like this:

  • ${field1} < ${field2}

  • ${field1} < 5

  • ${field1} < 1.4

Less than or equal to

This condition checks if your input data numbers are less than or equal to other values (either matching the data with another field or entering a specific literal).

In code mode, this condition is represented like this:

  • ${field1} <= ${field2}

  • ${field1} <= 5

  • ${field1} <= 1.4

Greater than

This condition checks if your input data numbers are greater than other values (either matching the data with another field or entering a specific literal).

In code mode, this condition is represented like this:

  • ${field1} > ${field2}

  • ${field1} > 5

  • ${field1} > 1.4

Greater than or equal to

This condition checks if your input data numbers are greater than or equal to other values (either matching the data with another field or entering a specific literal).

In code mode, this condition is represented like this:

  • ${field1} >= ${field2}

  • ${field1} >= 5

  • ${field1} >= 1.4

7

Now you can add AND/OR clauses to your condition, or add a new condition entirely using the Add Condition option. You can add a maximum of 8 conditions/ports.

In code mode, AND/OR clauses are represented like this:

  • ${field1} contains "test" and ${field2} == 10

  • ${field1} contains "test" or ${field2} contains "test"

Only one level of grouping allowed in conditions

When defining conditions through the user interface, only one level of grouping using parentheses is allowed. This means you can use and, or, and parentheses to group expressions, but you cannot nest groups within other groups.

Allowed: (${A} != null and ${A} != "" and (${A} == "x" or ${A} == "y"))

Not allowed: (((${A} == "x" or ${A} == "y") and ${A} != null) or (${B} == "internet"))

In code mode: If you're configuring conditions directly in code mode, you can use multiple levels of grouping without restrictions.

8

Click Save to complete.

Example

Let's say you have data on error and threat detection methods in storage devices and you wish to detect threats and errors using the Cyclic Redundancy Check methods crc8, crc16 and crc24.

1

Conditional

Add a Conditional to the canvas and link it to the Listener or Action that provides your data.

2

Condition 1

  • Field: crc

  • Condition: equals

  • Field: crc8

Any events meeting this condition will exit via this port. Each condition has its own port.

3

Condition 2

  • Field: crc

  • Condition: equals

  • Field: crc16

4

Condition 3

  • Field: crc

  • Condition: equals

  • Field: crc24

5

Output

Now you have a Conditional action with three output ports, crc8, crc16 and crc24, as well as the default and error ports.

6

Conditional 2

Add another conditional to the canvas and enter the following:

  • Field: msg

  • Condition: contains

  • Field: threat

Now when the message contains "threat", an event will be generated and sent via the threat port.


HTTP Pull

Most recent version: v0.0.1

See the changelog of this Listener type here.

Note that this Listener is only available in certain Tenants. Get in touch with us if you don't see it and want to access it.

Overview

Onum supports integration with HTTP Pull. Select HTTP Pull from the list of Listener types and click Configuration to start.

Configuration

Now you need to specify how and where to collect the data and how to establish an HTTP connection.

Metadata

Enter the basic information for the new Listener.

Parameter
Description

Name*

Enter a name for the new Listener.

Description

Optionally, enter a description for the Listener.

Tags

Add tags to easily identify your Listener. Hit the Enter key after you define each tag.

Configuration

Cloud Listeners

Note that you won't see the Socket and TLS configuration sections in the creation form if you're defining this Listener in a Cloud instance, as these are already provided by Onum. Learn more about Cloud Listeners in this article.

Parameters

Parameter
Description

Name

Enter the name of the parameter to search for in the YAML below, used later as ${parameters.name} e.g. ${parameters.domain}.

Value

Enter the value or variable to fill in when the given parameter name is found, e.g. “domain.com”. With the name set as “domain” and the value set as “domain.com”, every occurrence of ${parameters.domain} in the YAML will be automatically replaced by that value. Add as many name and value pairs as required.
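For instance, with a parameter named domain whose value is domain.com, a request URL in the YAML could be written as follows (illustrative):

url: "https://${parameters.domain}/api/v1/events"

At runtime this resolves to https://domain.com/api/v1/events.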

Secrets

Parameter
Description

Name

Enter the name of the parameter to search for in the YAML below, used later as ${secrets.name}.

Value

Select the Secret containing the connection credentials if you have added them previously, or select New Secret to add it. This will add this value as the variable when the field name is found in the YAML. Add as many as required.

Config as YAML

Toggle ON to configure the HTTP as a YAML and paste it here.

The system supports interpolated variables throughout the HTTP request building process using the syntax:

${prefix.name}

Each building block may:

  • Use variables depending on its role (e.g., parameters, secrets, pagination state).

  • Expose variables for later phases (e.g., pagination counters, temporal window bounds).

Not all variable types are available in every phase. Each block has access to a specific subset of variables.

Variables can be defined in the configuration or generated dynamically during execution. Each variable has a prefix that determines its source and scope.

Supported Prefixes:

  • Parameters - User-defined values configured manually. Available in all phases.

  • Secrets - Sensitive values such as credentials or tokens. Available in all phases.

  • temporalWindow - Automatically generated from the Temporal Window block. Available in the Enumeration and Collection phases.

  • Pagination - Values produced by the pagination mechanism (e.g., offset, cursor). Available in the Enumeration and Collection phases.

  • Inputs - Values derived from the output of the Enumeration phase. Available only in the Collection phase.

Temporal Window

Toggle ON to add a temporal window for events. This repeatedly shifts the time window over which data is collected.

Parameter
Description

Duration*

Add the duration in milliseconds that the window will remain open for.

Offset*

How far behind the current time the window should end (e.g., 5m behind "now").

Time Zone*

This value is usually automatically set to your current time zone. If not, select it here.

Format*

Choose between Epoch or RFC3339 for the timestamp format.
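In YAML form, a temporal window matching these parameters looks like the one used in the Cortex XDR Multi Alerts configuration shown elsewhere in this documentation:

withTemporalWindow: true
temporalWindow:
  duration: 5m
  offset: 5m
  tz: UTC
  format: Epoch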

Authentication

If your connection requires authentication, enter the credentials here.

Parameter
Description

Authentication Type*

Choose the authentication type and enter the details.

Authentication credentials

The options provided will vary depending on the type chosen to authenticate your API. This must match the type configured on the API side, so it can recognize the request.

Choose between the options below.

Basic
  • Username* - the user sending the request.

  • Password* - the password, e.g. ${secrets.password}

API Key

Enter the following:

  • API Key - API keys are usually stored in developer portals, cloud dashboards, or authentication settings. Set it as a secret, e.g. ${secrets.api_key}

  • Auth injection:

    • In* - Enter the incoming format of the API: Header or Query.

    • Name* - A label assigned to the API key for identification. You can find it depending on where the API key was created.

    • Prefix - Enter a connection prefix if required.

    • Suffix - Enter a connection suffix if required.

Token

Token Retrieve Based Authentication

  • Request -

    • Method* - Choose between GET or POST

    • URL*- Enter the URL to send the request to.

  • Headers - Add as many headers as required.

    • Name

    • Value

  • Query Params - Add as many query parameters as required.

    • Name

    • Value

  • Token Path* - Enter the token path used to retrieve the authentication token from the response.

  • Auth injection:

    • In* - Enter the incoming format of the API: Header or Query.

    • Name* - A label assigned to the API key for identification. You can find it depending on where the API key was created.

    • Prefix - Enter a connection prefix if required.

    • Suffix - Enter a connection suffix if required.

Enumeration Phase

Identify the available endpoints, methods, parameters, and resources exposed by the API. This performs initial data discovery to feed the collection phase and makes the results available to the Collection Phase via variable interpolation (inputs.*).

Can use:

  • ${parameters.xxx}

  • ${secrets.xxx}

  • ${temporalWindow.xxx} (if configured)

  • ${pagination.xxx} - pagination variables

Parameter
Description

Pagination Type*

Select one from the drop-down. Pagination type is the method used to split and deliver large datasets in smaller, manageable parts (pages), and how those pages can be navigated during discovery.

Each pagination method manages its own state and exposes specific variables that can be interpolated in request definitions (e.g., URL, headers, query params, body).

None

  • Description: No pagination; only a single request is issued.

  • Exposed Variables: None

PageNumber/PageSize

  • Description: Pages are indexed using a page number and fixed size.

  • Configuration:

    • pageSize: page size

  • Exposed Variables:

    • ${pagination.pageNumber}

    • ${pagination.pageSize}

Offset/Limit

  • Description: Uses offset and limit to fetch pages of data.

  • Configuration:

    • Limit: max quantity of records per request

  • Exposed Variables:

    • ${pagination.offset}

    • ${pagination.limit}

From/To

  • Description: Performs pagination by increasing a window using from and to values.

  • Configuration: limit: max quantity of records per request

  • Exposed Variables:

    • ${pagination.from}

    • ${pagination.to}

Web Linking (RFC 5988)

  • Description: Parses the Link header to find the rel="next" URL.

  • Exposed Variables: None

Next Link at Response Header

  • Description: Follows a link found in a response header.

  • Configuration:

    • headerName: header name that contains the next link

  • Exposed Variables: None

Next Link at Response Body

  • Description: Follows a link found in the response body.

  • Configuration:

    • nextLinkSelector: path to next link sent in response payload

  • Exposed Variables: None

Cursor

  • Description: Extracts a cursor value from each response to request the next page.

  • Configuration:

    • cursorSelector: path to the cursor sent in response payload

  • Exposed Variables:

    • ${pagination.cursor}
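
To make two of these styles more concrete, here is a minimal Python sketch (not Onum code) of PageNumber/PageSize and Cursor pagination. The endpoint, the page, pageSize and cursor parameter names, and the next_cursor selector are hypothetical.

```python
# Minimal sketch of two of the pagination styles described above.
import requests

def fetch_page_number(url, page_size=100, max_pages=10):
    """PageNumber/PageSize: maps to ${pagination.pageNumber} / ${pagination.pageSize}."""
    for page in range(1, max_pages + 1):
        body = requests.get(url, params={"page": page, "pageSize": page_size}).json()
        items = body.get("data", [])
        if not items:                      # stop when a page comes back empty
            break
        yield from items

def fetch_cursor(url, cursor_selector="next_cursor", max_pages=10):
    """Cursor: each response carries the cursor for the next request (${pagination.cursor})."""
    cursor = None
    for _ in range(max_pages):
        params = {"cursor": cursor} if cursor else {}
        body = requests.get(url, params=params).json()
        yield from body.get("data", [])
        cursor = body.get(cursor_selector)
        if not cursor:                     # no cursor means the last page was reached
            break
```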

Output

Parameter
Description

Select*

A JSON selector expression to pick a part of the response, e.g. '.data'.

Filter

A JSON expression to filter the selected elements. Example: '.films | index("Tangled")'.

Map

A JSON expression to transform each selected element into a new event. Example: '{characterName: .name}'.

Output Mode*

Choose between

  • Element: emits each transformed element individually as an event.

  • Collection: emits all transformed items as a single array/collection as an event.
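
The sketch below emulates, in plain Python, how Select, Filter, Map, and Output Mode shape the emitted events for a hypothetical response payload. In the action itself these are JSON expressions such as '.data'; the emulation only shows the order in which they apply.

```python
# Minimal sketch of Select -> Filter -> Map -> Output Mode on a hypothetical payload.
response = {"data": [{"name": "Rapunzel", "films": ["Tangled"]},
                     {"name": "Ariel", "films": ["The Little Mermaid"]}]}

selected = response["data"]                                    # Select: '.data'
filtered = [e for e in selected if "Tangled" in e["films"]]    # Filter: keep matching elements
mapped   = [{"characterName": e["name"]} for e in filtered]    # Map: '{characterName: .name}'

output_mode = "Element"
if output_mode == "Element":
    events = mapped        # one event per transformed element
else:                      # "Collection"
    events = [mapped]      # a single event containing the whole array

print(events)              # [{'characterName': 'Rapunzel'}]
```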

Collection Phase*

The collection phase is mandatory. This is where the final data retrieval happens (either directly or using IDs/resources generated by an enumeration phase).

The collection phase involves gathering actual data from an API after the enumeration phase has mapped out endpoints, parameters, and authentication methods. It supports dynamic variable resolution via the variable resolver and can use data exported from the Enumeration Phase, such as:

  • ${parameters.xxx}

  • ${secrets.xxx}

  • ${temporalWindow.xxx}

  • ${inputs.xxx} (from Enumeration Phase)

  • ${pagination.xxx} (pagination variables)

Inputs

In collection phases, you can define variables to be used elsewhere in the configuration (for example, in URLs, query parameters, or request bodies). Each variable definition has the following fields:

Parameter
Description

Name

The variable name (used later as ${inputs.name} in the configuration).

Source

Usually "input", indicating the value comes from the enumeration phase’s output.

Expression

A JSON expression applied to the input to extract or transform the needed value.

Format

Controls how the variable is converted to a string (see Variable Formatting below), e.g. json.
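
For instance, assuming the enumeration phase emitted events with an id field, the sketch below (not Onum code) shows how a variable named deviceId could be derived from that output and interpolated into collection URLs as ${inputs.deviceId}. All names and the endpoint are hypothetical.

```python
# Minimal sketch: enumeration output -> ${inputs.deviceId} -> collection requests.
enumeration_output = [{"id": "dev-001"}, {"id": "dev-002"}]    # events from the Enumeration Phase

# Input definition: Name=deviceId, Source=input, Expression='.id'
inputs = [{"deviceId": event["id"]} for event in enumeration_output]

url_template = "https://api.example.com/v1/devices/${inputs.deviceId}/logs"
for variables in inputs:
    url = url_template.replace("${inputs.deviceId}", variables["deviceId"])
    print(url)    # one collection request per enumerated device
```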

Parameter
Description

Pagination Type*

Choose how the API organizes and delivers large sets of data across multiple pages, and how those pages are traversed when systematically collecting all available records.

Output

Parameter
Description

Select*

A JSON selector expression to pick a part of the response, e.g. '.data'.

Filter

A JSON expression to filter the selected elements. Example: '.films | index("Tangled")'.

Map

A JSON expression to transform each selected element into a new event. Example: '{characterName: .name}'.

Output Mode*

Choose between

  • Element: emits each transformed element individually as an event.

  • Collection: emits all transformed items as a single array/collection as an event.

Click Create labels to move on to the next step and define the required Labels if needed.

Ports

The HTTP Pull action has two output ports:

  • Default port - Events are sent through this port if no error occurs while processing them.

  • Error port - Events are sent through this port if an error occurs while processing them.

The error message is provided in a free-text format and may change over time. Please consider this if performing any post-processing based on the message content.

Integrations

HTTP Puller integrations are parameterized and organized by vendor, then categorized by product/API.

Inside each endpoint, you will find a YAML configuration. Use this configuration in the Onum HTTP Puller action to start feeding that information into the platform. Check the articles under this section to learn more.


Field Transformation Operations

A comprehensive list of the operations available in the Field Transformation Action.

Operation
Description
Example

Converts a size in bytes to a human-readable string.

  • Input data - 134367

  • Output data - 131.22 KiB

Converts values from one unit of measurement to another.

  • Input data - 5000

  • Input units - Square foot (sq ft)

  • Output units - Square metre (sq m)

  • Output data - 464.515215

Converts a unit of data to another format.

  • Input data - 2

  • Input units - Megabits (Mb)

  • Output units - Kilobytes (KB)

  • Output data - 250

Converts values from one unit of length to another.

  • Input data - 100

  • Input units - Metres (m)

  • Output units - Yards (yd)

  • Output data - 109.3613298

Converts values from one unit of mass to another.

  • Input data - 100

  • Input units - Kilogram (kg)

  • Output units - Pound (lb)

  • Output data - 220.4622622

Converts values from one unit of speed to another.

  • Input data - 200

  • Input units - Kilometres per hour (km/h)

  • Output units - Miles per hour (mph)

  • Output data - 124.2841804

Counts the number of times a given string occurs in your input data.

  • Input data - This is a sample test

  • Search - test

  • Search Type - simple

  • Output data - 1

Calculates an 8-bit Cyclic Redundancy Check (CRC) value for a given input.

  • Input data - hello 1234

  • Output data - C7

Calculates a 16-bit Cyclic Redundancy Check (CRC) value for a given input.

  • Input data - hello 1234

  • Output data - 57D4

Calculates a 24-bit Cyclic Redundancy Check (CRC) value for a given input.

  • Input data - hello 1234

  • Output data - 3B6473

Calculates a 32-bit Cyclic Redundancy Check (CRC) value for a given input.

  • Input data - hello 1234

  • Output data - 7ED8D648

Obfuscates all digits of a credit card number except for the last 4 digits.

  • Input data - 1111222233334444

  • Output data - ************4444

Converts a CSV file to JSON format.

  • Input data -

First name,Last name,Age,City
John,Wick,20,New-York
Tony,Stark,30,Madrid

  • Cell delimiter - ,

  • Format - Array of dictionaries

  • Output data -

[ { "First name": "John", "Last name": "Wick", "Age": "20", "City": "New-York" }, { "First name": "Tony", "Last name": "Stark", "Age": "30", "City": "Madrid" } ]

Defangs an IP address to prevent it from being recognized.

  • Input data - 192.168.1.1

  • Output data - 192[.]168[.]1[.]1

Defangs a URL to prevent it from being recognized as a clickable link.

  • Input data - https://example.com

  • Escape Dots - true

  • Escape HTTP - true

  • Escape ://* - false

  • Process Type - Everything

  • Output data - hxxps://example[.]com

Divides a list of numbers provided in the input string, separated by a specific delimiter.

  • Input data - 26:2:4

  • Delimiter - Colon

  • Output data - 3.25

Escapes specific characters in a string.

  • Input data - She said, "Hello, world!"

  • Escape Level - Special chars

  • Escape Quote - Double

  • JSON compatible - false

  • Output data - She said, \"Hello, world!\"

Extracts all the IPv4 and IPv6 addresses from a block of text or data.

  • Input data -

User logged in from 192.168.1.1. Another login detected from 10.0.0.5.

  • Output data -

192.168.1.1,10.0.0.5

Makes defanged IP addresses valid.

  • Input data - 192[.]168[.]1[.]1

  • Output data - 192.168.1.1

Makes defanged URLs valid.

  • Input data - hxxps://example[.]com

  • Escape Dots - true

  • Escape HTTP - true

  • Escape ://* - false

  • Process Type - Everything

  • Output data - https://example.com

Splits the input string using a specified delimiter and filters the resulting parts with a regular expression.

  • Input data -

Error: File not found
Warning: Low memory
Info: Operation completed
Error: Disk full

  • Delimiter - Line feed

  • Regex - ^Error

  • Invert - false

  • Output data -

Error: File not found
Error: Disk full

Finds values in a string and replaces them with others.

  • Input data - The server encountered an error while processing your request.

  • Substring to find - error

  • Replacement - issue

  • Global Match - true

  • Case Insensitive - true

  • Multiline - false

  • Dot Matches All - false

  • Output data - The server encountered an issue while processing your request.

Transforms a float into a string using a Go format string.

  • Input data - 5.218

  • Format - %.1f

  • Output data - 5.2

Converts a number from a specified base (or radix) into its decimal form.

  • Input data - 100

  • Radix (Base) - 2

  • Output data - 4

Decodes data from a Base32 string back into its raw format.

  • Input data - NBSWY3DP,!

  • Alphabet - RFC4648 (Standard)

  • Remove non-original chars - true

  • Output data - hello

Decodes data from a Base64 string back into its raw format.

  • Input data - SGVsbG8sIE9udW0h

  • Strict Mode - true

  • Output data - Hello, Onum!

Decodes a binary string into plain text.

  • Input data - 01001000 01101001

  • Delimiter - Space

  • Byte Length - 8

  • Output data - Hi

Converts hexadecimal-encoded data back into its original form.

  • Input data - 48 65 6c 6c 6f 20 57 6f 72 6c 64

  • Delimiter - Space

  • Output data - Hello World

Converts a timestamp into a human-readable date string.

  • Input data - 978346800

  • Time Unit - Seconds

  • Timezone Output - UTC

  • Format Output - Mon 2 January 2006 15:04:05 UTC

  • Output data - Mon 1 January 2001 11:00:00 UTC
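
For a quick sanity check of this example, the same conversion can be reproduced with Python's standard library; this is only an illustration, not the operation's implementation.

```python
# 978346800 seconds since the Unix epoch is Monday, 1 January 2001, 11:00:00 UTC.
from datetime import datetime, timezone

dt = datetime.fromtimestamp(978346800, tz=timezone.utc)
print(dt)                 # 2001-01-01 11:00:00+00:00
print(dt.strftime("%A"))  # Monday
```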

Extracts a specific element from a list of boolean values.

  • Input data - true, false, true, false

  • Index - 1

  • Output data - false

Extracts a specific element from a list of float values.

  • Input data - 0.0, -1.0, 2.0

  • Index - 1

  • Output data - -1.0

Extracts a specific element from a list of integer values.

  • Input data - 0, 1, 2, 3

  • Index - 1

  • Output data - 1

Extracts a specific element from a list of strings.

  • Input data - test0, test1, test2

  • Index - 1

  • Output data - test1

Extracts a specific element from a list of timestamps.

  • Input data - 1654021200, 1700000000, 1750000000

  • Index - 1

  • Output data - 1700000000

Converts an IP address (either IPv4 or IPv6) to its hexadecimal representation.

  • Input data - 192.168.1.1

  • Output data - c0a80101

Reduces the size of a JSON file by removing unnecessary characters from it.

  • Input data -

{ "name": "John Doe", "age": 30, "isActive": true, "address": { "city": "New York", "zip": "10001" } }

  • Output data -

{"name":"John Doe","age":30,"isActive":true,"address":{"city":"New York","zip":"10001"}}

Converts a JSON file to CSV format.

  • Input data -

[ { "First name": "John", "Last name": "Wick", "Age": "20", "City": "New-York" }, { "First name": "Tony", "Last name": "Stark", "Age": "30", "City": "Madrid" } ]

  • Cell delimiter - ,

  • Row delimiter - \n

  • Output data -

First name,Last name,Age,City
John,Wick,20,New-York
Tony,Stark,30,Madrid

Decodes the payload in a JSON Web Token string.

  • Input data - eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ.SflKxwRJSMeKKF2QT4fwpMeJf36POk6yJV_adQssw5c

  • Output data - {"sub":"1234567890","name":"John Doe","iat":1516239022}

Generates a Keccak cryptographic hash function from a given input.

  • Input data - Hello World!

  • Size - 256

  • Output data - 3ea2f1d0abf3fc66cf29eebb70cbd4e7fe762ef8a09bcc06c8edf641230afec0

Returns the number of Unicode characters in your input strings.

  • Input data - hello world!

  • Output data - 12

Converts a list of comma-separated values into a string of values divided by a specific separator.

  • Input data - hello,my,world

  • Separator - /

  • Output data - hello/my/world

Produces a MD2 hash string from a given input.

  • Input data - Hello World!

  • Output data - 315f7c67223f01fb7cab4b95100e872e

Produces a MD4 hash string from a given input.

  • Input data - Hello World!

  • Output data - b2a5cc34fc21a764ae2fad94d56fadf6

Produces a MD5 hash string from a given input.

  • Input data - Hello World!

  • Output data - ed076287532e86365e841e92bfc50d8c

Calculates the median of given values, separated by a specific delimiter.

  • Input data - 10, 5, 20, 15, 25

  • Delimiter - ,

  • Output data - 15

Calculates the result of the multiplication of given values, separated by a specific delimiter.

  • Input data - 2, 3, 5

  • Delimiter - ,

  • Output data - 30

Pads each input line with a specified number of characters.

  • Input data - Apple Banana Cherry

  • Pad position - Start

  • Pad line - 7

  • Character - >>>

  • Output data -

>>> >>>Apple >>> >>>Banana >>> >>>Cherry

Parses a string and returns an integer of the specified base.

  • Input data - 100

  • Base - 2

  • Output data - 4

Takes Unix file permission strings and converts them between textual and octal representations.

  • Input data - -rwxr-xr--

  • Output data -

Textual representation: -rwxr-xr--
Octal representation: 0754

+---------+-------+-------+-------+
|         | User  | Group | Other |
+---------+-------+-------+-------+
| Read    |   X   |   X   |   X   |
+---------+-------+-------+-------+
| Write   |   X   |       |       |
+---------+-------+-------+-------+
| Execute |   X   |   X   |       |
+---------+-------+-------+-------+

Analyzes a URI into its individual components.

  • Input data -

https://user:pass@example.com:8080/path/to/resource?key=value#fragment

  • Output data -

Scheme: https
Host: example.com:8080
Path: /path/to/resource
Arguments: map[key:[value]]
User: user
Password: pass
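
The same breakdown can be reproduced with Python's standard URL parser; this sketch is only an illustration of the components involved, not the operation's implementation.

```python
# Parse the example URI into its individual components.
from urllib.parse import urlsplit, parse_qs

parts = urlsplit("https://user:pass@example.com:8080/path/to/resource?key=value#fragment")
print(parts.scheme)                    # https
print(parts.hostname, parts.port)      # example.com 8080
print(parts.path)                      # /path/to/resource
print(parse_qs(parts.query))           # {'key': ['value']}
print(parts.username, parts.password)  # user pass
print(parts.fragment)                  # fragment
```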

Converts events in Protobuf (Protocol Buffers) format into JSON.

  • Input data

  • Proto file

  • Message type

  • Output data

Extracts or manipulates parts of your input strings that match a specific regular expression pattern.

  • Input data - 100

  • Base - 2

  • Output data -4

Removes whitespace and other characters from a string.

  • Input data -

Hello World!

This is a test.

  • Spaces - true

  • Carriage returns - false

  • Line feeds - true

  • Tabs - false

  • Form feeds - false

  • Full stops - true

  • Output data -

HelloWorld!Thisisatest

Reverses the order of the characters in a string.

  • Input data - Hello World!

  • Reverse mode - Character

  • Output data - !dlroW olleH

Returns the SHA0 hash of a given string.

  • Input data - Hello World!

  • Output data - 1261178ff9a732aacfece0d8b8bd113255a57960

Returns the SHA1 hash of a given string.

  • Input data - Hello World!

  • Output data - 2ef7bde608ce5404e97d5f042f95f89f1c232871

Returns the SHA2 hash of a given string.

  • Input data - Hello World!

  • Size - 512

  • Output data - f4d54d32e3523357ff023903eaba2721e8c8cfc7702663782cb3e52faf2c56c002cc3096b5f2b6df870be665d0040e9963590eb02d03d166e52999cd1c430db1

Returns the SHA3 hash of a given string.

  • Input data - Hello World!

  • Size - 512

  • Output data - 32400b5e89822de254e8d5d94252c52bdcb27a3562ca593e980364d9848b8041b98eabe16c1a6797484941d2376864a1b0e248b0f7af8b1555a778c336a5bf48

Returns the SHAKE hash of a given string.

  • Input data - Hello World!

  • Capacity - 256

  • Size - 512

  • Output data - 35259d2903a1303d3115c669e2008510fc79acb50679b727ccb567cc3f786de3553052e47d4dd715cc705ce212a92908f4df9e653fa3653e8a7855724d366137

Shuffles the characters of a given string.

  • Input data - Hello, World!

  • Delimiter - Comma

  • Output data - eollH,ro!ld W

Returns the SM3 cryptographic hash function of a given string.

  • Input data - Hello World!

  • Length - 64

  • Output data - 0ac0a9fef0d212aa

Sorts a list of strings separated by a specified delimiter according to the provided sorting order.

  • Input data - banana,apple,orange,grape

  • Delimiter - Comma

  • Order - Alphabetical (case sensitive)

  • Reverse - false

  • Output data - apple,banana,grape,orange

Converts a string composed of values separated by a specific separator into a list of comma-separated values.

  • Input data - hello/my/world

  • Separator - /

  • Output data - hello,my,world

Extracts characters from a given string.

  • Input data - +34678987678

  • Start Index - 3

  • Length - 9

  • Output data - 678987678

Calculates the result of the subtraction of given values, separated by a specific delimiter.

  • Input data - 10, 5, 2

  • Delimiter - Comma

  • Output data - 3

Calculates the sum of given values, separated by a specific delimiter.

  • Input data - 10, 5, 2

  • Delimiter - Comma

  • Output data - 17

Swaps the case of a given string.

  • Input data - Hello World!

  • Output data - hELLO wORLD!

Converts a number into its representation in a specified numeric base (or radix).

  • Input data - 100

  • Radix (Base) - 2

  • Output data - 1100100

Encodes raw data into a Base32 string.

  • Input data - hello

  • Standard - standard

  • Output data - NBSWY3DP

Encodes raw data into an ASCII Base64 string.

  • Input data - Hello, Onum!

  • Output data - SGVsbG8sIE9udW0h

Converts a text string into its binary representation.

  • Input data - Hello

  • Delimiter - Comma

  • Byte Length - 8

  • Output data - 01001000,01100101,01101100,01101100,01101111

Converts a text string into its ordinal integer decimal representation.

  • Input data - Hello

  • Delimiter - Comma

  • Support signed values - false

  • Output data - 72,101,108,108,111

Converts a string to its corresponding hexadecimal code.

  • Input data - Hello World

  • Delimiter - Space

  • Bytes per line - 0

  • Output data - 48 65 6c 6c 6f 20 57 6f 72 6c 64

Converts the characters of a string to lower case.

  • Input data - Hello World!

  • Output data - hello world!

Transforms a string representing a date into a timestamp.

  • Input data - 2006-01-02

  • Format - DateOnly

  • Output data - 2006-01-02T00:00:00Z

Parses a datetime string in UTC and returns the corresponding Unix timestamp.

  • Input data - 2006-01-02 15:04:05

  • Unit - Seconds

  • Output data - 1136214245

Converts the characters of a string to upper case.

  • Input data - Hello World!

  • Output data - HELLO WORLD!

Converts a date and time from one format to another.

  • Input data - 05-20-2023 10:10:45

  • Input Format - 01-02-2006 15:04:05

  • Input Timezone - UTC+1

  • Output Format - Mon, 2 Jan 2006 15:04:05 +0000

  • Output Timezone - UTC+1

  • Output data - Sat, 20 May 2023 10:10:45 +0000

Removes escape characters from a given string.

  • Input data - She said, \"Hello, world!\"

  • Output data - She said, "Hello, world!"

Decodes a URL and returns its corresponding URL-decoded string.

  • Input data - https%3A%2F%2Fexample.com%2Fsearch%3Fq%3DHello+World%21

  • Output data - https://example.com/search?q=Hello World!

Encodes a URL-decoded string back to its original URL format.

  • Input data - https://example.com/search?q=Hello World!

  • Output data - https://example.com/search?q=Hello%20World!
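
Both URL examples above can be reproduced with Python's standard library; the safe character set below is chosen only so the output mirrors the example.

```python
# URL Decode and URL Encode, reproducing the two examples above.
from urllib.parse import quote, unquote_plus

encoded = "https%3A%2F%2Fexample.com%2Fsearch%3Fq%3DHello+World%21"
print(unquote_plus(encoded))            # https://example.com/search?q=Hello World!

decoded = "https://example.com/search?q=Hello World!"
print(quote(decoded, safe="/:?=!"))     # https://example.com/search?q=Hello%20World!
```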

Byte to Human Readable
Convert Area
Convert Data Units
Convert Distance
Convert Mass
Convert Speed
Count Occurrences
CRC8 Checksum
CRC16 Checksum
CRC24 Checksum
CRC32 Checksum
Credit Card Obfuscator
CSV to JSON
Defang IP Address
Defang URL
Divide Operation
Escape String
Extract IP Address
Fang IP Address
Fang URLs
Filter
Find and Replace
Float to String
From Base
From Base32
From Base64
From Binary
From Hex
From UNIX Timestamp
Index list boolean
Index list float
Index list integer
Index list string
Index list timestamp
IP to hexadecimal
JSON Minify
JSON to CSV
JWT Decode
Keccak
Length
List to String
MD2
MD4
MD5
Median
Multiply Operation
Pad Lines
Parse Int
Parse Unix File Permissions
Parse URI
Protobuf to JSON
Regex
Remove Whitespace
Reverse String
SHA0
SHA1
SHA2
SHA3
Shake
Shuffle
SM3
Sort
String to List
Substring
Subtract Operation
Sum Operation
Swap Case
To Base
To Base32
To Base64
To Binary
To Decimal
To Hex
To Lower Case
To Timestamp
To Unix Timestamp
To Upper Case
Translate Datetime Format
Unescape String
URL Decode
URL Encode