
v1.692.0


Welcome

Falcon Onum helps security and IT leaders focus on the most important data. Gain control of your data by cutting through the noise for deep insights in real-time.

Quick links

  • Getting Started with Falcon Onum

  • Understanding The Essentials

  • Pipelines


Deployment

Falcon Onum installation process

Overview

Once you’ve obtained an Onum account, just a few steps are needed to complete the installation, depending on the type of deployment you require. Onum supports flexible deployment options, including both on-premises and cloud environments.

If you have any questions regarding the deployment and installation process, please contact us.

Collect data from AWS products

Supported browsers

Onum supports the following browsers:

  • Google Chrome

Cloud Deployment

For cloud-based installations, either our Customer Success team or a partner will access Onum's internal tenant manager and create the new account. All the necessary infrastructure will be set up based on estimated usage metrics.

The deployment process is fully automated, ensuring quick and streamlined provisioning and configuration.

Cloud Listeners

Note that the Listener configuration process is slightly different if you are using a Cloud deployment. Learn more about Cloud Listeners in this article.

On-Premises Deployment

In on-premises deployments, either our Customer Success team or a partner will set up the new account. Appropriate access permissions are granted to allow Onum to perform the installation.

A validation script is run to confirm all prerequisites are met and connectivity is established, ensuring a smooth installation process. Once installed, you can access your tenant, start ingesting data, invite users, and take full advantage of Onum’s capabilities.

Dependencies:

  • Docker

  • Packages:

    • gpg

    • curl

    • ipvsadm

    • ca-certificates

  • SIEM access

  • Access to sources

Hardware requirements

Hardware (per Virtual Machine):

  • Distribution: Linux (Debian or Red Hat)

  • Server Hardware: 16 GB RAM and 8 CPUs

  • Disk Storage: 500 GB

Access

For upcoming system maintenance, we kindly request permission to access the customer infrastructure so we can ensure seamless operations and address any potential issues promptly.

Listeners

Learn about how to set up and use Listeners

Pipelines

Discover Pipelines to manage and customize your data

Data Sinks

Add the final piece of the puzzle for simpler data

Understanding The Essentials

Get to grips with the important concepts & best practices of the Falcon Onum application.

These articles contain information on functionalities across the entire platform.

Getting Started with Falcon Onum

Welcome to Falcon Onum! This guide will help you start working with Onum, a powerful tool designed to enhance your data analysis and processing capabilities.

Accessing Onum

Once you get your Onum credentials, you only have to go to console.onum.com and enter them to access your Tenant.

A Tenant is a domain that contains a set of data in your organization. You can use one or various Tenants and grant access to as many as required. Learn more about working with Tenants in this article.

Logging in

Once in console.onum.com, there are several ways to log in:

  • Log in with email address and password. Your password must be a minimum of 10 characters and include a combination of uppercase letters, lowercase letters, numbers, and symbols.

  • Two-factor authentication

  • Single Sign-On (SSO) with SAML

  • Single Sign-On (SSO) with OpenID

Learn more about the different authentication types in this section.

An inactive session will be automatically logged out after one hour.
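The password policy above (minimum 10 characters with uppercase letters, lowercase letters, numbers, and symbols) can be expressed as a small check. A sketch, not Onum's actual validation code:

```python
import string

def password_meets_policy(password: str) -> bool:
    # Minimum 10 characters, with at least one uppercase letter,
    # one lowercase letter, one digit, and one symbol
    return (
        len(password) >= 10
        and any(c.isupper() for c in password)
        and any(c.islower() for c in password)
        and any(c.isdigit() for c in password)
        and any(c in string.punctuation for c in password)
    )
```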

Navigating the Interface

When you access the Onum app, you'll see the Home page, where you can see an overview of the activity in your Tenant.

You can access the rest of the areas in Onum using the left panel.

Create Your First Listener

Onum receives any data through Listeners.

These are logical entities created within a Distributor, acting as the gateway to the Onum system. Configuring a Listener involves defining an IP address, a listening port, and a transport layer protocol, along with additional settings that depend on the type of data the Listener is specialized to receive.
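Conceptually, a Listener's core settings boil down to a small record like the following (illustrative field names, not Onum's actual configuration schema):

```python
from dataclasses import dataclass

@dataclass
class ListenerConfig:
    name: str
    ip_address: str   # address the Listener binds to
    port: int         # listening port
    protocol: str     # transport layer protocol: "tcp" or "udp"

# A hypothetical syslog-style Listener bound to all interfaces
cfg = ListenerConfig(name="fw-syslog", ip_address="0.0.0.0", port=514, protocol="udp")
```

Type-specific Listeners add further settings on top of this common core.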

Access the Listeners area to start working with them. Learn how to create your first Listener in this article.

Create Your First Data Sink

Onum outputs data via Data sinks. Use them to define where and how to forward the results of your streamlined data.

Access the Data sinks area to start working with them. Learn how to create your first Data sink in this article.

Build Your First Pipeline

Use Pipelines to start transforming your data and build a data flow. Pipelines are made of the following components: Listeners, Actions, and Data sinks.

Learn more about Pipelines in this section.
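Conceptually, a Pipeline chains Actions between a Listener and a Data sink. A minimal sketch (not Onum's engine; Actions here are plain functions, and returning None drops an event):

```python
def run_pipeline(events, actions, sink):
    # Each event flows through the chain of Actions in order;
    # an Action returning None filters the event out of the stream.
    for event in events:
        for action in actions:
            event = action(event)
            if event is None:
                break
        else:
            sink(event)  # only events that survive every Action reach the sink

out = []
run_pipeline(
    events=["ok:1", "drop:2", "ok:3"],
    actions=[
        lambda e: e if e.startswith("ok") else None,  # a filter Action
        lambda e: e.upper(),                          # a transform Action
    ],
    sink=out.append,                                  # a stand-in Data sink
)
# out is now ["OK:1", "OK:3"]
```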

Use cases

Do you want to check the essential steps in Onum through specific Pipelines? Explore the most common use cases in this section.

Cards and Table Views

Viewing and modifying elements in the table.

Overview

In the Listeners, Pipelines, and Data sinks areas, you can view all the resources in your Tenant as cards or in a table.

In both views, you can:

  • Click the magnifying glass icon to look for specific elements in the list. You can search by name, status, or tag.

  • Display all the elements individually in a list or grouped by type. The grouping options vary depending on the area you are in.

Table View

In the Table view, you can click the cog icon to customize the table settings. You can reorder the columns, hide or display them, or pin them.

Changes will be automatically applied. Click the Reset button to recover the original configuration.

  • Click a row to open the details window, or double-click it to access the element and edit it.

  • Use the buttons at the top right part of the table to expand or collapse each row in the table. This will change the level of detail of each element.

  • Click the ellipsis button on each row to edit the element, copy its ID, or remove it.

Cards View

In this view, each element is displayed as a card that shows details about it.

  • Click a card to open the details window, or double-click it to access the element and edit it.

  • Click the ellipsis button on each card to edit the element, copy its ID, or remove it.

  • Click the Add tag button and add the required tags to an element. For each tag you enter in the box, hit the Enter key. Click Save to add the tags.

About Falcon Onum

Observability & Orchestration in real time. Any format. Any source.

Overview

The exponential growth of data ingestion volumes can lead to reduced performance, slow response times, and increased costs. With this comes the need to implement optimization strategies & volume reduction control. We help you cut the noise of large data streams and reduce infrastructure by up to 80%.

Gain deep insights from any type of data, using any format, from any source.

All of this...

@ the Edge

By collecting and observing that data at the edge, as close as possible to where it’s being generated, gain real-time observations and take decisive action to prevent network downtime, payment system failures, malware infections, and more.

Unlike most tools that provide data observation and orchestration, Onum is not a data analytics space, which is already served well by security information and event management (SIEM) vendors and other analytics tools. Instead, Onum sits as close as possible to where the data is generated, and well in front of your analytics platforms, to collect and observe data across every aspect of your hybrid network.

Start with the basics

Architecture

Designed for the Edge, created in the Cloud

Easy, flexible deployment in any environment, with components kept as close as possible to where the data is produced, delivers unparalleled speed and efficiency, enabling you to cut the infrastructure dedicated to orchestration by up to 80%.

The Onum infrastructure consists of:

  • Distributor: the service that hosts Listeners and forwards the data they receive to Workers.

  • Worker: the service that runs the Pipelines; it receives data from its Distributor and is contained within a Cluster.

Collect data from Guardicore

Where the vendor is Akamai, its product is Guardicore. Right now we have the following product types/endpoints:

  • Connections

  • Incidents

  • Reputation logs

Inside each of those endpoints we have the YAML file to configure.


Collect data from Google Cloud products

Collect data from Netskope

Where the vendor is Netskope, we have the following product types/endpoints:

  • Alerts

  • Events

Inside each of those endpoints we have the YAML file to configure.

Collect data from Cortex XDR

Integrate with API logs from the Cortex Platform via the HTTP Pull Listener and the Data Integration API.

Collect data from Akamai products

Collect data from Microsoft products

Cluster: a container grouping Distributors and Workers. You can have as many clusters as required per Tenant.

Listeners are hosted within Distributors, placed as close as possible to where data is generated. The Distributor pulls tasks from the data queue passing through the Pipeline and distributes them to the next available Worker in the Cluster. As soon as a Worker completes a task, it becomes available again, and the Distributor assigns it the next task from the queue.

The installation process creates the Distributor and all Workers for each data source in the cluster.
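The task-distribution loop described above can be modeled as a simple queue rotation. This is a conceptual sketch (Workers are assumed to finish instantly), not Onum's actual scheduler:

```python
from collections import deque

def distribute(tasks, workers):
    # Simplified model of a Distributor: hand each queued task to the
    # next available Worker; a Worker rejoins the available queue as
    # soon as it completes its task.
    queue = deque(tasks)
    available = deque(workers)
    assignments = {w: [] for w in workers}
    while queue:
        worker = available.popleft()           # next available Worker
        assignments[worker].append(queue.popleft())
        available.append(worker)               # Worker becomes available again
    return assignments
```

With two Workers and three tasks, `distribute([1, 2, 3], ["w1", "w2"])` hands tasks 1 and 3 to `w1` and task 2 to `w2`.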

How it works

Deployment types

The Onum Platform supports any deployment type, including on-premises, the Onum public cloud, or your own private cloud.

In a typical SaaS-based deployment, most processing activities are conducted in the Cloud.

Client-side components can be deployed on a Linux machine or on a Kubernetes cluster for easy, flexible deployment in any environment. Onum supports all major cloud environments, including Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure.

When the deployment type is on-premises, the communication between the management console and the process cluster will be encrypted with TLS and controlled by pull updates from the process cluster at configurable intervals.

Learn more about Deployment requirements here.

Delivery methods

Onum supports all major standards such as Netflow, Syslog, and Kafka to orchestrate data streams to any desired destination, including popular data analytics tools such as Splunk and Devo, as well as storage environments such as S3.


Collect data from Cisco Umbrella products

Where the vendor is Cisco, its product is Umbrella. For Umbrella, right now we have the following product types/endpoints:

Inside each of those endpoints we have the YAML file to configure.

Collect data from SentinelOne

Where the vendor is SentinelOne, its product is the Web API. For the Web API, right now we have the following product types/endpoints:

Collect data from CrowdStrike Falcon NG-SIEM

Where the vendor is CrowdStrike, its product is the Falcon API. For the Falcon API, right now we have the following product types/endpoints:

Inside each of those endpoints we have the YAML file to configure.

Collect data from Trend Vision One

Where the vendor is Trend Micro, its product is Vision One. For Vision One, right now we have the following product types/endpoints:

Inside each of those endpoints we have the YAML file to configure.

Reports

Threats

Inside each of those endpoints we have the YAML file to configure.

Activities
Cloud Detections - Alerts
Reports
Alerts
Event Stream
Incidents
Observed Attack Techniques
Workbench Alerts

Any format. Any source.

Collect data from anywhere it’s generated, across every aspect of the network.

All data is aggregated, observed, and seamlessly routed to any destination.

Edge observability

Listeners are placed right on the edge to collect all data as close as possible to where it’s generated.

Centralized management

Onum receives data from Listeners and observes and optimizes the data from all nodes. All data is then sent to the proper data sink.


AI Assistant

Just ask, and the assistant helps you build your Pipelines.

Onum offers you two different types of AI-powered assistants to help you build powerful Pipelines:

  • Pipeline Assistant - Build your Pipeline structure using this assistant.

  • Action Assistant - Once you've defined your Pipeline structure, you can configure each Action's settings using this assistant.

Collect data from Palo Alto products

The Time Range Selector

Overview

Throughout the entire Onum platform, you can set a period to either narrow down or extend the data shown. You can either select a predefined period or apply a custom time range.

The related graph and resources will be automatically updated to display data from the chosen period. To remove a selected period, simply click the bin icon that appears next to the period to go back to the default time range (1 hour ago).

The intervals will be calculated according to the Timezone of your browser. Keep an eye out for future implementations, where you can manually select a timezone.

Predefined and Custom time ranges

As well as predefined time intervals, you can also define a custom time range. To do it, simply select the required starting and ending dates in the calendar.

Comparisons

Onum lets you see directly how much volume you have saved compared to past ingestions, showing you what is going well and what requires further streamlining.

The comparison is direct/equivalent, meaning all data shown is analyzed compared to the previously selected equivalent time range.

For example, if the time range is 1 hour, the differences are calculated against the one-hour period immediately before the current selection:

  • Range selected: 10:00-11:00

  • Comparison: 09:00-10:00

Similarly, if you view data over the last 7 days, the percentages are calculated by comparing the selected week against the week immediately before it.
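The comparison window is simply the selected range shifted back by its own length; a sketch:

```python
from datetime import datetime, timedelta

def comparison_window(start: datetime, end: datetime):
    # The equivalent time range immediately before the selection:
    # same length, ending where the selection begins.
    length = end - start
    return start - length, start

# The 10:00-11:00 example above compares against 09:00-10:00
prev_start, prev_end = comparison_window(
    datetime(2024, 1, 1, 10, 0), datetime(2024, 1, 1, 11, 0)
)
```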

AI Pipeline Assistant

Note that this feature is only available for certain Tenants. Contact us if you need to use it and don't see it in your Tenant.

Overview

The Pipeline Assistant is an AI-powered chat feature designed to help users design and build their Pipelines. Any configuration requested through the chat will be automatically applied. Simply enter the results you expect from your Pipeline and the AI will generate a Pipeline structure according to your needs.

To start using it, create a new Pipeline, drag a Listener, and just click this icon at the bottom left corner:

Note that this AI Assistant only creates Pipeline structures. The individual Actions in the generated Pipeline won't be configured. You can use our Action Assistant to help you configure your Actions.

Examples

Here are some example use cases where we ask for help from the Pipeline Assistant. Check the prompts we use and the resulting configuration in each example picture.

Filter most common priorities

Send a report of aggregated data to Jira

Generate scheduled events

Most recent version: v0.0.1

See the changelog of the Tick Listener type here.

Note that this Listener is only available in certain Tenants. Get in touch with us if you don't see it and want to access it.

Overview

The Tick listener allows you to emit events on a defined schedule.

Onum Setup

1

Log in to your Onum tenant and click Listeners > New listener.

2

Double-click the Tick Listener.

3

Enter a Name for the new Listener. Optionally, add a Description and some Tags to identify the Listener.

Click Create listener when you're done.

Collect data from Agari DMARC Protection

Where the vendor is Agari, its product is DMARC Protection. For the DMARC Protection API, right now we have the following product types/endpoints:

  • Alert events

  • Audits

  • Domains

Listeners

The default tab that opens when in the Pipeline area is the Listeners tab, which shows all Listeners in your Tenant, as well as their labels.

Use the search bar to find a specific Listener or Label.

Edit a Listener

You can edit a Listener from the list by clicking the ellipsis next to its name and selecting Edit.

This will open the Listener Configuration and Labels for you to modify.

Create Listener

If the Listener you wish to use in the Pipeline does not already exist, you can create it directly from this view using the Create Listener button in the bottom right of the tab. This will open the Listener Configuration window.

Add a Listener to your Pipeline

Go to Building a Pipeline to learn step by step.

Key Terminology

Get to grips with these key concepts to better understand how Onum works and use it to its full potential.

Action

A unit of work performing a given operation on an event.


Graph Calculations

Overview

This article outlines the more complex calculations that go on behind the graphs you see.

In the Listeners, Pipelines, and Data sinks views, you will see detailed metrics on your events and bytes in/out, represented in a graph at the top of these areas.

The line graph represents the events in/out, and the bar graph represents the bytes in/out. Hover over a point on the chart to show a tooltip containing the events and bytes in for the selected time, as well as a percentage showing how much increase/decrease has occurred between the previous time lapse and the one currently selected.
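The increase/decrease percentage shown in the tooltip corresponds to a standard relative change between the current and previous time lapse (a sketch of the arithmetic, not Onum's internal implementation):

```python
def percent_change(current: float, previous: float):
    # Relative change vs. the previous time lapse, as a percentage.
    # Undefined when the previous lapse had no volume.
    if previous == 0:
        return None
    return (current - previous) / previous * 100.0
```

For example, 150 events following a lapse with 100 events is a +50% change; 50 events following 100 is -50%.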

Cloud Listeners

Are you interested in deploying your Onum installation in our Cloud? Contact us, and we will configure a dedicated Cloud Tenant for you and your organization.

Overview

If your Onum installation is deployed in our Cloud, the configuration settings of a Listener would be slightly different from Listeners defined in an On-Premise deployment:

Collect data from Check Point NGFW

Check Point's Next-Generation Firewall to Onum Syslog Listener

See the changelog of the Syslog Listener here.

Overview

The following article outlines a basic data flow from your Check Point Next-Generation Firewall (NGFW) to the Onum Syslog Listener.

Collect data from Fortinet NGFW

See the changelog of the Syslog Listener here.

Overview

The following article outlines a basic data flow from your Fortinet Next-Generation Firewall (NGFW) to the Onum Syslog Listener.

Advanced

Collect data from Palo Alto NGFW

Palo Alto NGFW to Onum Syslog Listener

See the changelog of the Syslog Listener here.

Overview

The following article outlines a basic data flow from your Palo Alto Next-Generation Firewall (NGFW) to the Onum Syslog Listener.

Update Listeners

Note that this feature is only available for certain Tenants. Contact us if you need to use it and don't see it in your Tenant.

Next to the Listeners tab in the left menu, you'll see a number that indicates the Listeners that have pending updates. In this area, Listeners that have pending updates will show an Update tag.

1

API

Application Programming Interface. A set of defined methods of communication among various components.


Cluster

Various distributors and workers can be grouped and contained within a cluster. You can have as many clusters as required per Tenant.


Data Sink

Where the data is routed after being processed by Onum.


Data source

Where the data is generated before ingesting it into Onum, e.g. application server logs, firewall logs, S3 bucket, Kafka Topic, etc.


Distributor

This service receives and processes the Listener data before sending it on to workers within a cluster.


Event

An event represents semi-structured data such as a log entry. Events can be parsed so that structured data can be generated and eventually processed by the engine. Events are composed of fields (referred to as Field); an Action that produces a new field refers to it as outputField.


Label

Used to sort events coming from Listeners into categories or sets that meet given filters to be used in a Pipeline.


Listener

A Listener retrieves events in a given IP address and a port, routing the data to the Pipelines so that it can be processed.


Lookup

A lookup refers to searching for and retrieving information from a specific source or dataset, typically based on a key or reference.


Multitenancy

Multitenancy is an architecture in which tenants share the same underlying infrastructure, including databases and application code, but their data and configurations are kept separate to ensure privacy and security.


Pipeline

A sequence of Actions connected through inputs/outputs to process a stream of data. Data comes from the Listener and eventually is routed to a Datasink.


Role

A role is assigned to a user to control their access to certain or all Onum features. This way, we can personalize the experience for each user.


Tag

Tags can be assigned to Listeners, Pipelines, or Data sinks to classify them or make them easier to find. This is particularly useful if you have many resources and want to avoid lengthy searches for the ones you wish to use.


Tenant

A Tenant is a domain that contains a set of data in your organization. You can use one or various tenants and grant access to as many as required.


Worker

This service runs the Pipelines, receiving data from its distributor and contained within a Cluster.

  • Cloud Listeners do not have the TLS configuration settings in their creation form, as the connection is already secured.

  • Cloud Listeners have an additional step in their creation process: Network configuration. Use these details to configure your data source to communicate with Onum. Click Download certificate to get the required certificate for the connection. You can also download it from the Listener details once it is created.

  • Learn more about the configuration steps of each Listener type in this section.

    Important Considerations

    You must consider the following indications before using Cloud Listeners:

    • Cloud Listener endpoints are created in Onum's DNS. This process is usually fast, and Listeners are normally available immediately. However, note that this may last up to 24-48 hours, depending on your organization's DNS configuration.

    • Cloud Listener endpoints require Mutual TLS (mTLS) authentication, which means that your data input must be able to process a TLS connection and be authorized with a certificate.

    • Your data input must use the Server Name Indication (SNI) method, which means it must send its hostname in the TLS authentication process. If SNI is not used, the certificate routing will fail, and data will not be received, even if the certificate is valid.

    If your organization's software cannot meet the mTLS and SNI requirements above, you can use an intermediate piece of software, such as Stunnel, to secure the client-Onum connection.
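The mTLS and SNI requirements above can be sketched in code. This is an illustrative Python client setup, not an official Onum snippet; certificate paths and the Listener hostname are placeholders:

```python
import ssl

def onum_mtls_context(certfile=None, keyfile=None, cafile=None):
    # TLS 1.2+ client context; PROTOCOL_TLS_CLIENT enables hostname
    # checking and certificate verification by default.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    if cafile:
        ctx.load_verify_locations(cafile)
    else:
        ctx.load_default_certs()
    if certfile:
        # Client certificate for mutual TLS (e.g. the one downloaded
        # from the Listener's Network configuration step)
        ctx.load_cert_chain(certfile, keyfile)
    return ctx
```

SNI is sent when you pass `server_hostname` while wrapping the socket, e.g. `ctx.wrap_socket(sock, server_hostname="listener.example.onum.com")` (hypothetical endpoint); omitting it reproduces the certificate-routing failure described above.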

    4

    Add as many tick events as required. For each one:

    • Schedule Type* - select Interval to emit events at regular intervals.

    • Interval value* - the number of seconds/minutes/hours to wait between events.

    • Interval Unit* - the unit the number corresponds to: seconds, minutes, or hours.

    • Number of events* - how many events to emit.

    • Event body* - the content each event will contain, e.g. its fields.
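Given these parameters, the emission schedule works out as follows (an illustrative sketch with a hypothetical helper name, not the Tick Listener's implementation):

```python
def tick_offsets(interval_value: int, interval_unit: str, number_of_events: int):
    # Offsets (in seconds, relative to start) at which each tick event fires
    unit_seconds = {"seconds": 1, "minutes": 60, "hours": 3600}[interval_unit]
    return [i * interval_value * unit_seconds for i in range(number_of_events)]

# e.g. 3 events every 5 minutes fire at 0 s, 300 s, and 600 s
offsets = tick_offsets(5, "minutes", 3)
```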

    5

    Finally, click Create labels. Optionally, you can set labels to be used for internal Onum routing of data. By default, data will be set as Unlabeled. Click Create listener when you're done.

    Learn more about labels in this article.

    The chart in the Pipelines area is slightly different and includes some additional features. Learn more in the Pipelines section.

    Events

    The values on the left-hand side represent the events in/out for the selected period.

    AVG EPS

    The average events per second ingested or sent by all listeners/Data sinks in your Tenant.

    MAX EPS

    The maximum number of events per second ingested or sent by all Listeners/Data sinks in your Tenant.

    MIN EPS

    The minimum number of events per second ingested or sent by all Listeners/Data sinks in your Tenant.

    Bytes

    The values on the right-hand side represent the bytes in/out for the selected period.

    AVG Bytes

    The average kilobytes per second ingested or sent by all Listeners/Data sinks in your Tenant.

    MAX Bytes

    The maximum kilobytes per second ingested or sent by all Listeners/Data sinks in your Tenant.

    MIN Bytes

    The minimum kilobytes per second ingested or sent by all Listeners/Data sinks in your Tenant.
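As a sketch of the arithmetic behind these statistics (not Onum's internal implementation), given per-second counts for the selected period:

```python
def throughput_stats(per_second_counts):
    # per_second_counts: events (or kilobytes) observed in each
    # one-second bucket of the selected period
    return {
        "avg": sum(per_second_counts) / len(per_second_counts),
        "max": max(per_second_counts),
        "min": min(per_second_counts),
    }

# e.g. three seconds at 10, 20, and 30 EPS
stats = throughput_stats([10, 20, 30])
# stats: {"avg": 20.0, "max": 30, "min": 10}
```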

    Frequency slider and Stacked view

    • Use the Frequency slider bar to choose how frequently you want to plot the events/bytes in the chart.

    • By default, these graphs give an overview calculation of all the Listeners/Sinks in your Tenant. If you wish to see each Listener or Sink individually, use the Stack toggle.


    Prerequisites

    If you're using TLS authentication, contact us to get the cert information needed for TLS communication.

    Check Point NGFW Setup

    Simply enter the required Onum sending address in your firewall configuration.

    Onum Setup

    1

    Log in to your Onum tenant and click Listeners > New listener.

    2

    Double-click the Syslog Listener.

    3

    Enter a Name for the new Listener. Optionally, add a Description and some Tags to identify the Listener.

    4

    Enter the required Port and Protocol (TCP or UDP).

    5

    Choose the required Framing Method, which refers to how characters are handled in log messages sent via the Syslog protocol. Choose between:

    • Auto-Detect - automatically detect the framing method using the information provided.

    • Non-Transparent Framing (newline) - the newline characters (\n) within a log message are preserved as part of the message content and are not treated as delimiters or boundaries between separate messages.

    6

    If you're using TLS authentication, enter the data you received from the Onum team in the TLS configuration section (Certificate, Private key and CA chain). Choose your Client authentication method and Minimum TLS version.

    7

    Finally, click Create labels. Optionally, you can set labels to be used for internal Onum routing of data. By default, data will be set as Unlabeled. Click Create listener when you're done.

    Learn more about labels in this article.
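Once the Listener is created, a newline-framed test event could be sent to it from any host that can reach it. This is a sketch with placeholder host and port (use the values from your Listener's configuration), assuming a plain-TCP Listener without TLS:

```python
import socket

def frame_newline(message: str) -> bytes:
    # Non-transparent framing: each syslog message is terminated by \n
    return message.encode("utf-8") + b"\n"

def send_syslog(host: str, port: int, messages: list[str]) -> None:
    # Open one TCP connection and send each framed message
    with socket.create_connection((host, port)) as sock:
        for msg in messages:
            sock.sendall(frame_newline(msg))

# A sample framed firewall-style event (not sent here)
payload = frame_newline("<134>Jan 1 00:00:00 fw01 action=accept src=10.0.0.1")
```

Call it as e.g. `send_syslog("listener.example.onum.com", 6514, [...])` with your Listener's hypothetical address and port.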

    Prerequisites

    If you're using TLS authentication, contact us to get the cert information needed for TLS communication.

    Fortinet NGFW Setup

    Simply enter the required Onum sending address in your firewall configuration.

    Onum Setup

    1

    Log in to your Onum tenant and click Listeners > New listener.

    2

    Double-click the Syslog Listener.

    3

    Enter a Name for the new Listener. Optionally, add a Description and some Tags to identify the Listener.

    4

    Enter the required Port and Protocol (TCP or UDP).

    5

    Choose the required Framing Method, which refers to how characters are handled in log messages sent via the Syslog protocol. Choose between:

    • Auto-Detect - automatically detect the framing method using the information provided.

    • Non-Transparent Framing (newline) - the newline characters (\n) within a log message are preserved as part of the message content and are not treated as delimiters or boundaries between separate messages.

    6

    If you're using TLS authentication, enter the data you received from the Onum team in the TLS configuration section (Certificate, Private key and CA chain). Choose your Client authentication method and Minimum TLS version.

    7

    Finally, click Create labels. Optionally, you can set labels to be used for internal Onum routing of data. By default, data will be set as Unlabeled. Click Create listener when you're done.

    Learn more about labels in this article.

    Prerequisites

    If you're using TLS authentication, contact us to get the cert information needed for TLS communication.

    Palo Alto NGFW Setup

    Simply enter the required Onum sending address in your firewall configuration.

    Onum Setup

    1

    Log in to your Onum tenant and click Listeners > New listener.

    2

    Double-click the Syslog Listener.

    3

    Enter a Name for the new Listener. Optionally, add a Description and some Tags to identify the Listener.

    4

    Enter the required Port and Protocol (TCP or UDP).

    5

    Choose the required Framing Method, which refers to how characters are handled in log messages sent via the Syslog protocol. Choose between:

    • Auto-Detect - automatically detect the framing method using the information provided.

    • Non-Transparent Framing (newline) - the newline characters (\n) within a log message are preserved as part of the message content and are not treated as delimiters or boundaries between separate messages.

    6

    If you're using TLS authentication, enter the data you received from the Onum team in the TLS configuration section (Certificate, Private key and CA chain). Choose your Client authentication method and Minimum TLS version.

    7

    Finally, click Create labels. Optionally, you can set labels to be used for internal Onum routing of data. By default, data will be set as Unlabeled. Click Create listener when you're done.

    Learn more about labels in this article.

    Click the Listener to open the right-hand panel.
    2

    Click the Update Listener button in the banner that appears.

    3

    This will open the Changelog for this Listener, where you can see a detailed account of what has changed between versions before updating.

    4

    If you decide to update, click Update listener. You will see a window that shows all the Pipelines where that Listener needs to be updated. Select the required Pipeline and click the arrow that appears to access it.

    5

    Once in the Pipeline, locate the Listener you want to update. You'll see an Update available tag over the Listeners with available updates. To update, double-click it to open its details and click the Update listener button.

    6

    You'll see a window with the changelog. Click the Update button to finish the process and update the Listener type selected.

    Data Types

    Easily identify data types using the color legend

    Since Onum can process any data type, you may be wondering how to identify which is which. See the color legend below:

    • String - A sequence of characters that is used primarily for textual data representation.

    • String list - A list of string values separated by commas.

    Listeners

    Everything starts with a good Listener

    Overview

    Essentially, Onum receives any data through Listeners. These are logical entities created within a Distributor, acting as the gateway to the Onum system. Configuring a Listener therefore involves defining an IP address, a listening port, and a transport layer protocol, along with additional settings that depend on the Listener type and the data it will receive.

    A Push Listener passively receives data without explicitly requesting it, whereas a Pull Listener actively requests data from an external source.

    If you are using more than one Cluster, it is recommended not to use a Pull-type Listener. You can find out the Listener type in the integration-specific articles below.

    Click the Listeners tab on the left menu for a general overview of the Listeners configured in your Tenant and the events generated.

    • The graph at the top plots the volume ingested by your Listeners. Click Events to see the events in for all your Listeners, or Bytes to see a bar graph representing the bytes in. Learn more about this graph in this article.

    • Hover over a point on the chart to show a tooltip containing the Events and Bytes OUT for the selected time, as well as the percentage increase/decrease between the previous period and the one currently selected.

    At the bottom, you have a list of all the Listeners in your Tenant. You can switch between the Cards view, which shows each Listener in a card, and the Table view, which displays Listeners listed in a table. Learn more about the cards and table views in this article.

    Narrow Down Your Data

    There are various ways to narrow down what you see in this view:

    Add Filters

    Add filters to narrow down the Listeners you see in the list. Click the + Add filter button and select the required filter type(s). You can filter by:

    • Name - Select a Condition (Contains, Equals, or Matches) and a Value to filter Listeners by their names.

    • Version - Filter Listeners by their version.

    • Type - Choose the Listener type(s) you want to see in the list.

    The filters applied will appear as tags at the top of the view.

    Note that you can only add one filter of each type.

    Select a Time Range

    If you wish to see data for a specific time period, this is the place to click. Go to this article to dive into the specifics of how the time range works.

    Select Tags

    You can choose to view only those Listeners that have been assigned the desired tags. You can create these tags in the Listener settings or from the cards view. Press the Enter key to confirm the tag, then Save.

    To filter by tags, click the Tags button, select the required tag(s) and click Save.

    Create a Listener

    Depending on your permissions, you can create a new Listener from this view. There are several ways to create a new Listener:

    • From the Listeners view:

    • Within a Pipeline:

    • From the Home page:

    Configuring your Listener involves various steps. You can open the configuration pane by creating a new Listener or by clicking a Listener in the Listener tab or the Pipeline view and selecting Edit Listener in the pane that opens.

    Alternatively, click the ellipses in the card or table view and select Edit.

    1

    Choose your Listener type

    The first step is to define the Listener Type. Select the desired type in this window and select Configuration.

    Check the list of available Listener types in this article.

    2

    Collect data from Falcon LogScale

    Falcon LogScale Collector to Onum

    See the changelog of the Falcon LogScale Collector Listener here.

    Note that this Listener is only available in certain Tenants. Contact us if you don't see it and want to access it.

    Overview

    The following article outlines a basic data flow from Falcon LogScale Collector to the Onum Falcon LogScale Collector Listener.

    In some environments, where direct access to LogScale is prohibited, it may be necessary to configure the proxy server manually.

    The collector attempts to detect the system's proxy automatically. If the collector should use a different proxy than the system's, or connect directly instead, specify this in the sink configuration. The proxy option accepts the keywords auto, system, and none, as well as a URL specifying the proxy server to use.

    Prerequisites

    • You need to generate TLS certificates to secure the data sent to Onum. These will be required during the Falcon LogScale Collector Listener configuration and in the Falcon LogScale Collector setup. Learn how to generate these self-signed certificates in this article.

    • You'll need to know your Onum distributor URL, as it will be required in the Falcon LogScale Collector setup. Contact us and we'll send it to you.

    Onum setup

    First, you must configure a new Falcon LogScale Collector Listener in Onum:

    1

    In Onum, go to the Listeners area and click New listener. Select the Falcon LogScale Collector Listener from the list.

    2

    Enter a Name for the Listener. Optionally, add a Description and some Tags to identify the Listener.

    3

    Falcon LogScale Collector setup

    Now, access your Falcon NG-SIEM instance and follow these steps:

    1

    In Falcon NG-SIEM, click Data connectors > Data connections from the left menu, then select the Fleet management tab.

    2

    Access the relevant Falcon LogScale Collector instance's config and add the following information:

    • The token value you added in the Falcon LogScale Collector Listener setup in Onum. This will go into the token field of the configuration.

    Collect data using Syslog

    Most recent version: v1.1.2

    See the changelog of this Listener type here.

    Overview

    Onum receives data from Syslog, supporting TCP and UDP protocols. Select Syslog from the list of Listener types and click Configuration to start.

    Onum Setup

    1

    Log in to your Onum tenant and click Listeners > New listener.

    2

    Double-click the Syslog Listener.

    3

    Enter a Name for the new Listener. Optionally, add a Description and some Tags to identify the Listener.

    Click Create listener when you're done.

    Listener Integrations

    Collect data in real-time, no matter the source

    Overview

    Although there are only a limited number of Listener types available for use in Onum, the integration possibilities are endless. Onum is designed to be source-agnostic, ensuring you can ingest data from virtually any product or technology.

    We achieve this through a strategic, two-pillar approach to our Listeners:

    • First, we offer a growing suite of dedicated Listeners for specific technologies (such as Amazon S3, Microsoft Office, and others). These provide a streamlined configuration process for popular services.

    Collect data from Amazon SQS

    Most recent version: v0.0.1

    See the changelog of this Listener type here.

    This is a Pull Listener and therefore should not be used in environments with more than one cluster.

    Home

    A summary of your Tenant activity

    Overview

    When opening Onum, the Home area is the default view. Here you can see an overview of all the activity in your Tenant.

    Use this view to analyze the flow of data and the change from stage to stage of the process. Here you can locate the most important contributions to your workflow at a glance.

    Collect data using Cisco NetFlow

    Most recent version: v0.1.0

    See the changelog of this Listener type here.

    This is a Pull Listener and therefore should not be used in environments with more than one cluster.

    Collect data from Amazon Kinesis

    Most recent version: v0.0.2

    See the changelog of this Listener type here.

    Note that this Listener is only available in certain Tenants. Contact us if you don't see it and want to access it.

    Collect data from your databases

    See the changelog of the Relational Databases Listener here.

    Note that this Listener is only available in certain Tenants. Contact us if you don't see it and want to access it.

    Collect data from Google Pub/Sub

    Most recent version: v0.0.2

    See the changelog of this Listener type here.

    This is a Pull Listener and therefore should not be used in environments with more than one cluster.

    Labels

    Overview

    Use Onum's labels to cut out the noise with filters and search criteria based on specific metadata. This way, you can categorize the events that Listeners receive before they are processed in your Pipelines.

    As different log formats are being ingested in real-time, the same Listener may ingest different technologies. Labels are useful for categorizing events based on specific criteria.

    When creating or editing a Listener, use Labels to categorize and assign filters to your data.

    For most Listeners, you will see two main event categories on this screen:

    Collect data from Google Cloud Storage

    Most recent version: v1.0.1

    See the changelog of this Listener type here.

    This is a Pull Listener and therefore should not be used in environments with more than one cluster.

    Bring Your Own Code

    Most recent version: v0.0.1

    See the changelog of this Action type here.

    Note that this Action is only available in certain Tenants. Contact us if you don't see it and want to access it.

    Collect data using TCP

    Most recent version: v0.1.1

    See the changelog of the TCP Listener type here.

    Overview

    Onum supports integration with Transmission Control Protocol.

    Workbench Alerts

    Overview

    Displays information about workbench alerts that match the specified criteria in a paginated list.

    • The response contains an array of activities under the items field.

    This method displays up to 10 entries per page. If the response body exceeds 50 MB in size, the method displays only one entry per page.

    • Created by - Selecting this option opens a User drop-down where you can filter by creator.

    • Updated by - Selecting this option opens a User drop-down where you can filter by the last user to update a pipeline.

    2

    Configure your Listener

    The configuration is different for each Listener type. Check the different Listener types and how to configure them in this section.

    If your Listener is deployed in the Cloud, you will see an extra step for the network properties. Learn more about Listeners in a Cloud deployment in this article.

    3

    Add Labels

    Use Onum's labels to cut out the noise with filters and search criteria based on specific metadata. This way, you can categorize events sent on and processed in your Pipelines.

    Learn more about labels in this article.

    Then, enter the Port to listen on. At this time, all TCP ports from 1024 to 10000 are open.
    4

    Now you need to generate a token that will be used to connect Onum to your Falcon LogScale Collector instance. You can use an online UUID generator tool to get it.

    Note that the Falcon LogScale Collector won’t allow for token values that are just numeric.
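A minimal way to produce such a token, assuming Python's standard uuid module (the generator tool itself is up to you; `generate_collector_token` is a hypothetical helper name):

```python
import uuid

def generate_collector_token() -> str:
    """Generate a token for the collector connection. Regenerate if the
    value happens to be purely numeric, which the collector rejects."""
    token = str(uuid.uuid4())
    # A purely numeric UUID is astronomically unlikely, but cheap to check.
    while token.replace("-", "").isdigit():
        token = str(uuid.uuid4())
    return token
```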

    Back in Onum, go to the Authentication section, click the Select an API Key field and select New secret. In the window that appears, give your secret a Name and turn off the Expiration date toggle if not needed. Then, click Add new value and paste the token you generated. Click Save when you're done.

    You'll later use this token in the Falcon LogScale Collector configuration.

    Learn more about Secrets in this article.

    5

    Now, select the token you've just created.

    6

    In the TLS configuration section, you must enter the required Certificate, Private key and CA Chain. Learn how to generate these self-signed certificates in this article. Once you have them, click New secret in each field and add the corresponding values.

    7

    Finally, click Create labels. Create any required labels if you need to break down your data and then click Create listener.

  • The Onum URL, with the following format: distributorURL:port. You must get your distributor URL from the Onum team, as it is not shown in the platform. Add the port you entered in the Onum configuration and include it in the url field of the configuration.

  • In the tls section at the end, add the path to the CA certificate file you generated before. Add the file in a directory that the Falcon LogScale Collector can read.

  • Check below a Falcon LogScale Collector sample config file:

    If you're using Windows, you need to escape backslashes (\) with an extra backslash in your CA file path.

    3

    Click Publish > Publish draft to publish your FLC config.

    4

    Finally, check the Fleet Management page to verify the FLC status shows as Okay. The status may show Error if, for example, you did not enter the matching port you chose in Onum.

    Contact us
    Get in touch with us
    4

    Enter the required Port and Protocol (TCP or UDP).

    Note that you won't see the Port and Protocol settings in the creation form if you're defining this Listener in a Cloud instance, as these are already provided by Onum.

    While UDP 514 is the standard, some implementations may use TCP 514 or other ports, depending on specific configurations or security requirements. To determine the syslog port value, check the configuration settings of your syslog server or consult the documentation for your specific device or application.

    5

    Choose the required Framing Method, which refers to how characters are handled in log messages sent via the Syslog protocol. Choose between:

    • Auto-Detect - automatically detect the framing method using the information provided.

    • Non-Transparent Framing (newline) - the newline characters (\n) within a log message are preserved as part of the message content and are not treated as delimiters or boundaries between separate messages.

    • Non-Transparent Framing (zero) - refers to the way zero-byte characters are handled. Any null byte (\0) characters that appear within the message body are preserved as part of the message and are not treated as delimiters or boundaries between separate messages.

    • Octet Counting (message length) - the Syslog message is preceded by a count of the length of the message in octets (bytes).
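The difference between the framing methods above can be illustrated with a short sketch (plain Python with hypothetical helper names; not Onum code):

```python
def frame_octet_counting(messages):
    """Octet counting: each message is preceded by its length in bytes
    and a space, so the receiver knows exactly how many bytes to read."""
    out = b""
    for m in messages:
        body = m.encode("utf-8")
        out += str(len(body)).encode() + b" " + body
    return out

def frame_newline(messages):
    """Non-transparent framing (newline): messages are separated by
    newlines, so a newline inside a message would be ambiguous."""
    return "\n".join(messages).encode("utf-8") + b"\n"

stream = frame_octet_counting(["<34>1 host app - - - hi"])
# The receiver reads the length prefix, then exactly that many bytes.
```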

    6

    If you're using TLS authentication, enter the data you received from the Onum team in the TLS configuration section (Certificate, Private key and CA chain). Choose your Client authentication method and Minimum TLS version.

    • Note that the parameters in this section are only mandatory if you decide to include TLS authentication in this Listener. Otherwise, leave it blank.

    • Note that you won't see this section in the creation form if you're defining this Listener in a Cloud instance, as these are already provided by Onum. Learn more about Cloud Listeners in this article.

    7

    The TLS credentials are saved in Onum as Secrets. In the TLS form, click New secret to create a new one:

    • Give the secret a Name.

    • Turn off the Expiration date option.

    • Click Add new value.

    • Click Save.

    Learn more about secrets in Onum in this article.

    You can now select the secret you just created in the corresponding fields.

    8

    Finally, click Create labels. Optionally, you can set labels to be used for internal Onum routing of data. By default, data will be set as Unlabeled. Click Create listener when you're done.

    Learn more about labels in this article.

  • Second, and crucially, we provide standard protocol Listeners (including HTTP, TCP, and Syslog). This ensures that even if a product does not have a dedicated, named Listener, you can still seamlessly send data to Onum using these widely supported, industry-standard protocols.

    This dual model guarantees comprehensive coverage: whether you need a highly specialized integration or simply a robust, standardized connection, Onum is ready to collect your data.

    You can contact us to request a specific Listener type.

    Dedicated Listeners

    Check the current suite of dedicated Listeners we offer in the Onum platform:

    Standard Protocol Listeners

    Click to see how to configure each of our Listeners for standard protocols:

    Other Listeners

    Overview

    Onum supports integration with Amazon SQS.

    Amazon Simple Queue Service (AWS SQS) is a fully managed message queuing service. Among its many features, the following ones are of special interest to our use case:

    • It supports both standard queues (with at-least-once, occasionally unordered delivery semantics) and FIFO queues (exactly-once and fully ordered delivery semantics).

    • It supports scaling through the concept of visibility timeout (a period after a consumer reads a message during which the message becomes invisible to other consumers). This allows a consumer group to read from the same queue and distribute messages without duplication.

    So, what we want is a Listener that we can configure to read from an existing SQS queue and inject queue messages as events into our platform. Please note that because of the nature of the API offered to access SQS messages (HTTP-based, max 10 messages each time), this is not a high-throughput Listener.
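The visibility-timeout and 10-message-per-request semantics can be sketched with a toy model (illustrative Python only; real access goes through the AWS SDK or the HTTP API, and `ToyQueue` is not how SQS or Onum is implemented):

```python
import time

class ToyQueue:
    """Toy model of SQS: a received message becomes invisible to other
    consumers for `visibility` seconds, so several consumers can poll
    the same queue without duplication."""
    def __init__(self, visibility=30):
        self.visibility = visibility
        self.messages = []  # each entry: [body, invisible_until]

    def send(self, body):
        self.messages.append([body, 0.0])

    def receive(self, max_messages=10, now=None):
        now = time.time() if now is None else now
        max_messages = min(max_messages, 10)  # the API caps each read at 10
        batch = []
        for m in self.messages:
            if m[1] <= now and len(batch) < max_messages:
                m[1] = now + self.visibility  # hide from other consumers
                batch.append(m[0])
        return batch

q = ToyQueue(visibility=30)
for i in range(15):
    q.send(f"msg-{i}")
first = q.receive(now=0.0)   # 10 messages, now invisible
second = q.receive(now=0.0)  # the remaining 5 go to the next consumer
```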

    Prerequisites

    You will need an IAM User, role or group with the correct permissions to access and manage SQS.

    Amazon SQS Setup

    Go to IAM (Identity and Access Management) to manage users, groups, roles and permissions.

    Under Permissions Policies, make sure you have assigned the policy AmazonSQSFullAccess to give full access to SQS resources. Alternatively, if you have custom permissions, go to Policies - Create Policy and in the JSON tab, paste your custom JSON.
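As an illustration, a least-privilege alternative to AmazonSQSFullAccess might look like the following (the action list and resource ARN are assumptions; adjust them to your queue). It is expressed here as a Python dict for easy serialization:

```python
import json

# Hypothetical custom policy; the ARN below is a placeholder.
custom_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "sqs:ReceiveMessage",
                "sqs:DeleteMessage",
                "sqs:GetQueueAttributes",
                "sqs:GetQueueUrl",
            ],
            "Resource": "arn:aws:sqs:us-east-1:123456789012:my-queue",
        }
    ],
}
policy_json = json.dumps(custom_policy, indent=2)  # paste this in the JSON tab
```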

    Onum Setup

    1

    Log in to your Onum tenant and click Listeners > New listener.

    2

    Double-click the AWS SQS Listener.

    3

    Enter a Name for the new Listener. Optionally, add a Description and some Tags to identify the Listener.

    4

    Enter the Region displayed in the top right-hand corner of your AWS console.

    5

    Enter the Queue URL* of your existing Amazon SQS queue, acting as the endpoint to interact with the desired queue. Use the GetQueueUrl command or:

    1. Go to the AWS Management Console.

    2. In the Search Bar, type "SQS" and click on Simple Queue Service (SQS).

    6

    Choose your Authentication Type*

    Authentication is not specific to SQS but rather AWS IAM (Identity and Access Management). If you are connecting from an IAM console, enter the authentication credentials here.

    • Access key ID*

      Add the access key from your secrets or create one. The Access Key ID is found in the IAM Dashboard of the AWS Management Console.

    7

    Optionally, specify which Message system attributes you want in the response. The system attributes chosen correspond to attributes inlined in the message/event.

    1. In the Queues area, click on More or scroll down and go to the Monitoring tab.

    2. You will see some system attributes (like deduplication and group ID). However, detailed system attributes are typically accessed via the CLI or SDKs.

    8

    Proceed with caution when modifying the Advanced options. Default values should be enough in most cases.

    9

    Proceed with caution when modifying the General advanced options. Default values should be enough in most cases.

    • Service endpoint - If you have a custom endpoint, enter it here. The default SQS regional service endpoint will be used by default.

    • Maximum number of messages* - Set a limit for the maximum number of messages to receive in the notifications queue for each request. The minimum value is 1; the maximum and default value is 10.

    10

    Finally, click Create labels. Optionally, you can set labels to be used for internal Onum routing of data. By default, data will be set as Unlabeled.

    Learn more about labels in this article.

    11

    Click Create listener when you're done.

    All data shown is analyzed compared to the previously selected time range. Use the time range selector at the top of this area to specify the periods to examine.

    For example, if the time range is 1 hour ago (the default period), differences are calculated against the hour immediately before the current selection:

    • Range selected: 10:00-11:00

    • Comparison: 09:00-10:00
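The comparison window above can be computed as follows (a Python sketch; `comparison_window` is a hypothetical helper, not part of Onum):

```python
from datetime import datetime

def comparison_window(start: datetime, end: datetime):
    """Return the window of equal length immediately preceding
    [start, end), used as the baseline for increase/decrease figures."""
    span = end - start
    return start - span, start

prev_start, prev_end = comparison_window(
    datetime(2024, 1, 1, 10, 0), datetime(2024, 1, 1, 11, 0)
)
# Selecting 10:00-11:00 compares against 09:00-10:00.
```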

    To learn more about time ranges, go to this article.

    Metrics

    The Home view shows various infographics that provide insights into your data flow. Some Listeners or Data Sinks may be excluded from these metrics if they are duplicates or reused.

    The Net Saved/Increased and Estimation graphs will show an info tooltip if some Data sinks are excluded from these metrics. You may decide this during the Data sink creation.

    In those cases, you can hover over the icon to check the total metrics including all the Data sinks.

    Sankey Diagram

    Each column of the Sankey diagram provides information and metrics on the key steps of your flow.

    You can see how the data flows between:

    1. Listeners - each Listener in your Tenant.

    2. Clusters - the Distributor/Worker group that receives the Listener data and forwards it to Pipelines.

    3. Labels - the operations and criteria used to filter out the data to be sent on to Pipelines.

    4. Pipelines - the Pipelines used to obtain desired data and results.

    5. Data sinks - the end destination for data having passed through Listener › Cluster › Label › Pipeline.

    Hover over a part of the diagram to see specific savings.

    Show Metrics

    You can narrow down your analysis even further by selecting a specific node and selecting Show metrics.

    This option is not available for all columns.

    View Details

    Click a node and select View details to open a panel with in-depth details of the selected piece.

    From here, you can go on to edit the selected element.

    This option is not available for all columns.

    Hide/Show Columns

    You can choose which columns to view or hide using the eye icon next to each column's name.

    Add New Elements

    You can add a new Listener, Label, Pipeline or Data sink using the plus button next to the corresponding column's name.

    You can also create all of the aforementioned elements using the Create new button at the top-right:

    Overview

    Onum supports integration with Cisco NetFlow.

    Cisco NetFlow is a network protocol developed by Cisco for collecting and analyzing IP network traffic data. It enables network administrators to understand traffic patterns, identify potential issues, and optimize network performance.

    Cisco NetFlow Setup

    In order to begin listening for data, you must first:

    • Enable IP routing

    • Enable Cisco Express Forwarding (CEF)

    See the Cisco NetFlow configuration guide for help with this.

    Onum Setup

    1

    Log in to your Onum tenant and click Listeners > New listener.

    2

    Double-click the Cisco NetFlow Listener.

    3

    Enter a Name for the new Listener. Optionally, add a Description and some Tags to identify the Listener.

    4

    In the Socket section, enter the following:

    • Transport protocol* - Currently, Onum only supports the UDP protocol.

    • Port* - Enter the required IP port number. By default, Cisco NetFlow typically uses UDP port 2055 for exporting flow data.

    5

    Configure the Flow parameters

    • Protocols to process*

      Select the required protocol(s) from the list.

      • NetFlow v5 is the most widely used version.

    6

    Choose your Access control type* to selectively monitor traffic based on specific IPs:

    • None - allows all IPs.

    • Whitelist - allows certain IPs through.

    7

    Finally, click Create labels. Optionally, you can set labels to be used for internal Onum routing of data. By default, data will be set as Unlabeled.

    Learn more about labels in this article.

    8

    Click Create listener when you're done.

    This is a Pull Listener and therefore should not be used in environments with more than one cluster.

    Overview

    Onum supports integration with Amazon Kinesis Data Stream.

    Amazon Kinesis Data Streams is a fully managed, serverless streaming data service that allows you to ingest, store, and process real-time data streams. It's designed for high-throughput, low-latency data ingestion from various sources, enabling real-time analytics and applications.

    Prerequisites

    In order to use this Listener, you must set the environment variable SINGLETON_LISTENER_EXECUTOR=true in your distributor's Docker Compose configuration.

    Amazon Kinesis Data Stream Setup

    1

    Go to IAM (Identity and Access Management) to manage users, groups, roles and permissions.

    Under Permissions Policies, make sure you have assigned the policy AmazonKinesisFullAccess to give full access to Kinesis resources. Alternatively, if you have custom permissions, go to Policies - Create Policy and in the JSON tab, paste your custom JSON e.g.

    2

    Test the Configuration

    Run the following command:

    If your IAM permissions are correct, you'll see a list of streams.

    Onum Setup

    1

    Log in to your Onum tenant and click Listeners > New listener.

    2

    Double-click the Amazon Kinesis Data Stream Listener.

    3

    Enter a Name for the new Listener. Optionally, add a Description and some Tags to identify the Listener.

    4

    In the AWS authentication section, enter the region of your AWS data center. Your region is displayed in the top right-hand corner of your AWS console.

    5

    Select the Access Key ID from your secrets or click New secret to generate a new one.

    The Access Key ID is found in the IAM Dashboard of the AWS Management Console.

    1. In the left panel, click on Users.

    2. Select your IAM user.

    6

    Select the Secret Access Key from your secrets or click New secret to generate a new one.

    Under Access keys, you can see your Access Key IDs, but AWS will not show the Secret Access Key. You must have it saved somewhere. If you don't have the secret key saved, you need to create a new one.

    Learn more about secrets in Onum in this article.

    7

    Configure your Data Stream.

    • Stream Name*

      1. Go to:

    8

    In the Advanced Configuration section, enter the Custom endpoint if you have a non-default URL that directs API requests to a specific Kinesis service endpoint.

    9

    Finally, click Create labels. Optionally, you can set labels to be used for internal Onum routing of data. By default, data will be set as Unlabeled.

    Learn more about labels in this article.

    10

    Click Create listener when you're done.

    Overview

    The following article explains how to collect data from your databases using the Relational Databases Listener in Onum.

    The Relational Databases Listener allows you to read data from a database using the MySQL, Oracle, Postgres, SQL Server and SQLite database management systems. Each row of the data is emitted as a separate event. The data will be stored in the _raw field of the events. You can configure the Listener to execute a SQL query periodically.
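As a toy illustration of that behaviour (not Onum's implementation), each row of a query result becomes one event whose _raw field carries the row content, shown here with Python's built-in sqlite3 module:

```python
import sqlite3

# In-memory database standing in for your real MySQL/Oracle/Postgres/
# SQL Server/SQLite source.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE logs (id INTEGER PRIMARY KEY, message TEXT)")
conn.executemany("INSERT INTO logs (message) VALUES (?)",
                 [("login ok",), ("disk full",)])

# One event per row, with the row content in _raw.
events = [
    {"_raw": dict(zip(("id", "message"), row))}
    for row in conn.execute("SELECT id, message FROM logs")
]
```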

    Onum Setup

    1

    Log in to your Onum tenant and click Listeners > New listener.

    2

    Double-click the Relational Databases Listener.

    3

    Enter a Name for the new Listener. Optionally, add a Description and some Tags to identify the Listener.

    4

    In the code box at the top of the Configuration section, enter the SQL query to execute. Then, enter the Database Driver and the query timeout (in seconds) in the Query Timeout field.

    5

    In the Tracking Column Configuration section, you can choose to track the last value of a column to be used as a pointer for the next query. This is useful if you want to read data in batches.

    To do it, choose true in the Use Column Value parameter. Then, enter the required Tracking Column and choose the Tracking Column Type.

    Before the first query, the value is set to Thursday, 1 January 1970 (the Unix epoch) if your column type is timestamp, and to 0 if it is numeric.
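Those defaults can be expressed as follows (a Python sketch; `initial_tracking_value` is a hypothetical name, not an Onum API):

```python
from datetime import datetime, timezone

def initial_tracking_value(column_type: str):
    """Initial pointer before the first query: the Unix epoch for
    timestamp columns, 0 for numeric ones."""
    if column_type == "timestamp":
        return datetime.fromtimestamp(0, tz=timezone.utc)
    return 0

epoch = initial_tracking_value("timestamp")  # Thursday, 1 January 1970
```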

    6

    In the Pagination Params section, you can configure the batch size. Default values are 10000 for Limit and 0 for Offset.

    If you plan to use an offset in your query, you must use it as a placeholder. For example: select * from table where id > :offset. In this case, the offset will be set to 0 for the first run. In the second run, the offset will be set to the number of records returned in the previous run.

    You don't need a placeholder for the limit. The Listener will handle this for you based on the database driver.

    If you don't provide pagination configuration, the Listener will wrap every query with limit and offset statements. This is done for performance reasons and to avoid memory leaks.
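The offset behaviour described above can be sketched like this (a hypothetical `fetch` callable stands in for the real query execution):

```python
def run_paginated(fetch, limit=10000):
    """Sketch of offset handling: the :offset placeholder starts at 0
    and advances by the number of records the previous run returned."""
    offset = 0
    while True:
        rows = fetch(offset, limit)
        if not rows:
            break
        yield from rows
        offset += len(rows)  # the next run starts where this one ended

data = list(range(25))
events = list(run_paginated(lambda off, lim: data[off:off + lim], limit=10))
```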

    7

    In the Scheduler section, you can schedule the execution of your query. It will generate a cron expression based on the given configuration. If the query execution time overlaps with the scheduled time, tasks will be skipped. It runs in single mode.

    Use the Execute Every and Interval Unit fields to indicate the required execution schedule.

    Examples

    • Execute Every: 1 / Interval Unit: seconds - It will generate the cron expression:
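A hypothetical sketch of how such a mapping could work (seconds use the 6-field extended cron syntax; the exact expression Onum generates may differ):

```python
def interval_to_cron(every: int, unit: str) -> str:
    """Map Execute Every / Interval Unit to a cron-like expression.
    Illustrative only; not Onum's generator."""
    if unit == "seconds":
        return f"*/{every} * * * * *"   # 6-field extended syntax
    if unit == "minutes":
        return f"*/{every} * * * *"
    if unit == "hours":
        return f"0 */{every} * * *"
    raise ValueError(f"unsupported unit: {unit}")
```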

    8

    In the Connection Params section, you can simply provide a Connection URL or insert your Username, Password, Host, Port and Database name.

    If you want to enter a connection URL, you must add it to Onum as a secret. To do it, open the Connection URL field and click New secret:

    • Give the secret a Name.

    9

    In the Persistent state section, enter the path of the file where the last tracking column will be stored. A file with the name state.json will be created in the specified path.

    10

    Finally, click Create labels. Optionally, you can set labels to be used for internal Onum routing of data. By default, data will be set as Unlabeled. Click Create listener when you're done.

    Learn more about labels in this article.

    Overview

    Onum supports integration with Google Pub/Sub.

    Google Pub/Sub is an asynchronous and scalable messaging service that decouples services producing messages from services processing those messages. Pub/Sub allows services to communicate asynchronously.

Google Cloud Pub/Sub Setup

    To source data from Google Cloud Pub/Sub you need to have a Google Cloud project, appropriate roles and permissions to run Pub/Sub, and enable the Pub/Sub API.

    See Google Cloud Pub/Sub documentation for help on how to set these up.

    Onum Setup

    1

    Log in to your Onum tenant and click Listeners > New listener.

    2

    Double-click the Google Pub/Sub Listener.

    3

    Enter a Name for the new Listener. Optionally, add a Description and some Tags to identify the Listener.

    4

    The Project ID* is a unique string with the following format: my-project-123456. To get it:

    1. Go to the Google Cloud Console.

    2. In the top left corner, click on the project drop-down next to the Google Cloud logo (where your current project name is shown).

    3. Each project will have a Project Name and a Project ID.

    5

Enter your Subscription Name*. Follow these steps to find it:

    1. Go to Pub/Sub in the Google Cloud Console.

    2. In the top left corner, click on the menu and select View all Products.

    6

The Google Cloud connector uses OAuth 2.0 credentials for authentication and authorization. Select the credentials from your existing secrets or click New secret to generate a new one.

    1. To find the Google Cloud credentials file, go to Settings > Interoperability.

    2. Scroll down to the Service Account area.

    7

    Bulk Messages Configuration

    Decide whether or not to activate the bulk message option using the Enabled* field.

    Then, choose the required message format and enter the characters you want to use as delimiters, if required. A delimiter character code refers to the numerical representation (usually in ASCII or Unicode) of a delimiter.
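For instance, character code 10 is the ASCII newline; splitting a bulk message on that delimiter can be sketched as:

```python
DELIMITER_CODE = 10          # ASCII character code for "\n" (newline)
delimiter = chr(DELIMITER_CODE)

# A bulk message carrying three events separated by the delimiter
bulk_message = "event one" + delimiter + "event two" + delimiter + "event three"
events = bulk_message.split(delimiter)
```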

    8

    Finally, click Create labels. Optionally, you can set labels to be used for internal Onum routing of data. By default, data will be set as Unlabeled. Click Create listener when you're done.

    Learn more about labels in .


    • All Data - Events that follow the structure defined by the specified protocol, for example, Syslog events with the standard fields, or most of them.

    • Unparsed - These are events that do not follow the structure defined in the selected protocol.

    You can define filters and rules for each of these main categories.

    What Are Labels Used For?

    Once you've defined your labels to filter specific events, you can use them in your Pipelines.

Instead of using the whole set of events that come into your Listeners, you can use your defined labels to work with only specific sets of data filtered by specific rules.

    Creating Your First Label

When you create a new Listener, you'll be taken to the Labels screen after configuring your Listener data.

    1

Click the + button under the set of data you want to filter (All Data or Unparsed). You'll see your first label. Click the pencil icon and give it a name that describes the data it will filter out.

    In this example, we want to filter only events whose version is 2.x, so we named our label accordingly:

    2

    Below, see the Add filter button. This is where you add the criteria to categorize the content under that label. Choose the field you want to filter by.

    In this example, we're choosing Version.

    3

    Now, define the filter criteria:

    • Condition - Choose between:

      • Contains - Checks when the indicated value appears anywhere in the log.

    4

    Click Save and see the header appear for your first label.

    From here, you have various options:

    Create a new label

    To create a new subset of data, select the + sign that extends directly from the All data or Unparsed bars. Be aware that if you select the + sign extending from the header bar, you will create a subheader.

    Create a sub-label

You can create a branch from your primary header by clicking the plus button that extends from the main header. There is no limit to the number of sub-labels you can add.

    Notice that the subheader shows a filter icon with a number next to it to indicate the string of filters applied to it already.

    Duplicate your label

    To duplicate a label, simply select the duplicate button in its row.

    Delete a label

    To delete a label, simply select the delete button in its row.

    If you attempt to delete a Label that is being used in a Pipeline, you will be asked to confirm where to remove it from.

    Once you have completed your chain, click Save.


    Unlabeled

    Any data that has not been assigned a label will be automatically categorized as unlabeled. This allows you to see the data that is not being processed by any Pipeline, but has not been lost.

    This label will appear in the list of Labels for use in your Pipeline so that you can process the data in its unfiltered form.

    Your Listener is now ready to use and will appear in the list.

    Pipelines

    Note that this Listener is only available in certain Tenants. Get in touch with us if you don't see it and want to access it.

    Overview

    Onum supports integration with Google Cloud Storage.

    Google Cloud Storage is an online object storage service that allows users to store and retrieve data. It is a managed service, meaning Google handles the underlying infrastructure, making it scalable and reliable. GCS is designed for a variety of use cases, including storing data for web applications, big data analytics, and backups.

    Google Cloud Storage Setup

    To source data from Google Cloud Storage you need to have a GCS bucket with data, appropriate permissions (like Storage Admin) to access the bucket and its objects, and the correct resource path (e.g., gs://bucket-name/object-name).

    See the Google Cloud Storage manual for help.

    Onum Setup

    1

    Log in to your Onum tenant and click Listeners > New listener.

    2

    Double-click the Google Cloud Storage Listener.

    3

    Enter a Name for the new Listener. Optionally, add a Description and some Tags to identify the Listener.

    4

The Google Cloud connector uses OAuth 2.0 credentials for authentication and authorization. Create a new secret containing these credentials or select one already created. To get it:

    1. To find the Google Cloud credentials file, go to Settings > Interoperability.

    2. Scroll down to the Service Account area.

3. You need to generate and download a service account key from the Google Cloud Console. An existing key cannot be viewed again, so you must have it copied somewhere already. Otherwise, create a new key here and save it so you can paste it in this field.

    5

    Assign an optional Event Delimiter to simulate a hierarchical directory structure within a flat namespace.

    6

    Choose the compression type for your files (None, Gzip, Bzip2 or Auto).

    Learn more about secrets in Onum in .

    7

    If you set the Read Bucket Once parameter to true, the Listener will read the entire bucket once and stop the execution. You'll be prompted to enter the following:

    • Prefix - The optional string that acts like a folder path or directory structure when organizing objects within a bucket.

    • Bucket* - Enter the GCP bucket name.

    8

    The Project ID* is a unique string with the following format: my-project-123456. To get it:

    1. Go to the Google Cloud Console.

    2. In the top left corner, click on the project drop-down next to the Google Cloud logo (where your current project name is shown).

    9

    Enter your subscription name. Follow these steps to get it:

    1. Go to Pub/Sub in the Google Cloud Console.

    2. In the top left corner, click on the menu and select View all Products.

    10

    In case of a failure to connect, enter the following parameters:

    • Number of retries* - Enter the maximum number of retries to perform in case of a failure. The minimum value is 1, and the maximum value is 5. The default value is 3.

    11

    Finally, click Create labels. Optionally, you can set labels to be used for internal Onum routing of data. By default, data will be set as Unlabeled. Click Create listener when you're done.

    Learn more about labels in .


    Overview

    The Bring Your Own Code Action enables dynamic execution of user-provided Python code in isolated environments in an Onum pipeline. This way, you can use your own Python code to enrich or reduce your events directly.

    In order to configure this Action, you must first link it to a Listener or another Action. Go to Building a Pipeline to learn how this works.
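As an illustration, user-provided code might look like the following (hypothetical: the `enrich` function name, signature, and event format are illustrative, not Onum's actual contract for this Action):

```python
import json

def enrich(event: str) -> str:
    """Hypothetical enrichment function: parse a JSON event, add a
    derived field, and return the enriched event as JSON."""
    record = json.loads(event)
    # Flag events with high severity so downstream Actions can route them
    record["is_critical"] = record.get("severity", 0) >= 8
    return json.dumps(record)

enriched = enrich('{"severity": 9, "msg": "disk failure"}')
```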

    Ports

    These are the input and output ports of this Action:

    Input ports
    • Default port - All the events to be processed by this Action enter through this port.

    Output ports
    • Default port - Events are sent through this port if no error occurs while processing them.

    • Error port - Events are sent through this port if an error occurs while processing them.

    Configuration

    1

    Find Bring Your Own Code in the Actions tab (under the Advanced group) and drag it onto the canvas. Link it to the required Listener and Data sink.

    2

    To open the configuration, click the Action in the canvas and select Configuration.

    3

    Enter the required parameters:

    Configuration

    To indicate where you want to execute your code, you must either choose a Docker client instance or enter its corresponding IP/port in the configuration options below.

    Parameter
    Description

    Code

In future updates of this Action, you'll be able to upload your code as a .zip file. This option is currently not available.

    Paste your Python File in this area. You can include any required Dependencies in the corresponding tab.

    AI Assistant

    You can use the AI Assistant to generate the Python code you require. Simply click the icon at the bottom of the configuration menu and enter the prompt that indicates the results that you need.

    Learn more about our AI Assistant in .

    4

    Finally, give your Output Field a name. Click Add field if you need to add any additional fields.

    5

    Click Save to complete.

    Get in touch with us
    Prerequisites

Contact Onum to get the cert information required for TLS communication, which you will need during the Listener setup.

    TCP Setup

    Transmission Control Protocol (TCP) is not a collector itself but a transport protocol that a collector component uses to receive data. In the context of observability and OpenTelemetry (OTel), you set up the OpenTelemetry Collector to listen on a TCP port using a specific Receiver component.

    Onum Setup

    1

    Log in to your Onum tenant and click Listeners > New listener.

    2

    Double-click the TCP Listener.

    3

    Enter a Name for the new Listener. Optionally, add a Description and some Tags to identify the Listener.

    4

    Enter the IP Port* and Trailer Character*

    Note that you won't see the Port setting in the creation form if you're defining this Listener in a Cloud instance, as these are already provided by Onum. Learn more about Cloud Listeners in .

The trailer character marks the end of each event in the incoming TCP stream (for example, a newline), allowing the Listener to split the stream into individual events.
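Splitting a TCP stream into events at a trailer character can be sketched as follows (illustrative; the `;` trailer is just an example value):

```python
TRAILER = b";"  # hypothetical trailer character configured on the Listener

def split_events(buffer: bytes):
    """Split a TCP byte stream into events at the trailer character,
    returning complete events plus any trailing partial data."""
    *events, remainder = buffer.split(TRAILER)
    return [e.decode() for e in events], remainder

# The last chunk has no trailer yet, so it is kept for the next read
events, leftover = split_events(b"login ok;logout;partial ev")
```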

    5

    In the TLS configuration section, enter the data you received from the Onum team (Certificate, Private key and CA chain). Choose No client certificate as Client authentication method and TLS v.1.0 as the Minimum TLS version.

    Note that the parameters in this section are only mandatory if you decide to include TLS authentication in this Listener. Otherwise, leave blank.

    6

    These values are stored as Secrets in Onum. Open the Secret fields and click New secret to create a new one:

    • Give the secret a Name.

    • Turn off the Expiration date option.

    7

    Finally, click Create labels. Optionally, you can set labels to be used for internal Onum routing of data. By default, data will be set as Unlabeled. Click Create listener when you're done.

    Learn more about labels in .


Trend Micro uses nextLink-based pagination for the /workbench/alerts endpoint. NextLink-based pagination relies on a nextLink field in each API response that provides the URL for the next call.
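The nextLink loop can be sketched as follows (a simulation; `fetch` stands in for the real HTTP call to the alerts endpoint):

```python
def fetch(url, pages):
    """Stand-in for an HTTP GET against the /workbench/alerts endpoint."""
    return pages[url]

def collect_all_alerts(start_url, pages):
    alerts, url = [], start_url
    while url:
        response = fetch(url, pages)
        alerts.extend(response["items"])
        # Follow the nextLink until the API stops returning one
        url = response.get("nextLink")
    return alerts

# Simulated API: two pages, the first linking to the second
pages = {
    "/workbench/alerts": {"items": [1, 2], "nextLink": "/workbench/alerts?page=2"},
    "/workbench/alerts?page=2": {"items": [3]},
}
alerts = collect_all_alerts("/workbench/alerts", pages)
```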

    Configuration

    Parameters

    Name - domain

    Value - trendMicroDomain

    Secrets

    • TrendMicroBearerToken refers to the Bearer Token used to authenticate the connection to Trend Micro.

    After entering the required secrets, you can choose to manually enter the Trend Micro OAT fields, or simply paste the given YAML:

Toggle this ON to enable a free text field where you can paste your Trend Micro YAML.

    Temporal Window

    Toggle ON to add a temporal window for events. This repeatedly shifts the time window over which data is collected.

    • Duration - 5 minutes (5m) as default, adjust based on your needs.

    • Offset - 5m

    • Format - RFC3339
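The shifting window can be illustrated as follows (a conceptual sketch, not Onum's implementation): each run collects data for the interval ending `offset` before the current time and spanning `duration`.

```python
from datetime import datetime, timedelta, timezone

def temporal_window(now, duration_minutes=5, offset_minutes=5):
    """Each run collects data for [now - offset - duration, now - offset)."""
    end = now - timedelta(minutes=offset_minutes)
    start = end - timedelta(minutes=duration_minutes)
    # RFC3339-style timestamps, e.g. 2024-05-17T14:30:00+00:00
    return start.isoformat(), end.isoformat()

now = datetime(2024, 5, 17, 14, 40, tzinfo=timezone.utc)
start, end = temporal_window(now)
```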

    Authentication Phase

    OFF

    Enumeration Phase

    OFF

    Collection Phase

    • Pagination Type* - Next Link at Response body

    • Next Link Selector* - .nextLink

    • Request

    This HTTP Pull Listener now uses the API to extract events.

    Click Create labels to move on to the next step and define the required Labels if needed.

    Used to represent whole numbers without any fractional or decimal component. Integers can be positive, negative, or zero.

    25

    A list of integer values separated by commas.

    1, 2, 3, 4

    Used to represent real numbers with fractional parts, allowing for the representation of a wide range of values, including decimals.

    1.2

    A list of float values separated by commas.

    0.1, -1.0, 2.0

    Sequence of characters or encoded information that identifies the precise time at which an event occurred.

    2024-05-17T14:30:00Z

    A list of timestamps separated by commas.

    2024-05-17T14:30:00Z, 2022-10-19T14:30:04Z, 1998-04-10T14:49:00Z

    A fundamental data type in computer programming that represents one of two possible values: true or false.

    true

    A list of boolean values separated by commas.

    true, false, true

    A simple and widely used file format for storing tabular data, such as a spreadsheet or database. In a CSV file, each line of the file represents a single row of data, and fields within each row are separated by a delimiter, usually a comma.

    id,name,price
    1,Apple,0.99
    2,Banana,0.59
    3,Cherry,1.29

    XML (Extensible Markup Language) is a markup language designed for encoding documents in a format that is both human-readable and machine-readable.

    <Book>
        <Title>Example Title</Title>
        <Author>Author Name</Author>
    </Book>

    In a JSON, fields are represented by keys within objects, and the corresponding values can be of any JSON data type. This flexibility allows a JSON to represent structured data in a concise and readable manner, making it suitable for various applications, especially in web development and API communication.

{
  "items": [
    {
      "id": 1,
      "name": "Apple"
    },
    {
      "id": 2,
      "name": "Banana"
    }
  ]
}

    A key-value pair is a data structure commonly used in various contexts, including dictionaries, hash tables, and associative arrays. It consists of two components: a key and its corresponding value.

    name = Alice
    age = 30
    city = Paris

    Characters that separate individual fields or columns of data. The delimiter ensures that each piece of data within a row is correctly identified and separated from the others.

    /
    "hello world"
    "hello", "my", "name", "is", "John"

    Users

    Overview

    List of users visible to the currently-scoped User.

    Configuration

    Parameters

    • parameters.domain will store the value of the API URL, excluding the endpoint paths like /v1/cp/oauth/token or /v1/cp/domains

    Secrets

• secrets.client_id will reference Agari's Client ID.

• secrets.client_secret will reference Agari's Client Secret.

    Open the Secret fields and click New secret to create a new one:

    • Give the secret a Name.

    • Turn off the Expiration date option.

    • Click Add new value and paste the secret corresponding to the value.

    • Click Save.

    Learn more about secrets in Onum in .

    You can now select the secret you just created in the corresponding fields.

    After entering the required parameters and secrets, you can choose to manually enter the fields, or simply paste the given YAML:

    Configure as YAML

    Toggle this ON to enable a free text field where you can paste your YAML.

    Manually configure

    If you would rather configure each field, follow the steps below.

    Temporal Window

    Toggle ON to add a temporal window for events. This repeatedly shifts the time window over which data is collected.

    • Duration - 5 minutes (5m) as default, adjust based on your needs.

• Offset - The initial offset should be 5m; it shifts the starting point of the collection window.

    • Format - RFC3339

    Authentication Phase

    Toggle ON to configure the authentication phase. This is required to get the token to pull data using OAuth.

    • Type* - token

    • Request Method* - POST (we would need to generate the JWT using the secrets client_id and client_secret)
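The credential exchange can be sketched as follows (a sketch; the example domain and the `grant_type` value are assumptions, and the credentials shown are placeholders that would come from `parameters` and `secrets` in Onum):

```python
from urllib.parse import urlencode

# Placeholder values; in Onum these come from parameters/secrets
domain = "api.example.com"          # parameters.domain
client_id = "my-client-id"          # secrets.client_id
client_secret = "my-client-secret"  # secrets.client_secret

# POST target for exchanging client credentials for a token
token_url = f"https://{domain}/v1/cp/oauth/token"

# Form-encoded body of the POST request (grant_type is an assumption)
body = urlencode({
    "client_id": client_id,
    "client_secret": client_secret,
    "grant_type": "client_credentials",
})
```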

    Enumeration Phase

    OFF

    Collection Phase

• Pagination Type* - offsetLimit. To page through a collection of items, set the offset parameter to the first item of the next page. The offset calculation for the next page is: current_offset + current_count. The end of the collection is reached when the count of returned items is less than the limit.

    • Limit - 200 (Maximum number of collection items to include in a response)
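The offset calculation above can be sketched as a loop (a simulation; `get_page` stands in for the real API request with offset and limit query parameters):

```python
def get_page(items, offset, limit):
    # Stand-in for one API request with offset/limit query parameters
    return items[offset:offset + limit]

def collect(items, limit):
    collected, offset = [], 0
    while True:
        page = get_page(items, offset, limit)
        collected.extend(page)
        if len(page) < limit:   # fewer items than the limit: last page
            break
        offset += len(page)     # current_offset + current_count
    return collected

users = [f"user-{i}" for i in range(450)]
result = collect(users, limit=200)  # 3 requests: 200, 200, 50 items
```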

    This HTTP Pull Listener now uses the data export API to extract alert events.

    Click Create labels to move on to the next step and define the required if needed.

    Incident Management - Incidents

    Overview

    Get a list of incidents filtered by a list of incident IDs, modification time, or creation time. This includes all incident types and severities, including correlation-generated incidents.

    • The response is concatenated using AND condition (OR is not supported).

• The maximum result set size is 100.

    • Offset is the zero-based number of incidents from the start of the result set.

    Configuration

    Parameters

    Name - domain

    Value - CortexXdrDomain

    Secrets

    • CortexXDRAuthorization will reference the Cortex XDR Authorization token.

• CortexXDRAuthId will reference the Cortex XDR Auth ID.

    To add a Secret, open the Secret fields and click New secret:

    • Give the secret a Name.

    • Turn off the Expiration date option.

    • Click Add new value and paste the secret corresponding to the value.

    • Click Save.

    Learn more about secrets in Onum in .

    You can now select the secret you just created in the corresponding fields.

After entering the required parameters and secrets, you can choose to manually enter the Cortex Incident Management fields, or simply paste the given YAML:

Toggle this ON to enable a free text field where you can paste your Cortex XDR incidents YAML.

    Temporal Window

    Toggle ON to add a temporal window for events. This repeatedly shifts the time window over which data is collected.

    Authentication Phase

    Off

    This HTTP Pull Listener now uses the data export API to extract events.

    Click Create labels to move on to the next step and define the required if needed.

    Incident Management - Multi Alerts

    Overview

    Get a list of alerts with multiple events.

    • The response is concatenated using AND condition (OR is not supported).

    • The maximum result set size is 100.

    • Offset is the zero-based number of alerts from the start of the result set.

    Cortex XDR displays in the API response whether a PAN NGFW type alert contains a PCAP triggering packet. Use the Retrieve PCAP Packet API to retrieve a list of alert IDs and their associated PCAP data.

    Required license: Cortex XDR Prevent, Cortex XDR Pro per Endpoint, or Cortex XDR Pro per GB.

    Configuration

    Parameters

    Name - domain

    Value - CortexXdrDomain

    Secrets

    • CortexXDRAuthorization will reference the Cortex XDR Authorization token.

• CortexXDRAuthId will reference the Cortex XDR Auth ID.

    To add a Secret, open the Secret fields and click New secret:

    • Give the secret a Name.

    • Turn off the Expiration date option.

    • Click Add new value and paste the secret corresponding to the value.

    • Click Save.

    Learn more about secrets in Onum in .

    You can now select the secret you just created in the corresponding fields.

After entering the required parameters and secrets, you can choose to manually enter the Cortex Incident Management fields, or simply paste the given YAML:

    Toggle this ON to enable a free text field where you can paste your Cortex XDR multi alerts YAML.

    Temporal Window

    Toggle ON to add a temporal window for events. This repeatedly shifts the time window over which data is collected.

    Authentication Phase

    Off

    This HTTP Pull Listener now uses the data export API to extract events.

    Click Create labels to move on to the next step and define the required if needed.

    Reports

    Overview

    Get the reports that match the filter and the data of the reports.

    Configuration

    Parameters

    • Domain (Domain)

    Secrets

    • cisco_auth corresponds to the API Token used to authenticate the connection to Cisco Umbrella.

    To add a Secret, open the Secret fields and click New secret:

    • Give the secret a Name.

    • Turn off the Expiration date option.

    • Click Add new value and paste the secret corresponding to the value.

    • Click Save.

    Learn more about secrets in Onum in .

    You can now select the secret you just created in the corresponding fields.

After entering the required parameters and secrets, you can choose to manually enter the Reports fields, or simply paste the desired YAML.

    Configure as YAML

    Manually Configure

    Temporal Window

    Toggle ON to add a temporal window for events. This repeatedly shifts the time window over which data is collected.

    • Duration - 5 minutes (5m) as default, adjust based on your needs.

    • Offset - 5m

    • Format - EpochMillis

    Authentication Stage

    Toggle ON to configure the authentication phase. This is required to get the token to pull data using OAuth.

    • Type* - token

    • Request Method* - POST

    • URL* - https://${parameters.domain}/auth/v2/token

    Collection Phase

    • Pagination Type* - offsetLimit

    • Limit - 100

    • Zero Index - false

    Click Create labels to move on to the next step and define the required if needed.

    Collect data from Apache Kafka

    Most recent version: v2.1.1

    See the changelog of this Listener type .

    This is a Pull Listener and therefore should not be used in environments with more than one cluster.

    Overview

    Onum supports integration with .

    Apache Kafka is a distributed, fault-tolerant, high-throughput, and scalable streaming platform. It's used for building real-time data pipelines and streaming applications.

    Select Apache Kafka from the list of Listener types and click Configuration to start.

    Prerequisites

    In order to use this Listener, you must activate the environment variable in your distributor using docker compose (KAFKA_LISTENER_EXECUTION_ENABLED=true)

    Apache Kafka Setup

    You will need to set up a running Kafka cluster, with optional group IDs and Topics.

    Onum Setup

    1

    Log in to your Onum tenant and click Listeners > New listener.

    2

    Double-click the Apache Kafka Listener.

    3

    Enter a Name for the new Listener. Optionally, add a Description and some Tags to identify the Listener.

    Collect data from Microsoft 365

    Most recent version: v0.0.3

    See the changelog of the Microsoft 365 Listener .

    This is a Pull Listener and therefore should not be used in environments with more than one cluster.

    Overview

    Onum supports integration with Office 365 through the .

    Office 365 provides a suite of cloud-based productivity tools and services, including apps like Word, Excel, PowerPoint, and Teams, along with online storage via OneDrive and advanced security features.

    Prerequisites

    1. You must register an application in Microsoft Entra ID (formerly Azure AD).

    2. After registration, you'll need the Application (Client) ID, the Directory (Tenant) ID, and either a Client Secret (password) or a Certificate for authentication.

    3. You must grant the necessary Microsoft Graph API permissions (e.g., Mail.Read.All, User.Read.All, Sites.Read.All).

    Onum Setup

    1

    Log in to your Onum tenant and click Listeners > New listener.

    2

    Double-click the Office 365 Listener.

    3

    Enter a Name for the new Listener. Optionally, add a

    Click Create listener when you're done.

    Collect data from Zscaler

    Zscaler (Nanolog Streaming Service) to Onum HTTP Listener (with TLS)

    See the changelog of the HTTP Listener .

    Overview

    The following article outlines a basic data flow from Zscaler's Nanolog Streaming Service (NSS) to the Onum HTTP Listener.

    Prerequisites

Contact Onum to get the cert information required for TLS communication, which you will need during the Listener setup.

Zscaler NSS Setup

    Identify the NSS Feeds you want to send in the . Configure the required ingestion setup following the steps in the documentation.

    Important notes

    • The SIEM type will be Other.

    • You must generate a JWT token and add it as an HTTP header. Add the word Bearer before the token value (Bearer <token>). The corresponding secret value will be added in the Onum configuration later.

Contact us if you cannot generate a JWT token.

    Onum Setup

    1

    Log in to your Onum tenant and click Listeners > New listener.

    2

    Double-click the HTTP Listener.

    3

    Enter a Name for the new Listener. Optionally, add a Description and some Tags to identify the Listener.

    Click Create listener when you're done.

    Collect data using OpenTelemetry

    Most recent version: v0.0.1

    See the changelog of the OpenTelemetry Listener type .

    Overview

Onum supports integration with OpenTelemetry.

    OpenTelemetry is a collection of APIs, SDKs, and tools. Use it to instrument, generate, collect, and export telemetry data (metrics, logs, and traces) to help you analyze your software’s performance and behavior.

    Prerequisites

Contact Onum to get the cert information required for TLS communication, which you will need during the Listener setup.

    Open Telemetry Setup

In order to begin sending data, you must first implement the OpenTelemetry SDK and instrumentation.

    Then, you'll need to configure the OpenTelemetry Collector.

    The Collector is configured via a YAML file (config.yaml), which defines a processing pipeline with three main component types:

    • Receivers: Define how the Collector accepts incoming telemetry. The most common is otlp (OpenTelemetry Protocol), which listens for data from your applications over gRPC (port 4317) or HTTP (port 4318).

    • Processors: Define how data is modified, filtered, or enriched (e.g., batch for efficient export, memory_limiter to prevent crashes, or processors to add metadata).

    • Exporters: Define where the Collector sends the data.

    A minimal configuration to receive OTLP data and export it to an external backend looks like this:
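For example (a sketch; the ports follow OTLP defaults, and the exporter endpoint is a placeholder to replace with the endpoint provided for your Onum Listener):

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318

processors:
  batch:

exporters:
  otlphttp:
    # Placeholder: point this at your Onum Listener endpoint
    endpoint: https://your-onum-listener.example.com:4318

service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp]
```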

    Start the Collector, pointing it to your configuration file.

    Set an environment variable in your application's host environment:

    • OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318 (as you will be running the collector locally over HTTP).

    The application will now generate telemetry and send it to Onum.

    Onum Setup

    1

    Log in to your Onum tenant and click Listeners > New listener.

    2

    Double-click the Open Telemetry Listener.

    3

    Enter a Name for the new Listener. Optionally, add a Description and some Tags to identify the Listener.

    Click Create listener when you're done.

    Collect data from OKTA

    Overview

    Get system logs using the OKTA API.

    Configuration

    Parameters

• parameters.mydomain will store the value of the API URL, excluding endpoint paths like /api/v1/logs

    Secrets

    • Auth Token (OktaAuthorization)

    To add a Secret, open the Secret fields and click New secret:

    • Give the secret a Name.

    • Turn off the Expiration date option.

    • Click Add new value and paste the secret corresponding to the value.

    • Click Save.

    Learn more about secrets in Onum in .

    You can now select the secret you just created in the corresponding fields.

    After entering the required parameters and secrets, you can choose to manually enter the OKTA System Log fields, or simply paste the given YAML:

    Toggle this ON to enable a free text field where you can paste your YAML.

    Temporal Window

    Toggle ON to add a temporal window for events. This repeatedly shifts the time window over which data is collected.

    • Duration - 5 minutes (5m) as default, adjust based on your needs.

    • Offset -

    This HTTP Pull Listener now uses the data export API to extract events.

    Click Create labels to move on to the next step and define the required if needed.

    Anonymizer

    Most recent version: v0.0.1

    See the changelog of this Action type .

    Overview

    The Anonymizer Action modifies sensitive data to remove or mask personally identifiable information, ensuring privacy.

    In order to configure this action, you must first link it to a Listener or another Action. Go to to learn how this works.

    Ports

    These are the input and output ports of this Action:

    Input ports
    • Default port - All the events to be processed by this Action enter through this port.

    Output ports
    • Default port - Events are sent through this port if no error occurs while processing them.

    • Error port - Events are sent through this port if an error occurs while processing them.

    Configuration

    1

    Find Anonymizer in the Actions tab (under the Advanced group) and drag it onto the canvas. Link it to the required and .

    2

    To open the configuration, click the Action in the canvas and select Configuration.

    3

    Example

    Let's say we have a list of IPs we wish to anonymize in one of our events fields. To do it:

    1

    Add the Anonymizer Action to your Pipeline and link it to your required Listener.

    2

    Now, double-click the Anonymizer Action to configure it. You need to set the following config:

    Operation
    Parameters

    This is how your data will be transformed:

    Domains

    Overview

    Get a list of the domains for the currently-scoped Organization.

    Configuration

    Collect data from FortiRecon

    Overview

Where the vendor is Fortinet, its product is FortiRecon. FortiRecon currently offers the following product types/endpoints:

    • Accounts

    Collect data from Prisma Cloud

    Overview

    Get a list of all audit logs. Retrieves paginated audit logs based on the provided filter criteria.

    Configuration

    Observed Attack Techniques (OAT)

    Overview

    Get a list of observed attack techniques. This endpoint is used to retrieve detailed attack activity observed across the environment and map it to the MITRE ATT&CK framework.

    • The response contains an array of activities under the items field.

Trend Micro uses nextLink-based pagination for the /oat endpoint. Each API response contains a nextLink field with the URL for the next call; you follow it until it is no longer present.
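The nextLink loop can be sketched as follows — an illustrative Python sketch, not Onum's internal implementation; `request` stands in for the HTTP call and the URLs are hypothetical placeholders:

```python
def collect(request, first_url):
    """Follow nextLink-based pagination: call each URL the API hands back."""
    items, url = [], first_url
    while url:
        resp = request(url)
        items.extend(resp.get("items", []))  # activities live under "items"
        url = resp.get("nextLink")           # absent on the final page
    return items

# Simulated responses keyed by URL (placeholder addresses)
responses = {
    "https://api.example/v3.0/oat/detections":
        {"items": ["a"], "nextLink": "https://api.example/v3.0/oat/detections?skipToken=2"},
    "https://api.example/v3.0/oat/detections?skipToken=2":
        {"items": ["b"]},
}
print(collect(responses.get, "https://api.example/v3.0/oat/detections"))
```

Collection stops naturally when a response omits nextLink, so no page count or offset bookkeeping is needed.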

    Organizations

    Overview

Get a list of the descendant Organizations for the currently-scoped Organization.

    Configuration

    Threats

    Overview

    Get a list of all threats. This endpoint is used to retrieve audit and threat logs.

    • The response contains an array of activities under the data field.

SentinelOne uses cursor-based pagination for the /threats endpoint. Cursor-based pagination relies on a pointer (cursor) that refers to the next set of results. Each API response contains a nextCursor field. You pass that cursor value in your next request using the cursor query parameter to get the next page. For that reason, we define pagination as cursor and define an initialRequest and a nextRequest under collection.
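The cursor loop described above can be sketched as follows (an illustrative Python sketch, not Onum's implementation; `request_page` stands in for the initialRequest/nextRequest HTTP calls):

```python
def fetch_all(request_page):
    """Collect every page: pass each response's nextCursor back as the cursor."""
    events, cursor = [], None
    while True:
        page = request_page(cursor)          # initialRequest, then nextRequest
        events.extend(page.get("data", []))  # events live under the data field
        cursor = page.get("nextCursor")
        if not cursor:                       # last page carries no nextCursor
            return events

# Simulated /threats responses keyed by cursor value
pages = {None: {"data": [1, 2], "nextCursor": "abc"},
         "abc": {"data": [3]}}
print(fetch_all(pages.get))  # → [1, 2, 3]
```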

    Reputation logs

    Get a list of all Reputation logs in Guardicore.

    Configuration

    Parameters

    Connections

    Get a list of all connections to Guardicore.

    Configuration

    Parameters

    Incident Management - Alerts

    Overview

    Get a list of all or filtered alerts. The alerts listed are what remains after alert exclusions are applied by Cortex XDR.

    • Response is concatenated using AND condition (OR is not supported).

    Collect data from Azure Blob Storage

    Most recent version: v0.0.1

See the changelog of the Azure Blob Storage Listener.

    Overview

Onum supports integration with Azure Blob Storage.

    Activities

    Overview

    Get a list of all activities. This endpoint is used to retrieve audit and activity logs related to users, agents, threats, policies, etc.

    • The response contains an array of activities under the data field.

SentinelOne uses cursor-based pagination for the /activities endpoint. Cursor-based pagination relies on a pointer (cursor) that refers to the next set of results. Each API response contains a nextCursor field. You pass that cursor value in your next request using the cursor query parameter to get the next page. For that reason, we define pagination as cursor and define an initialRequest and a nextRequest under collection.

    AI Action Assistant

    Just ask, and the assistant helps you

    Note that this feature is only available for certain Tenants. Contact us if you need to use it and don't see it in your Tenant.

    Overview

    The Action Assistant is an AI-powered chat feature designed to help users configure their within a

    For Each

    Most recent version: v0.0.2

See the changelog of this Action type.

    Overview

The For Each Action splits a list field with multiple entries into separate output events, each carrying the position it occupies in the list (the first position being 0).
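The splitting behavior can be sketched like this (an illustrative Python sketch; the output field names `element` and `position` are placeholders, not Onum's exact schema):

```python
def for_each(event, list_field):
    """Emit one output event per entry of event[list_field], with its index."""
    return [{"element": item, "position": idx}
            for idx, item in enumerate(event.get(list_field, []))]

out = for_each({"ips": ["10.0.0.1", "10.0.0.2"]}, "ips")
print(out)
```

Note that the first entry is emitted with position 0, matching the description above.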

    Redis

    Most recent version: v2.0.0

See the changelog of this Action type.

    Overview

Redis is a powerful in-memory data structure store that can be used as a database, cache, and message broker. It provides high performance, scalability, and versatility, making it a popular choice for real-time applications and data processing.

    FLC config file
     flc-to-onum:
        type: hec
        # Replace with generated token entered in Onum.
        token: <token>
        # Replace with Onum distributor URL & port. Must include the "https://" at the beginning. 
        url: <distributorURL:port>
        tls: 
          # Replace with full file path to CA certificate
          caFile: "<filepath>"
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": [
            "sqs:CreateQueue",
            "sqs:GetQueueAttributes",
            "sqs:SendMessage"
          ],
          "Resource": "*"
        }
      ]
    }
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "kinesis:CreateStream",
        "kinesis:DescribeStream",
        "kinesis:PutRecord"
      ],
      "Resource": "*"
    }
  ]
}
    withTemporalWindow: true
    temporalWindow:
      duration: 5m
      offset: 5
      tz: UTC
      format: RFC3339
    withAuthentication: false
    withEnumerationPhase: false
    collectionPhase:
      paginationType: "responseBodyLink"
      nextLinkSelector: ".nextLink"
      limit: 100
      request:
        method: GET
        url: "https://${parameters.trendMicroDomain}/v3.0/workbench/alerts"
        headers:
          - name: Accept
            value: application/json
          - name: Authorization
            value: "Bearer ${secrets.trendMicroBearerToken}"
        queryParams:
          - name: detectedStartDateTime
            value: "${temporalWindow.from}"
          - name: detectedEndDateTime
            value: "${temporalWindow.to}"
      output:
        select: ".items"
        map: "."
        outputMode: element
    "name": "Banana"
    },
    {
    "id": 3,
    "name": "Cherry"
    }
    ]
    }

    Amazon Kinesis Data Stream Listener

    Ingest data streams from Amazon Kinesis

    Amazon S3 Listener

    Receive data from your Amazon S3 buckets

    Amazon SQS Listener

    Inject queue messages from Amazon SQS

    Azure Blob Storage Listener

    Collect data from a container in Azure Blob Storage

    Apache Kafka Listener

    Send data from your Apache Kafka clusters

    Azure Event Hubs Listener

    Receive messages from a hub in Azure Event Hubs

    Falcon LogScale Collector Listener

    Collect data from your Falcon LogScale Collector

    Google Cloud Storage Listener

    Source data from a Google Cloud Storage bucket

    Google Pub/Sub Listener

    Stream data from your Google Pub/Sub subscriptions

    Microsoft 365 Listener

    Send content from your Microsoft 365 products

    Cisco NetFlow Listener

    Listen for NetFlow packet records

    HTTP Listener

    Listen for HTTP requests

    HTTP Pull Listener

    Pull JSON data from HTTP endpoints

    OpenTelemetry Listener

    Process OpenTelemetry metrics, traces and logs

    SNMP Trapd Listener

    Receive SNMP traps from network devices

    Syslog Listener

    Process Syslog messages

    TCP Listener

    Read data from a TCP stream of bytes

    Relational Databases Listener

    Read data from your databases

    Tick Listener

    Emit synthetic events on a defined schedule

    Contact us

    Click on Queues in the left panel.

  • Locate your queue from the list and click it.

  • The Queue URL will be displayed in the table under URL.

  • This is the correct URL format: https://sqs.<region>.amazonaws.com/<account-number>/<queue-name>

    In the left panel, click on Users.

  • Select your IAM user.

  • Under the Security Credentials tab, scroll to Access Keys and you will find existing Access Key IDs (but not the secret access key).

  • Secret access key* - Add the secret access key from your Secrets or create one.

    Under Access keys, you can see your Access Key IDs, but AWS will not show the Secret Access Key. You must have it saved somewhere. If you don't have the secret key saved, you need to create a new one.
  • Visibility timeout* - Set how long a message received from the queue stays hidden from subsequent receive requests. The minimum value is 1, and the maximum and default value is 10.

  • Wait time* - Set how long each request waits for messages to become available in the notifications queue (long polling). The minimum value is 5, and the maximum and default value is 10.

  • Minimum retry time* - Set the minimum amount of time to wait before retrying. The default and minimum value is 1s, and the maximum value is 10m.

  • Maximum retry time* - Set the maximum amount of time to wait before retrying. The default and minimum value is 1s, and the maximum value is 10m.


    NetFlow v9 is more customizable than v5.

  • IPFIX is based on the IPFIX standard (IP Flow Information Export).

  • sFlow v5 is another flow monitoring protocol that is typically used in high-speed networks.

  • Fields to include* - Select all the fields you wish to include in the output data.

  • Blacklist - blocks certain IPs from being captured or exported.

    Enter the IPs you wish to apply the access control to. Click Add element to add as many as required.

    this article

    Under the Security Credentials tab, scroll to Access Keys, and you will find existing Access Key IDs (but not the secret access key).

    Select Data Streams under Amazon Kinesis in the sidebar.
  • The Stream Name will be in the first column e.g. my-kinesis-stream-prod

  • Shard ID

  • The Shard is the basic unit of capacity in a Kinesis Data Stream, acting like a partition for your data stream and determining how your data is ingested, stored, and consumed.

    Click your data stream name to find your Shard ID in the Shards tab e.g.:

    shardId-000000000000
    shardId-000000000001

    https://console.aws.amazon.com/kinesis

    If you want to use this feature, you must use the :sql_last_value placeholder in your query.

    For example, select * from table where id > :sql_last_value will generate a query like this: select * from table where id > (?, $1, :param1..) limit 10000. On the first run, the placeholder is replaced with the tracking column's initial value; on each subsequent run, it is replaced with the last value of that column seen in the previous run.

    MySQL example

    select * from table t where t.created_at > :sql_last_value will generate a query like this: select * from table t where t.created_at > ? limit 10000.

    SQL server and Oracle examples

    • select * from table t where t.created_at > :sql_last_value OFFSET :offset ROWS and t.created_at is of type timestamp. This will generate a query like this: select * from table t where t.created_at > ? OFFSET :offset ROWS FETCH NEXT 10000 ROWS ONLY

    • case without pagination: select * from table t where t.created_at > :sql_last_value and t.created_at is of type timestamp. will generate a query like this: select * from table t where t.created_at > ? FETCH NEXT 10000 ROWS ONLY. It will use the last column value as pointer.

    • case without pagination and tracking column: select * from table t will generate a query like this: select * from table t OFFSET :offset ROWS FETCH NEXT 10000 ROWS ONLY. It will update the offset based on the query result.
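The placeholder mechanics above can be sketched in a few lines. This is an illustrative sketch only: a real driver binds :sql_last_value as a query parameter rather than splicing text into the SQL string.

```python
def build_query(template, last_value):
    """Substitute :sql_last_value with the tracked value from the previous
    run (real drivers bind it as a parameter; shown as text for clarity)."""
    return template.replace(":sql_last_value", str(last_value))

template = "select * from t where t.id > :sql_last_value"
print(build_query(template, 0))   # first run, initial value
print(build_query(template, 42))  # next run, after seeing max id 42
```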

    "*/1 * * * * *"
    . This will execute the query every second.
  • Execute Every: 1 / Interval Unit: minutes - It will generate the cron expression: "0 */1 * * * * ". This will execute the query every minute.

  • Execute Every: 2 / Interval Unit: hours - It will generate the cron expression: "0 0 */2 * * *". This will execute the query every 2 hours.
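The mapping from the Execute Every / Interval Unit pair to the generated expression can be sketched as below (a sketch reproducing the examples above; the 6-field form puts seconds first):

```python
def interval_to_cron(every, unit):
    """Map the Execute Every / Interval Unit pair onto a 6-field cron
    expression (seconds field first), as in the examples above."""
    patterns = {"seconds": "*/{n} * * * * *",
                "minutes": "0 */{n} * * * *",
                "hours":   "0 0 */{n} * * *"}
    return patterns[unit].format(n=every)

print(interval_to_cron(1, "minutes"))  # 0 */1 * * * *
print(interval_to_cron(2, "hours"))    # 0 0 */2 * * *
```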

  • Turn off the Expiration date option.
  • Click Add new value and paste the secret corresponding to the JWT token you generated before. Remember that the token will be added in the Zscaler configuration.

  • Click Save.

  • You can now select the URL in the Connection URL field.

    this article
  • You can also find it in the Settings tab on the left-hand side.

  • Then go to Analytics and find Pub/Sub. Click it to go to Pub/Sub (you can also use the search bar and type "Pub/Sub").

  • In the Pub/Sub dashboard, select the Subscriptions tab on the left.

  • The Subscription Name will be displayed in this list.

  • You need to generate and download a service account key from the Google Cloud Console. You will not be able to view an existing key, so you must have it copied somewhere already. Otherwise, create a new one and save it so you can paste it here.

  • To see existing Service Accounts, go to the menu in the top left and select APIs & Services > Credentials.

    Equals - Filters for exact matches of the value in the log.
  • Matches - Filters for exact matches of the value in the log, allowing for regular expressions.

  • Value - Enter the value to filter by.

  • In this example, we are setting the Condition to Contains and Value to 2.

    To see existing Service Accounts, go to the menu in the top left and select APIs & Services > Credentials.

    Start at* - This will block the Listener from starting until this timestamp. The required date format is DD/MM/YYYY HH:mm.

    Each project will have a Project Name and a Project ID.

  • You can also find it in the Settings tab on the left-hand side.

  • Then go to Analytics and find Pub/Sub. Click it to go to Pub/Sub (you can also use the search bar and type "Pub/Sub").
  • In the Pub/Sub dashboard, select the Subscriptions tab on the left.

  • The Subscription Name will be displayed in this list.

  • Retry delay* - Enter the number of milliseconds to wait between retries. The minimum and default value is 100, and the maximum value is 1000.
    LF - Line Feed character is a control character used to signify the end of a line of text or the start of a new line.
  • CR+LF - Carriage Return (CR) followed by a Line Feed (LF) character pair, which is commonly used to signify the end of a line in text-based communication.

  • NULL

  • The predefined TLS Certificate*.

  • The Private Key* of the corresponding certificate.

  • The path containing the CA chain certificates.

  • Choose the Client Authentication Method* between No, Request, Require, Verify, and Require & Verify.

  • Select the Minimum TLS version* from the menu.

  • Click Add new value and paste the secret corresponding to the JWT token you generated before. Remember that the token will be added in the Zscaler configuration.
  • Click Save.

  • Learn more about secrets in Onum in this article.

    You can now select the secret you just created in the corresponding fields.


    Method* - GET

  • URL* - https://${parameters.trendMicroDomain}/v3.0/workbench/alerts

  • Headers -

    • Name - Accept

    • Value - application/json

    • Name - Authorization

    • Value - Bearer ${secrets.trendMicroBearerToken}

  • Query Params

    • Name - detectedStartDateTime

    • Value - ${temporalWindow.from}

    • Name - detectedEndDateTime

    • Value - ${temporalWindow.to}

  • Body type* - there is no required body type because the parameters are included in the URL. However, these fields are mandatory, so select raw and enter the {} placeholder.

  • Output

    • Select - .items

    • Map - .

    • Output Mode - element

  • URL* - ${parameters.domain}/v1/cp/oauth/token
  • Headers

    • Name - Content-type

    • Value - application/x-www-form-urlencoded

    • Name - Accept

    • Value - application/json

  • BodyType* - UrlEncoded

    • Body params

      • Name - client_id

      • Value - '${secrets.client_id}'

      • Name - client_secret

      • Value - '${secrets.client_secret}'

  • Token Path* - .access_token

  • Auth Injection

    • In* - header

    • Name* - authorization

    • Prefix - Bearer

    • Suffix - ''

  • Request
    • Response Type - JSON

    • Method* - GET

    • URL* - ${parameters.domain}/v1/cp/users

    • Query Params -

      • Name - offset

      • Value - ${pagination.offset}

      • Name - limit

  • Output

    • Select - .

    • Map - .

    • Output Mode - element

    Enumeration Phase

    Off

    Collection Phase

    • Pagination Type* - fromTo

    • Zero index* - false

    • Limit* - 100

    • Request

      • Response Type* - JSON

      • Method* - POST

      • URL*

    • Output

      • Select - .reply.alerts

      • Map - .

    Cortex XDR Authorization ID
    this article
    Labels
  • Headers

    • Name - Content-type

    • Value - application/x-www-form-urlencoded

    • Name - Accept

    • Value - application/json

    • Name - Authorization

    • Value - ${secrets.cisco_auth}

  • BodyType* - UrlEncoded

    • Body params

      • Name - grant_type

      • Value - client_credentials

  • Token Path* - .access_token

  • Auth Injection

    • In* - header

    • Name* - authorization

    • Prefix - Bearer

    • Suffix - ''

  • Request

    • Response Type - JSON

    • Method* - GET

    • URL* - https://${parameters.domain}/reports/v2/activity

    • Query Params

      • Name - from

      • Value - ${temporalWindow.from}

  • Output

    • Select - .data

    • Map - .

    • Output Mode - element

  • Retry

    • Status codes - [429, 500, 502, 503, 504]

    • Type - fixed

      • Interval - 2s
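The fixed-interval retry behavior configured above can be sketched as follows. This is an illustrative sketch, not Onum's retry engine: `send` stands in for the HTTP call, and `max_attempts` is a cap added here for the example, not a documented setting.

```python
import time

def call_with_retry(send, statuses=(429, 500, 502, 503, 504),
                    interval=2.0, max_attempts=3):
    """Retry send() at a fixed interval while the status is retryable."""
    for attempt in range(max_attempts):
        status, body = send()
        if status not in statuses:
            break                       # success or non-retryable error
        if attempt < max_attempts - 1:
            time.sleep(interval)        # fixed backoff between attempts
    return status, body

# Simulated endpoint: one transient 500, then success (interval=0 for the demo)
responses = iter([(500, None), (200, "ok")])
result = call_with_retry(lambda: next(responses), interval=0)
print(result)  # (200, 'ok')
```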

    4

    Enter the Bootstrap servers. These are the host-port pairs that act as the starting point to access the full set of alive servers in the cluster. Enter your value with format host:port and click Add element to add as many elements as required.

    5

    Enter the Group ID string, which uniquely identifies the group of consumer processes. Find this in your Kafka Cluster at Home > Configuration > Consumer Properties.

    6

We need to let the Listener know the Topics to connect to. Use kafka-topics --bootstrap-server localhost:9092 --describe (replacing localhost:9092 with your broker address) and enter the result here. Click Add element to add as many topics as required.

    7

    Auto offset reset policy*

    This policy defines the behavior when there are no committed positions available or when an offset is out of range. Choose between Earliest, Latest, or None.

    8

    Next we define the Authentication settings below, or select None if no authentication is required.

    For the Plain, Scram, or mTLS settings, some parameters will need to be added as secrets (see below for details)

    Click New secret to create a new one:

    • Give the secret a Name.

    • Turn off the Expiration date option.

    • Click Add new value and paste the secret corresponding value.

    • Click Save.

Learn more about secrets in Onum in this article.

    9

    You can now select the secret you just created in the following fields:

    • Plain - Enter your Username* and select your Password* from your Secrets or create a new one.

    • Scram - Enter the required information:

      • Username* - Enter your username.

      • Password* - Select your password from your or create a new one.

      • SCRAM mechanism* - Choose either SHA-256 or SHA-512.

    • mTLS - Enter the required information:

      • CA Certificate* - Select your CA certificate from your or create a new one.

      • Client certificate* - Select your client certificate from your or create a new one.

    10

Finally, click Create labels. Optionally, you can set labels to be used for internal Onum routing of data. By default, data will be set as Unlabeled.

    Learn more about labels in this article.

    11

    Click Create listener when you're done.

    Apache Kafka
Description and some Tags to identify the Listener.
    4

    Enter your Office 365 Azure Tenant ID*. Find this in the Azure Active Directory > Overview, or in the Properties pane.

    5

    The Application (client) ID* is needed when accessing Office 365 through APIs or applications. For applications registered in other directories, the Application (Client) ID is located in the application credentials.

    1. Go to the Azure Portal.

    2. Find Microsoft Entra ID in the left menu.

    3. Click App registrations under the Manage section.

    4. Select the application you registered (or search for it).

    5. Under Essentials, find Application (client) ID.

    6. Click Copy to clipboard to save it.

    6

    Assign your data a Content Type in the form of reusable columns, document templates, workflows, or behaviors. Click Add element to add the required content types.

    These are the available content values:

    • Audit.AzureActiveDirectory

    • Audit.Exchange

    • Audit.SharePoint

    • Audit.General (includes all other workloads not included in the previous content types)

    • DLP.All (DLP events only for all workloads)

For details about the events and properties associated with these content types, see the Office 365 Management Activity API schema documentation.

    7

    The Client Secret (also called Application Secret) is used for authentication in Microsoft Entra ID (formerly Azure AD) when accessing APIs. To get it:

    1. Click App registrations under the Manage section.

    2. Select your registered application.

    3. In the left menu, click Certificates & secrets.

    4. Under Client secrets, check if an existing secret is available. You cannot view it, so you must have it saved somewhere.

    5. If you need a new one, create one and copy the value immediately.

Learn more about secrets in Onum in this article.

    8

    In Onum, open the Secret field and click New secret to create a new one:

    • Give the secret a Name.

    • Turn off the Expiration date option.

    • Click Add new value and paste the secret corresponding to the JWT token you generated before. Remember that the token will be added in the Zscaler configuration.

    • Click Save.

    You can now select the secret you just created in the corresponding field.

    9

    Choose your Subscription Plan* from the list. Find this in the Microsoft Account Portal under Billing > Your Products.

    10

    Enter the Polling Interval* frequency in minutes with which to grab events. The minimum value is 1, and the maximum value is 60.

    11

    Finally, click Create labels. Optionally, you can set labels to be used for internal Onum routing of data. By default, data will be set as Unlabeled. Click Create listener when you're done.

    Learn more about labels in this article.

    Office 365 Management Activity API
    4

    In the Socket section, enter the required Port. By default, all TCP ports from 1024 to 10000 are open.

    Note that you won't see the Socket and TLS configuration sections in the creation form if you're defining this Listener in a Cloud instance, as Onum already provides these. Learn more about Cloud Listeners in this article.

    5

    In the TLS configuration section, enter the data you received from the Onum team (Certificate, Private key and CA chain). Choose No client certificate as Client authentication method and TLS v.1.0 as the Minimum TLS version.

    Note that the parameters in this section are only mandatory if you decide to include TLS authentication in this Listener. Otherwise, leave blank.

    6

    In the Authentication section, choose Bearer as the Authentication Type. Open the Token Secret field and click New secret to create a new one:

    • Give the token a Name.

    • Turn off the Expiration date option.

    • Click Add new value and paste the secret corresponding to the JWT token you generated before. Remember that the token will be added in the Zscaler configuration.

    • Click Save.

Learn more about secrets in Onum in this article.

    7

    You can now select the secret you just created in the Token Secret field.

    8

    In the Endpoint section, choose POST as the HTTP Method. In the Request path field, enter /

    9

    In the Message extraction section, choose Multiple events at body as stacked JSON in the Strategy field. You can leave the Extraction info field empty.
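The "stacked JSON" strategy means the request body holds several JSON objects back to back, each becoming its own event. A minimal sketch of that extraction (illustrative only, not Onum's parser):

```python
import json

def split_stacked_json(body):
    """Split a body of back-to-back JSON objects into one event each."""
    decoder, events, idx = json.JSONDecoder(), [], 0
    while idx < len(body):
        obj, end = decoder.raw_decode(body, idx)  # parse one object
        events.append(obj)
        idx = end
        while idx < len(body) and body[idx] in " \t\r\n":
            idx += 1  # skip whitespace/newlines between stacked objects
    return events

events = split_stacked_json('{"user": "a"}\n{"user": "b"}')
print(events)  # [{'user': 'a'}, {'user': 'b'}]
```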

    10

    In the General behavior section, set Propagate headers strategy to None (default option).

    11

    Then, configure the following settings:

    • Exported headers format - Choose the required format for your headers. Choose JSON (default value).

    • Maximum message length - Maximum characters of the message. The default value is 4096.

    • Response code - Specify the response code to show when successful. You must choose 200 OK.

    Important

    Note that Zscaler doesn't accept any other response than 200 OK.

    • Response Content-Type - Lets the server know the expected format of the incoming message or request. In this case, choose application/json.

    • Response text - The text that will show in case of success.

    12

    Finally, click Create labels. Optionally, you can set labels to be used for internal Onum routing of data. By default, data will be set as Unlabeled. Click Create listener when you're done.

    Learn more about labels in this article.

    Zscaler documentation
    Contact Onum
    Contact us
    4

    Configure your OTLP/gRPC or OTLP/HTTP endpoint. Set the desired type as true to enable more options.

    Set Allow gRPC protocol as true if you want to configure the gRPC port* to establish the connection with the protocol.

    5

    Set Allow HTTP protocol as true if you want to configure OTLP/HTTP:

    • HTTP Port* to establish the connection with the protocol.

    • The traces path for the endpoint URL e.g. http://collector:port/v1/traces

    • The metrics path for the endpoint URL e.g. http://collector:port/v1/metrics

    • The logs path for the endpoint URL e.g. http://collector:port/v1/logs

    6

    Choose your required authentication method in the Authentication Type parameter (Choose None if you don't need any authentication method).

    Enter your Username and Password for basic authentication, or enter your Token Name and choose the required Bearer Token for authentication.

    7

    The credentials are saved in Onum as Secrets. In the authentication form, click New secret to create a new one:

    • Give the token a Name.

    • Turn off the Expiration date option.

    • Click Add new value.

    • Click Save.

Learn more about secrets in Onum in this article.

    You can now select the secret you just created in the Token/Password fields.

    8

    Set Allow TLS configuration as true if you decide to include TLS authentication in this Listener:

    • Add your TLS Certificate* from your Secrets or create one.

    • Add your Private Key* from your Secrets or create one.

    • Add your CA Chain* from your or create one.

    • Select the Minimum TLS Version* from the menu.

    9

    Finally, click Create labels. Optionally, you can set labels to be used for internal Onum routing of data. By default, data will be set as Unlabeled. Click Create listener when you're done.

    Learn more about labels in this article.

    Contact Onum
    5m
  • Format - 2006-01-02T15:04:05

  • Authentication Phase

    Off

    Enumeration Phase

    Off

    Collection Phase

    • Pagination Type* - webLinking

    • Zero index* - false

    • Limit* - 100

    • Request

      • Response Type* - JSON

        • Method* - GET

    • Output

      • Select - .

      • Map - .

      • Output Mode

    Enter the required parameters:
    Parameter
    Description

    Field to anonymize*

    Select an input event field to anonymize.

    Anonymize Operation*

    • Hash Anonymizer - Choose this operation to hash any type of data and make it anonymous.

    • IP Anonymizer - Choose this operation if you want to encrypt IP addresses. Note that the input IP addresses must be in IPv4 format.

    Salt*

A random value added to the data before it is hashed, typically used to enhance security. Note that the salt length must be 32 characters. Learn more about salt in cryptography.
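The salted-hash idea can be sketched as below. This is an assumption-laden sketch, not Onum's exact algorithm: it assumes SHA-256 over the salt concatenated with the value; the salt reused here is the 32-character example from this page.

```python
import hashlib

SALT = "D;%yL9TS:5PalS/du874jsb3@o09'?j5"  # 32 characters, as required

def hash_anonymize(value, salt=SALT):
    """Sketch of hash anonymization: digest of salt + value (the exact
    algorithm Onum applies is not specified here)."""
    return hashlib.sha256((salt + value).encode("utf-8")).hexdigest()

print(hash_anonymize("192.168.1.10"))  # deterministic, irreversible digest
```

The same input always yields the same digest, so anonymized values remain joinable across events, while the salt stops precomputed lookups of common values.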

    4

    Click Save to complete.

    Anonymize Operation*

    We need the IP Anonymizer operation.

    Salt*

    We're adding the following salt value to make decryption more difficult: D;%yL9TS:5PalS/du874jsb3@o09'?j5

    3

    Now link the Default output port of the Action to the input port of your Data sink.

    4

    Finally, click Publish and choose which clusters you want to publish the Pipeline in.

    5

    Click Test pipeline at the top of the area and choose a specific number of events to test if your data is transformed properly. Click Debug to proceed.

    Field to anonymize*

    Building a Pipeline
    Listener
    Data sink
    Input data
    Output data

We choose the required field with the IPs to be anonymized.

    Parameters
• parameters.domain will store the value of the API URL, excluding the endpoint paths like /v1/cp/oauth/token or /v1/cp/domains.

    Secrets

    • secrets.client_id will reference to Agari's Client ID

    • secrets.client_secret will reference to Agari's Client Secret.

    Open the Secret fields and click New secret to create a new one:

    • Give the secret a Name.

    • Turn off the Expiration date option.

    • Click Add new value and paste the secret corresponding to the value.

    • Click Save.

    Learn more about secrets in Onum in this article.

    You can now select the secret you just created in the corresponding fields.

    After entering the required parameters and secrets, you can choose to manually enter the fields, or simply paste the given YAML:

    Configure as YAML

    Toggle this ON to enable a free text field where you can paste your YAML.

    Manually configure

    If you would rather configure each field, follow the steps below.

    Temporal Window

    Toggle ON to add a temporal window for events. This repeatedly shifts the time window over which data is collected.

    • Duration - 5 minutes (5m) as default, adjust based on your needs.

• Offset - The initial offset should be 5m. This shifts the starting point of the collection window.

    • Format - RFC3339

    Authentication Phase

    Toggle ON to configure the authentication phase. This is required to get the token to pull data using OAuth.

    • Type* - token

    • Request Method* - POST (we would need to generate the JWT using the secrets client_id and client_secret)

    • URL* - ${parameters.domain}/v1/cp/oauth/token

    • Headers

      • Name - Content-type

      • Value - application/x-www-form-urlencoded

    • BodyType* - UrlEncoded

      • Body params

        • Name - client_id

    • Token Path* - .access_token

    • Auth Injection

      • In* - header

      • Name* - authorization
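The Token Path and Auth Injection settings above boil down to: extract the token from the auth response at the given path, then attach it to every collection request under the given header name with the given prefix. A simplified sketch (dotted-path lookup on a dict, standing in for the real selector):

```python
def auth_header(token_response, token_path=".access_token",
                name="authorization", prefix="Bearer"):
    """Pull the token out of the auth response at token_path and build
    the header injected into each collection request."""
    token = token_response
    for key in token_path.strip(".").split("."):
        token = token[key]                 # walk the dotted path
    return {name: f"{prefix} {token}".strip()}

hdr = auth_header({"access_token": "abc123"})
print(hdr)  # {'authorization': 'Bearer abc123'}
```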

    Enumeration Phase

    OFF

    Collection Phase

• Pagination Type* - offsetLimit. To page through a collection of items, set the offset parameter to the first item of the next page. The offset calculation for the next page is: current_offset + current_count. The end of the collection is reached when the count of returned items is less than the limit.

    • Limit - 200 (Maximum number of collection items to include in a response)

    • Request

      • Response Type - JSON

      • Method* - GET

      • URL* - ${parameters.domain}/v1/cp/domains

    • Output

      • Select - .

      • Map - .

      • Output Mode
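The offsetLimit strategy configured above can be sketched as a loop (illustrative only; `request_page` stands in for the HTTP call and the small limit is just for the demo):

```python
def fetch_offset_limit(request_page, limit=200):
    """offsetLimit pagination: next offset = current_offset + current_count;
    stop once a page returns fewer items than limit."""
    items, offset = [], 0
    while True:
        page = request_page(offset, limit)
        items.extend(page)
        if len(page) < limit:   # short page marks the end of the collection
            return items
        offset += len(page)     # current_offset + current_count

data = list(range(5))  # pretend server-side collection
result = fetch_offset_limit(lambda off, lim: data[off:off + lim], limit=2)
print(result)  # [0, 1, 2, 3, 4]
```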

    This HTTP Pull Listener now uses the data export API to extract alert events.

    Click Create labels to move on to the next step and define the required Labels if needed.

    ACI
  • BP

  • EASM

  • Inside each of those endpoints we have the YAML file to configure.

    This API endpoint returns the list of assets that have been marked as False Positive.

    Configuration

    Parameters

    • Domain (organizationId)

    Secrets

    • Auth Token (fortireconAuth)

    To add a Secret, open the Secret fields and click New secret:

    • Give the secret a Name.

    • Turn off the Expiration date option.

    • Click Add new value and paste the secret corresponding to the value.

    • Click Save.

    Learn more about secrets in Onum in this article.

    You can now select the secret you just created in the corresponding fields.

    After entering the required parameters and secrets, you can choose to manually enter the EASM endpoint fields, or simply paste the desired YAML.

    Configure as YAML

    Manually Configure

    Temporal Window

    Toggle ON to add a temporal window for events. This repeatedly shifts the time window over which data is collected.

    • Duration - 5 minutes (5m) as default, adjust based on your needs.

    • Offset - 5m

    • Format - RFC3339

    Authentication Phase

    OFF

    Enumeration Phase

    OFF

    Collection Phase

    • Pagination Type* - pageNumber/PageSize

    • Zero Index* - false

    • Page Size* - 100

    • Request

      • Response Type* - JSON

      • Method* - GET

    • Output

      • Select - .hits

      • Map - .

    Click Create labels to move on to the next step and define the required Labels if needed.
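The pageNumber/pageSize scheme used here can be sketched as follows (fetch stands in for the real HTTP request; stopping on a short page is an assumption consistent with the other pagination types on this page):

```python
def collect_pages(fetch, page_size=100, zero_index=False):
    """pageNumber/pageSize pagination: request consecutive pages until one
    comes back smaller than page_size. zero_index=False starts at page 1."""
    items, page = [], 0 if zero_index else 1
    while True:
        batch = fetch(page=page, size=page_size)
        items.extend(batch)
        if len(batch) < page_size:
            break
        page += 1
    return items

hits = list(range(250))

def fake_fetch(page, size):
    start = (page - 1) * size  # page numbering starts at 1 (Zero Index: false)
    return hits[start:start + size]

print(len(collect_pages(fake_fetch, page_size=100)))  # 250
```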

    Parameters

    Name - Domain

    Value - PrismaCloudEndpoint

    Secrets

    • PrismaCloudAccessKeyId corresponds to the authorization Access Key ID number.

    • PrismaCloudAccessKeySecret corresponds to the Access Key itself.

    After entering the required parameters and secrets, you can choose to manually enter the Prisma Cloud fields, or simply paste the given YAML:

    Toggle this ON to enable a free text field where you can paste your Prisma Cloud YAML.

    Temporal Window

    Toggle ON to add a temporal window for events. This repeatedly shifts the time window over which data is collected.

    Authentication Phase

    • Type - token

    • Request -

      • Method - POST

      • URL - ${parameters.PrismaCloudEndpoint}/login

      • Headers

    • Token Path - .token

    • Auth injection

      • Name - Authorization

      • In - header

    Enumeration Phase

    Off

    Collection Phase

    • Pagination Type* - cursor

    • Cursor* - .nextPageToken

    • Initial Request

    This HTTP Pull Listener now uses the data export API to extract audit logs.

    Click Create labels to move on to the next step and define the required Labels if needed.

    Configuration

    Parameters

    Name - domain

    Value - trendMicroDomain

    Secrets

    • TrendMicroBearerToken refers to the Bearer Token used to authenticate the connection to Trend Micro.

    After entering the required secrets, you can choose to manually enter the Trend Micro OAT fields, or simply paste the given YAML:

    Toggle this ON to enable a free text field where you can paste your Trend Micro OAT YAML.

    Temporal Window

    Toggle ON to add a temporal window for events. This repeatedly shifts the time window over which data is collected.

    • Duration - 5 minutes (5m) as default, adjust based on your needs.

    • Offset - 5m

    • Format - RFC3339

    Authentication Phase

    OFF

    Enumeration Phase

    OFF

    Collection Phase

    • Pagination Type* - Next Link at Response body

    • Next Link Selector* - .nextLink

    • Request

    This HTTP Pull Listener now uses the API to extract events.

    Click Create labels to move on to the next step and define the required Labels if needed.
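Next-link pagination follows whatever URL the response body advertises until it is absent. A small offline sketch (the items key and the sample URLs are illustrative, and fetch stands in for the real HTTP call):

```python
import json

def collect_next_link(fetch, first_url, selector=".nextLink"):
    """Follow the link named by the Next Link Selector until it disappears."""
    key = selector.strip(".")
    items, url = [], first_url
    while url:
        body = json.loads(fetch(url))
        items.extend(body["items"])   # "items" is illustrative, not the real key
        url = body.get(key)           # a missing link ends the loop
    return items

pages = {
    "/oat?page=1": '{"items": [1, 2], "nextLink": "/oat?page=2"}',
    "/oat?page=2": '{"items": [3]}',
}
print(collect_next_link(pages.__getitem__, "/oat?page=1"))  # [1, 2, 3]
```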

    Parameters
    • parameters.domain will store the value of the API URL, excluding the endpoint paths like /v1/cp/oauth/token or /v1/cp/domains

    Secrets

    • secrets.client_id will reference to Agari's Client ID

    • secrets.client_secret will reference to Agari's Client Secret.

    Open the Secret fields and click New secret to create a new one:

    • Give the secret a Name.

    • Turn off the Expiration date option.

    • Click Add new value and paste the secret corresponding to the value.

    • Click Save.

    Learn more about secrets in Onum in this article.

    You can now select the secret you just created in the corresponding fields.

    After entering the required parameters and secrets, you can choose to manually enter the fields, or simply paste the given YAML:

    Configure as YAML

    Toggle this ON to enable a free text field where you can paste your YAML.

    Manually configure

    If you would rather configure each field, follow the steps below.

    Temporal Window

    Toggle ON to add a temporal window for events. This repeatedly shifts the time window over which data is collected.

    • Duration - 5 minutes (5m) as default, adjust based on your needs.

    • Offset - the initial offset should be 5m. It delays the collection window behind the current time.

    • Format - RFC3339

    Authentication Phase

    Toggle ON to configure the authentication phase. This is required to get the token to pull data using OAuth.

    • Type* - token

    • Request Method* - POST (we would need to generate the JWT using the secrets client_id and client_secret)

    • URL* - ${parameters.domain}/v1/cp/oauth/token

    • Headers

      • Name - Content-type

      • Value - application/x-www-form-urlencoded

    • BodyType* - UrlEncoded

      • Body params

        • Name - client_id

    • Token Path* - .access_token

    • Auth Injection

      • In* - header

      • Name* - authorization

    Enumeration Phase

    OFF

    Collection Phase

    • Pagination Type* - offsetLimit. To page through a collection of items, set the offset parameter to the first item of the next page. The offset calculation for the next page is: current_offset + current_count. The end of the collection is reached when the count of returned items is less than the limit.

    • Limit - 200 (Maximum number of collection items to include in a response)

    • Request

      • Response Type - JSON

      • Method* - GET

      • URL* - ${parameters.domain}/v1/cp/organizations

    • Output

      • Select - .

      • Map - .

      • Output Mode

    This HTTP Pull Listener now uses the data export API to extract alert events.

    Click Create labels to move on to the next step and define the required Labels if needed.

    Configuration

    Parameters

    • Domain (sentinelOneDomain)

    Secrets

    • SentinelOneApiToken corresponds to the API Token used to authenticate the connection to Sentinel One.

    To add a Secret, open the Secret fields and click New secret:

    • Give the secret a Name.

    • Turn off the Expiration date option.

    • Click Add new value and paste the secret corresponding to the value.

    • Click Save.

    Learn more about secrets in Onum in this article.

    You can now select the secret you just created in the corresponding fields.

    After entering the required parameters and secrets, you can choose to manually enter the Sentinel One Web API Reports fields, or simply paste the desired YAML.

    Configure as YAML

    Manually Configure

    Temporal Window

    Toggle ON to add a temporal window for events. This repeatedly shifts the time window over which data is collected.

    • Duration - 5 minutes (5m) as default, adjust based on your needs.

    • Offset - 5m

    • Format - RFC3339

    Authentication Phase

    OFF

    Enumeration Phase

    OFF

    Collection Phase

    • Pagination Type* - cursor

    • Cursor Selector* - the cursor is taken from the API response at .pagination.nextCursor.

    • Initial Request

      • Method* - GET

      • URL* - https://${parameters.sentinelOneDomain}/web/api/v2.1/threats (the parameters variable is replaced by the domain entered earlier).

      • Headers

      Next Request

      • Method* - GET

      • URL* - https://${parameters.sentinelOneDomain}/web/api/v2.1/threats (the parameters variable is replaced by the domain entered earlier).

      • Headers

    • Output

      • Select - .data

      • Map - .

    Click Create labels to move on to the next step and define the required Labels if needed.

    parameters.domain will store the value of the API URL, excluding the endpoint paths like /v1/cp/oauth/token or /v1/cp/alerts

    Secrets

    • Username (username)

    • Password (password)

    Open the Secret fields and click New secret to create a new one:

    • Give the secret a Name.

    • Turn off the Expiration date option.

    • Click Add new value and paste the secret corresponding to the value.

    • Click Save.

    Learn more about secrets in Onum in this article.

    You can now select the secret you just created in the corresponding fields.

    After entering the required parameters and secrets, you can choose to manually enter the Falcon API Alerts fields, or simply paste the given YAML:

    Toggle this ON to enable a free text field where you can paste your CrowdStrike Falcon API YAML.

    Temporal Window

    Toggle ON to add a temporal window for events. This repeatedly shifts the time window over which data is collected.

    • Duration - 5 minutes (5m) as default, adjust based on your needs.

    • Offset - 5m

    • Format - EpochMillis

    Authentication Phase

    Toggle ON to configure the authentication phase. This is required to get the token to pull data using OAuth.

    • Type* - token

    • Request Method* - POST

    • URL* - https://${parameters.domain}/api/v3.0/authenticate

    Enumeration Phase

    OFF

    Collection Phase

    • Pagination Type* - offsetLimit

    • Limit - 1000

    • Zero Index - true

    This HTTP Pull Listener now uses the data export API to extract events.

    Click Create labels to move on to the next step and define the required Labels if needed.

    parameters.domain will store the value of the API URL, excluding the endpoint paths like /v1/cp/oauth/token or /v1/cp/alerts

    Secrets

    • Username (username)

    • Password (password)

    Open the Secret fields and click New secret to create a new one:

    • Give the secret a Name.

    • Turn off the Expiration date option.

    • Click Add new value and paste the secret corresponding to the value.

    • Click Save.

    Learn more about secrets in Onum in this article.

    You can now select the secret you just created in the corresponding fields.

    After entering the required parameters and secrets, you can choose to manually enter the Falcon API Alerts fields, or simply paste the given YAML:

    Toggle this ON to enable a free text field where you can paste your CrowdStrike Falcon API YAML.

    Temporal Window

    Toggle ON to add a temporal window for events. This repeatedly shifts the time window over which data is collected.

    • Duration - 5 minutes (5m) as default, adjust based on your needs.

    • Offset - 5m

    • Format - EpochMillis

    Authentication Phase

    Toggle ON to configure the authentication phase. This is required to get the token to pull data using OAuth.

    • Type* - token

    • Request Method* - POST

    • URL* - https://${parameters.domain}/api/v3.0/authenticate

    Enumeration Phase

    OFF

    Collection Phase

    • Pagination Type* - offsetLimit

    • Limit - 1000

    • Zero Index - true

    This HTTP Pull Listener now uses the data export API to extract the connections.

    Click Create labels to move on to the next step and define the required Labels if needed.

    Maximum result set size is 100.
  • Offset is the zero-based number of alerts from the start of the result set. The response indicates whether a PAN NGFW type alert contains a PCAP triggering packet.

  • Use the Retrieve PCAP Packet API to retrieve a list of alert IDs and their associated PCAP data. Required license: Cortex XDR Prevent, Cortex XDR Pro per Endpoint, or Cortex XDR Pro per GB.

    Configuration

    Parameters

    Name - domain

    Value - CortexXdrDomain

    Secrets

    • CortexXDRAuthorization will reference the Cortex XDR Authorization token.

    • CortexXDRAuthId will reference the Cortex XDR Authorization ID.

    To add a Secret, open the Secret fields and click New secret:

    • Give the secret a Name.

    • Turn off the Expiration date option.

    • Click Add new value and paste the secret corresponding to the value.

    • Click Save.

    Learn more about secrets in Onum in this article.

    You can now select the secret you just created in the corresponding fields.

    After entering the required parameters and secrets, you can choose to manually enter the Cortex Incident Management fields, or simply paste the given YAML:

    Toggle this ON to enable a free text field where you can paste your Cortex XDR API YAML.

    Temporal Window

    Toggle ON to add a temporal window for events. This repeatedly shifts the time window over which data is collected.

    Authentication Phase

    Off

    Enumeration Phase

    Off

    Collection Phase

    • Pagination Type* - fromTo

    • Zero index* - false

    • Limit* - 100

    This HTTP Pull Listener now uses the data export API to extract events.

    Click Create labels to move on to the next step and define the required Labels if needed.
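A sketch of fromTo paging under the settings above (limit 100, counting from 1 because Zero index is false); the exact from/to index semantics are our assumption, and fetch stands in for the real request:

```python
def collect_from_to(fetch, limit=100):
    """fromTo pagination: ask for result indices [from, to) and slide the
    window forward by `limit` until a short page signals the end."""
    items, start = [], 1  # Zero index: false, so counting starts at 1
    while True:
        page = fetch(frm=start, to=start + limit)
        items.extend(page)
        if len(page) < limit:
            break
        start += limit
    return items

alerts = list(range(230))
result = collect_from_to(lambda frm, to: alerts[frm - 1:to - 1], limit=100)
print(len(result))  # 230
```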

    Azure Blob Storage

    The Azure Blob Storage Listener connects to your Azure Storage account and detects when new files are uploaded. It works by monitoring an Azure Storage Queue that receives notifications from Azure Event Grid whenever a blob is created. The Listener then retrieves the file content and makes it available for processing in your workflows.

    Prerequisites

    Depending on your authentication method, you'll need the following permissions:

    • Connection String: Storage account access key

    • Service Principal: Azure AD application with these assigned roles:

      • Storage Blob Data Reader (minimum)

      • Storage Queue Data Contributor (minimum)

    Azure Blob Storage Setup

    You'll need to set up the following resources:

    • An Azure Storage Account with:

      • A Blob Storage container (where files will be uploaded)

      • A Storage Queue (to receive notifications)

    • An Azure Event Grid Subscription configured to:

      • Monitor your Blob Storage container

      • Send BlobCreated events to your Storage Queue

      • Filter for BlockBlob creation events only

    Onum Setup

    1

    Log in to your Onum tenant and click Listeners > New listener.

    2

    Double-click the Azure Blob Storage Listener.

    3

    Enter a Name* for the new Listener. Optionally, add a Description and some Tags to identify the Listener.

    4

    In the Authentication section, choose between:

    Connection String

    Use your storage account's connection string as your authentication method. This method is straightforward but requires managing the connection string securely.

    Follow these steps to get your connection string:

    5

    In the Retry Configuration section, set the maximum number of times a failed Azure read should be retried (Max Retries*) and the wait time before sending the next request when the last response received was empty (Idle Backoff Time*).

    6

    In the Queue Configuration section, enter the Queue Name* of the queue that is receiving blob events.

    7

    In the Limit & Timeout* section, enter the following:

    • Message Limit* - Number of messages to retrieve per polling cycle. The minimum value is 1, and the maximum value is 32.

    • Visibility Timeout

    8

    In the Advanced configuration section, you can optionally configure the following:

    • Event delimiter - Split file content into multiple messages using a delimiter. The default value is \n for line-by-line processing.

    • Use compression - Activate this toggle if you want to listen for compressed files. Choose between Auto, Gzip or Bzip2.
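For reference, compressed files can be recognized by their leading magic bytes, which is how an Auto mode is typically implemented. A small sketch of that detection plus delimiter splitting (the helper is illustrative, not the product's implementation):

```python
import bz2
import gzip

GZIP_MAGIC, BZIP2_MAGIC = b"\x1f\x8b", b"BZh"

def read_blob(raw, delimiter=b"\n"):
    """Detect gzip/bzip2 by magic bytes (the Auto mode) and split the
    decompressed content into one message per delimiter."""
    if raw.startswith(GZIP_MAGIC):
        raw = gzip.decompress(raw)
    elif raw.startswith(BZIP2_MAGIC):
        raw = bz2.decompress(raw)
    return [line for line in raw.split(delimiter) if line]

blob = gzip.compress(b"event1\nevent2\nevent3\n")
print(read_blob(blob))  # [b'event1', b'event2', b'event3']
```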

    9

    Finally, click Create labels. Optionally, you can set labels to be used for internal Onum routing of data. By default, data will be set as Unlabeled.

    Learn more about labels in this article.

    10

    Click Create listener when you're done.


    Configuration

    Parameters

    • Domain (sentinelOneDomain)

    Secrets

    • SentinelOneApiToken corresponds to the API Token used to authenticate the connection to Sentinel One.

    To add a Secret, open the Secret fields and click New secret:

    • Give the secret a Name.

    • Turn off the Expiration date option.

    • Click Add new value and paste the secret corresponding to the value.

    • Click Save.

    Learn more about secrets in Onum in this article.

    You can now select the secret you just created in the corresponding fields.

    After entering the required parameters and secrets, you can choose to manually enter the Sentinel One Web API Activities fields, or simply paste the desired YAML.

    Configure as YAML

    Manually Configure

    Temporal Window

    Toggle ON to add a temporal window for events. This repeatedly shifts the time window over which data is collected.

    • Duration - 5 minutes (5m) as default, adjust based on your needs.

    • Offset - 5m

    • Format - RFC3339

    Authentication Phase

    OFF

    Enumeration Phase

    OFF

    Collection Phase

    • Pagination Type* - cursor

    • Cursor Selector* - the cursor is taken from the API response at .pagination.nextCursor.

    • Initial Request

      • Method* - GET

      • URL* - https://${parameters.sentinelOneDomain}/web/api/v2.1/activities (the parameters variable is replaced by the domain entered earlier).

      • Headers

      Next Request

      • Method* - GET

      • URL* - https://${parameters.sentinelOneDomain}/web/api/v2.1/activities (the parameters variable is replaced by the domain entered earlier).

      • Headers

    • Output

      • Select - .data

      • Map - .

    Click Create labels to move on to the next step and define the required Labels if needed.
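The Select and Map fields behave like jq-style dot-paths: Select picks the array out of the response (.data here) and Map is applied to each element (. keeps the element whole). A sketch of that resolution (our reading of the semantics, with illustrative helper names):

```python
import json

def resolve(path, value):
    """Follow a dot-path like '.data' or '.pagination.nextCursor'; '.' is identity."""
    for key in filter(None, path.split(".")):
        value = value[key]
    return value

def output(response_body, select=".data", mapping="."):
    # Select extracts the collection; Map transforms each element of it.
    rows = resolve(select, json.loads(response_body))
    return [resolve(mapping, row) for row in rows]

body = '{"data": [{"id": 1}, {"id": 2}], "pagination": {}}'
print(output(body))  # [{'id': 1}, {'id': 2}]
```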

    Any configuration requested through the chat will be automatically applied. This is especially useful for requesting specific use cases, as the AI will automatically apply the necessary fields and settings to achieve the desired result.

    To start using it, open the Action configuration and just click this icon at the bottom left corner:

    The Action Assistant is only available for a specific set of Actions, but it will soon be expanded to cover more. These are the Actions where you can currently use it:

    • Accumulator

    • Conditional

    Examples

    Here are some example use cases where we ask for help from the Action Assistant. Check the prompts we use and the resulting configuration in each example picture.

    Conditional

    Prompt: Please could you identify common windows logs event ids and create a condition for each value?

    • In this example, we request a condition for each of the most common Windows event IDs:

    • In this case, we request conditions for each of the most common FortiGate log IDs:

    • Here, we are filtering events with Success status only:

    Group By

    Prompt: Group events every 5 minutes by host_ip and count the occurrences.

    • In this example, we need to identify each unique IP address every 10 minutes:

    • In this case, we need all the unique app name values every 5 seconds, grouped by source ports and IP addresses:

    Math Expression

    Prompt: Convert the priority field to an integer, convert the source and destination IPs to hex format, identify the appnames starting with windows

    • In this case, we ask the assistant to transform a series of amounts from bytes to megabytes:

    • Here we are transforming our epoch dates in milliseconds into seconds:

    • In this example, we want to calculate the time difference between a series of from and to dates:

    Message Builder

    Prompt: Please build me a message in json format with the most important fields.

    • In this example, we ask for the most relevant fields but in key-value format:

    • Here we are requesting the most relevant fields as a message in JSON format:

    • In this case, we want to order all our fields in alphabetical order:

    • Here we want to filter only string-type fields:

    Unique

    Prompt: Please identify the unique message IDs and codify them in 8 bits.

    • In this example, we want to identify the unique message IDs and codify them in 8 bits.


    For example, an input list containing [a,b,c] will generate three outputs, with these fields added to the event:

    • elementValueOutField: a; elementIndexOutField: 0

    • elementValueOutField: b; elementIndexOutField: 1

    • elementValueOutField: c; elementIndexOutField: 2

    In order to configure this action, you must first link it to a Listener. Go to Building a Pipeline to learn how this works.

    Ports

    These are the input and output ports of this Action:

    Input ports
    • Default port - All the events to be processed by this Action enter through this port.

    Output ports
    • Default port - Events are sent through this port if no error occurs while processing them.

    • Error port - Events are sent through this port if an error occurs while processing them.

    Configuration

    1

    Find For Each in the Actions tab (under the Advanced group) and drag it onto the canvas.

    2

    To open the configuration, click the Action in the canvas and select Configuration.

    3

    Enter the required parameters:

    Parameter
    Description
    4

    Click Save to complete the process.

    Example

    Imagine you receive a list-type field containing a string of five IPs:

    127.0.0.1,127.0.0.2,127.0.0.3,127.0.0.4,192.168.0.1

    1

    Add the For Each Action to your Pipeline and link it to your required Data sink.

    2

    Now, double-click the For Each Action to configure it. You need to set the following config:

    Operation
    Parameters

    Input

    3

    Click Save to apply the configuration.

    4

    Now link the Default output port of the Action to the input port of your Data sink.

    5

    Finally, click Publish and choose in which clusters you want to publish the Pipeline.

    6

    Click Test pipeline at the top of the area and choose a specific number of events to test if your data is transformed properly. Click Debug to proceed.

    The Action will create a separate event for each element of the string, each event containing two fields (value and index).
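The splitting behavior can be sketched as follows; the output field names mirror the elementValueOutField/elementIndexOutField fields named above, and the comma-splitting of the string is specific to this example:

```python
def for_each(event, list_field, value_out="elementValueOutField", index_out="elementIndexOutField"):
    """Emit one copy of the event per element of the list-type field,
    adding the element's value and index under the configured field names."""
    for i, value in enumerate(event[list_field].split(",")):
        yield {**event, value_out: value, index_out: i}

event = {"ips": "127.0.0.1,127.0.0.2,127.0.0.3,127.0.0.4,192.168.0.1"}
out = list(for_each(event, "ips"))
print(len(out), out[0]["elementValueOutField"], out[4]["elementIndexOutField"])
```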

    The Redis Action allows users to set and retrieve data from a Redis server.

    In order to configure this action, you must first link it to a Listener. Go to Building a Pipeline to learn how to link.

    Ports

    These are the input and output ports of this Action:

    Input ports
    • Default port - All the events to be processed by this Action enter through this port.

    Output ports
    • Default port - Events are sent through this port if no error occurs while processing them.

    • Error port - Events are sent through this port if an error occurs while processing them.

    Installing Redis

    To use this Action, you must install Redis and Redis CLI.

    As installing Redis via Docker is generally preferable, we will brief you on this procedure. To install it locally, check this article.

    1

    Start your local Redis Docker instance:

    2

    Now, connect to the Redis container:

    3

    Use this command to get the IP:

    4

    Paste this IP in the Redis endpoint field of your Redis Action.

    For more help and in-depth detail, see these use cases.

    Configuration

    1

    Find Redis in the Actions tab (under the Advanced group) and drag it onto the canvas. Link it to the required Listener and Data sink.

    2

    To open the configuration, click the Action in the canvas and select Configuration.

    3

    Enter the required parameters:

    Connection Settings

    Parameter
    Description

    Network Timeout

    Parameter
    Description

    Command

    4

    Click Save to complete.

    Redis

    Docker client

    Choose one of the available Docker instances to execute your code.

    IP

    Enter the instance IP to execute your code.

    Port

    Enter the instance port to execute your code.

    Timeout connection

    Enter the milliseconds to wait for the Docker connection.

    Buffer size

    Size in bytes to batch events.


    Net Saved/Increased

    Here you can see the difference (in %) of volume saved/increased in comparison to the previous period. Hover the circle icons to see the input/output volumes and see the total GB saved.

    Listeners

    View the total amount of data ingested by the Listeners in the selected time range compared to the previous one, as well as the increased/decreased volume (in %).

    Data Sink

    You can see at a glance the total amount of data sent out of your Tenant, as well as the difference (in %) with the previous time range selected.

    Data Volume

    This shows the total volume of ingested data for the selected period. Notice it is the same as the input volume shown in the Net saved/increased metric. You can also see the difference (in %) with the previous time range selected.

    Estimation

    The estimated volumes ingested and sent over the next 24 hours. This is calculated using the data volume of the time period.
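Assuming a simple linear projection, which is our reading of "calculated using the data volume of the time period":

```python
def estimate_next_24h(volume_gb, period_hours):
    """Project the volume observed over `period_hours` linearly onto 24 hours."""
    return volume_gb / period_hours * 24

# 6 GB seen over the last 12 hours extrapolates to 12 GB over the next day
print(estimate_next_24h(6.0, 12))  # 12.0
```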


    Collect data from Dropbox

    Overview

    Get a list of event streams from Dropbox.

    Configuration

    Parameters

    • parameters.domain will store the value of the API URL, excluding the endpoint paths like /oauth2/token or /2/team_log/get_events

    Secrets

    • refresh_token will reference the Dropbox refresh token.

    • secrets.client_id will reference the Client ID

    • secrets.client_secret will reference the Client Secret.

    To add a Secret, open the Secret fields and click New secret:

    • Give the secret a Name.

    • Turn off the Expiration date option.

    • Click Add new value and paste the secret corresponding to the value.

    • Click Save.

    Learn more about secrets in Onum in this article.

    You can now select the secret you just created in the corresponding fields.

    After entering the required parameters and secrets, you can choose to manually enter the Dropbox fields, or simply paste the given YAML:

    Toggle this ON to enable a free text field where you can paste your Dropbox YAML.

    Temporal Window

    Toggle ON to add a temporal window for events. This repeatedly shifts the time window over which data is collected.

    • Duration - 5 minutes (5m) as default, adjust based on your needs.

    This HTTP Pull Listener now uses the business API to extract events.

    Click Create labels to move on to the next step and define the required Labels if needed.

    Collect data using HTTP

    See the changelog of the HTTP Listener type here.

    Overview

    Onum supports integration with HTTP.

    HTTP, which stands for Hypertext Transfer Protocol, is a foundational protocol for communication on the World Wide Web. It defines how messages are formatted and transmitted between web servers and browsers, enabling the retrieval and display of webpages and other web content.

    Prerequisites

    Get the cert information needed for TLS communication, which will be required during the Listener setup.

    Important notes

    • The SIEM type will be Other.

    • You must generate a JWT token and add it as an HTTP header. Add the word Bearer before the token value (Bearer <token>). The corresponding secret value will be added in the Onum configuration later.

    Contact us if you cannot generate a JWT token.

    Onum Setup

    1

    Log in to your Onum tenant and click Listeners > New listener.

    2

    Double-click the HTTP Listener.

    3

    Enter a Name for the new Listener. Optionally, add a Description and some Tags to identify the Listener.

    Click Create listener when you're done.

    Alert Events

    Overview

    Get a list of all or filtered alerts. The alerts listed are what remains after alert exclusions are applied by Agari DMARC Protection.

    Configuration

    Parameters

    • parameters.domain will store the value of the API URL, excluding the endpoint paths like /v1/cp/oauth/token or /v1/cp/alert_events

    Secrets

    • secrets.client_id will reference to Agari's Client ID

    • secrets.client_secret will reference to Agari's Client Secret.

    Open the Secret fields and click New secret to create a new one:

    • Give the secret a Name.

    • Turn off the Expiration date option.

    • Click Add new value and paste the secret corresponding to the value.

    • Click Save.

    Learn more about secrets in Onum in this article.

    You can now select the secret you just created in the corresponding fields.

    After entering the required parameters and secrets, you can choose to manually enter the fields, or simply paste the given YAML:

    Toggle this ON to enable a free text field where you can paste your YAML.

    Temporal Window

    Toggle ON to add a temporal window for events. This repeatedly shifts the time window over which data is collected.

    • Duration - 5 minutes (5m) as default, adjust based on your needs.

    • Offset -

    This HTTP Pull Listener now uses the data export API to extract alert events.

    Click Create labels to move on to the next step and define the required Labels if needed.

    Collect data from Splunk

    See the changelog of the Syslog Listener type here.

    Splunk Setup

    The usual scenario for adapting Splunk data for sending via the Syslog Listener involves using a Splunk Heavy Forwarder (HF), configuring three specific configuration files: outputs.conf, props.conf, and transforms.conf.

    The HF is necessary because Universal Forwarders (UFs) lack the processing capabilities to format and route data to a third-party system using the Syslog protocol.

    You will need to create or edit these files, typically located in $SPLUNK_HOME/etc/system/local/ or a custom app directory on your Heavy Forwarder.

    1. Define the Syslog Destination

    The outputs.conf file tells the Heavy Forwarder where to send the data. You must define a Syslog output group ([syslog:<target_group>]) rather than a standard TCP output group.

    2. Selective Routing (Recommended)

    To avoid sending all data from the Heavy Forwarder to Onum, you typically use props.conf and transforms.conf to filter and route only the desired events (e.g., logs from a specific sourcetype or index).

    Onum Setup

    1

    Log in to your Onum tenant and click Listeners > New listener.

    2

    Double-click the Syslog Listener.

    3

    Enter a Name for the new Listener. Optionally, add a Description and some Tags to identify the Listener.

    Click Create listener when you're done.

    Cloud detections - Alerts

    Overview

    Get a list of alerts for a given scope.

    • The response contains an array of alerts under the data field.

    SentinelOne uses cursor-based pagination for the /cloud-detections/alerts endpoint. Cursor-based pagination relies on a pointer (cursor) that refers to the next set of results. Each API response contains a nextCursor field. You pass that cursor value in your next request using the cursor query parameter to get the next page. For that reason, we define pagination as cursor, and we define an initialRequest and a nextRequest under collection.
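That request flow can be sketched as a loop (fetch stands in for the real HTTP call, and the sample response bodies are illustrative):

```python
import json

def collect_cursor(fetch):
    """Cursor pagination: the initial request carries no cursor; each later
    request passes the nextCursor from the previous response, until the API
    stops returning one."""
    items, cursor = [], None
    while True:
        body = json.loads(fetch(cursor))
        items.extend(body["data"])  # results arrive under the data field
        cursor = body.get("pagination", {}).get("nextCursor")
        if not cursor:
            break
    return items

pages_by_cursor = {
    None: '{"data": [1, 2], "pagination": {"nextCursor": "c1"}}',
    "c1": '{"data": [3], "pagination": {}}',
}
print(collect_cursor(pages_by_cursor.__getitem__))  # [1, 2, 3]
```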

    Configuration

    Parameters

    • Domain (sentinelOneDomain)

    Secrets

    • SentinelOneApiToken corresponds to the API Token used to authenticate the connection to Sentinel One.

    To add a Secret, open the Secret fields and click New secret:

    • Give the secret a Name.

    • Turn off the Expiration date option.

    • Click Add new value and paste the secret corresponding to the value.

    • Click Save.

    Learn more about secrets in Onum in .

    You can now select the secret you just created in the corresponding fields.

    After entering the required parameters and secrets, you can choose to manually enter the Sentinel One Web API Cloud detections alerts fields, or simply paste the desired YAML.

    Configure as YAML

    Manually Configure

    Temporal Window

    Toggle ON to add a temporal window for events. This repeatedly shifts the time window over which data is collected.

    • Duration - 5 minutes (5m) by default; adjust based on your needs.

    • Offset - 5m

    • Format - RFC3339
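    These settings correspond to the following fragment of the Listener YAML:

    ```yaml
    withTemporalWindow: true
    temporalWindow:
      duration: 5m
      offset: 5m
      tz: UTC
      format: RFC3339
    ```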

    Authentication Phase

    OFF

    Enumeration Phase

    OFF

    Collection Phase

    • Pagination Type* - cursor

    • Cursor Selector* - the cursor is extracted from each API response using the selector .pagination.nextCursor.

    • Initial Request

    Click Create labels to move on to the next step and define the required if needed.

    Collect data using SNMP

    Most recent version: v0.0.1

    See the changelog of this Listener type .

    Overview

    Onum supports integration with SNMP.

    SNMP (Simple Network Management Protocol) is a standard protocol for monitoring and managing network devices. It operates on a client-server model where:

    • SNMP Agents (devices) send traps (asynchronous notifications) to SNMP Managers.

    • Traps contain information about events like system failures, security alerts, or performance issues.

    • OIDs (Object Identifiers) uniquely identify each piece of information in the trap.

    The SNMP Trapd Listener is a powerful and intelligent Listener that receives SNMP traps from network devices, parses them using embedded MIB (Management Information Base) files, and converts them into structured Onum events. It supports all major SNMP versions (v1, v2c, v3) with comprehensive authentication and privacy options.

    What are MIBs?

    MIBs (Management Information Bases) are hierarchical databases that define:

    • OID structure and relationships

    • Data types for each OID

    • Human-readable names for OIDs

    • Units and ranges for values

    Example OID: 1.3.6.1.2.1.1.1.0 → sysDescr (System Description)
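    Reading that OID against the standard MIB tree, each number resolves to a named node:

    ```
    1.3.6.1.2.1.1.1.0
    iso(1).org(3).dod(6).internet(1).mgmt(2).mib-2(1).system(1).sysDescr(1).0
    ```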

    Architecture & MIB Strategy

    The SNMP Trapd Listener includes essential MIBs for comprehensive SNMP support:

    MIB
    Purpose
    Dependencies

    Prerequisites

    In order to use this Listener, you must:

    • Enable SNMP on the device itself.

    • Specifically enable the sending of SNMP traps.

    • Configure the device to send traps to the IP address and port of the receiving SNMP management system.

    • For SNMPv3, configure the correct authentication and/or privacy settings to be used when sending traps.

    Onum Setup

    1

    Log in to your Onum tenant and click Listeners > New listener.

    2

    Double-click the SNMP Trapd Listener.

    3

    Enter a Name for the new Listener. Optionally, add a Description and some Tags to identify the Listener.

    Click Create listener when you're done.

    Output data

    The listener converts SNMP traps into structured Onum events with the following generated fields:

    Field
    Description

    This is an output event example:

    Reports

    Overview

    Get the reports that match the filter, along with their report data. The response also includes the schedule, Insight Type, the name and ID of the user who created the report, the date range, and more.

    • The response contains an array of reports under the data field.

    SentinelOne uses cursor-based pagination for the /reports endpoint. Cursor-based pagination relies on a pointer (cursor) that refers to the next set of results. Each API response contains a nextCursor field. You pass that cursor value in your next request using the cursor query parameter to get the next page. For that reason, we set the pagination type to cursor and define an initialRequest and a nextRequest under the collection phase.

    Configuration

    Parameters

    • Domain (sentinelOneDomain)

    Secrets

    • SentinelOneApiToken corresponds to the API Token used to authenticate the connection to Sentinel One.

    To add a Secret, open the Secret fields and click New secret:

    • Give the secret a Name.

    • Turn off the Expiration date option.

    • Click Add new value and paste the secret corresponding to the value.

    • Click Save.

    Learn more about secrets in Onum in .

    You can now select the secret you just created in the corresponding fields.

    After entering the required parameters and secrets, you can choose to manually enter the Sentinel One Web API Reports fields, or simply paste the desired YAML.

    Configure as YAML

    Manually Configure

    Temporal Window

    Toggle ON to add a temporal window for events. This repeatedly shifts the time window over which data is collected.

    • Duration - 5 minutes (5m) by default; adjust based on your needs.

    • Offset - 5m

    • Format - RFC3339

    Authentication Phase

    OFF

    Enumeration Phase

    OFF

    Collection Phase

    • Pagination Type* - cursor

    • Cursor Selector* - the cursor is extracted from each API response using the selector .pagination.nextCursor.

    • Initial Request

    Click Create labels to move on to the next step and define the required if needed.

    Incidents

    Get a list of all incidents in Guardicore.

    Configuration

    Parameters

    • parameters.domain will store the value of the API URL, excluding the endpoint paths like /v1/cp/oauth/token or /v1/cp/alerts
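    For example, with parameters.domain set to the base API URL, the token request in the Listener YAML resolves against it:

    ```yaml
    authentication:
      token:
        request:
          method: POST
          url: ${parameters.domain}/v1/cp/oauth/token
    ```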

    Secrets

    • Username (username)

    • Password (password)

    Open the Secret fields and click New secret to create a new one:

    • Give the secret a Name.

    • Turn off the Expiration date option.

    • Click Add new value and paste the secret corresponding to the value.

    • Click Save.

    Learn more about secrets in Onum in .

    You can now select the secret you just created in the corresponding fields.

    After entering the required parameters and secrets, you can choose to manually enter the Guardicore API Incidents fields, or simply paste the given YAML:

    Toggle this ON to enable a free text field where you can paste your Guardicore API YAML.

    Temporal Window

    Toggle ON to add a temporal window for events. This repeatedly shifts the time window over which data is collected.

    • Duration - 5 minutes (5m) by default; adjust based on your needs.

    This HTTP Pull Listener now uses the data export API to extract events.

    Click Create labels to move on to the next step and define the required if needed.

    Incidents

    Overview

    Get a list of incidents from CrowdStrike Falcon.

    Configuration

    Alerts

    Overview

    Get a list of all or filtered alerts. The alerts listed are what remains after alert exclusions are applied by CrowdStrike Falcon.

    Configuration

    Building a Pipeline

    Overview

    The Pipeline canvas provides infinite possibilities to use your data.


    Google DLP

    Most recent version: v0.0.1

    See the changelog of this Action type .

    Overview

    The Google DLP Action is designed to integrate with Google's Data Loss Prevention (DLP) API. This Action allows detecting and classifying sensitive information, enabling workflows to comply with data protection requirements.

    Pipelines

    A Pipeline is Onum's way of streamlining your data

    Overview

    Use Pipelines to transform your events and build a data flow linking from and to .

    Select the Pipelines tab in the left menu to visualize all your Pipelines in one place. Here you will find all the actions you can perform in this area:

    aws kinesis list-streams
    withTemporalWindow: true
    temporalWindow:
      duration: 5m
      offset: 5m
      tz: UTC
      format: RFC3339
    withAuthentication: true
    authentication:
      type: token
      token:
        request:
          method: POST
          url: ${parameters.domain}/v1/cp/oauth/token
          headers:
            - name: Content-Type
              value: application/x-www-form-urlencoded
            - name: Accept
              value: application/json
          bodyType: urlEncoded
          bodyParams:
            - name: client_id
              value: '${secrets.client_id}'
            - name: client_secret
              value: '${secrets.client_secret}'
        tokenPath: ".access_token"
        authInjection:
          in: header
          name: Authorization
          prefix: 'Bearer '
          suffix: ''
    withEnumerationPhase: false
    collectionPhase:
      paginationType: offsetLimit
      limit: 200
      request:
        responseType: json
        method: GET
        url: ${parameters.domain}/v1/cp/users
        queryParams:
          - name: offset
            value: ${pagination.offset}
          - name: limit
            value: ${pagination.limit}
      output:
        select: "."
        map: "."
        outputMode: element
    withTemporalWindow: true
    temporalWindow:
      duration: 5m
      offset: 5m
      tz: UTC
      format: Epoch
    withAuthentication: false
    withEnumerationPhase: false
    collectionPhase:
      paginationType: "fromTo"
      limit: 100
      request:
        method: "POST"
        url: "https://${parameters.CortexXdrDomain}/public_api/v1/incidents/get_incidents"
        headers:
          - name: Accept
            value: "application/json"
          - name: Content-Type
            value: "application/json"
          - name: Authorization
            value: "${secrets.CortexXdrAuthorization}"
          - name: x-xdr-auth-id
            value: ${secrets.CortexXdrAuthId}
        bodyType: raw
        bodyRaw: |
          {
            "request_data": {
              "search_from": ${pagination.from},
              "search_to": ${pagination.to},
              "filters": [
                {
                  "field": "creation_time",
                  "operator": "gte",
                  "value": ${temporalWindow.from}000
                },
                {
                  "field": "creation_time",
                  "operator": "lte",
                  "value": ${temporalWindow.to}000
                }
              ]
            }
          }
      output:
        select: ".reply.incidents"
        map: "."
        outputMode: "element"
    withTemporalWindow: true
    temporalWindow:
      duration: 5m
      offset: 5m
      tz: UTC
      format: Epoch
    withAuthentication: false
    withEnumerationPhase: false
    collectionPhase:
      paginationType: "fromTo"
      limit: 100
      request:
        responseType: json
        method: "POST"
        url: "https://${parameters.CortexXdrDomain}/public_api/v2/alerts/get_alerts_multi_events"
        headers:
          - name: Accept
            value: "application/json"
          - name: Content-Type
            value: "application/json"
          - name: Authorization
            value: "${secrets.CortexXdrAuthorization}"
          - name: x-xdr-auth-id
            value: ${secrets.CortexXdrAuthId}
        bodyType: raw
        bodyRaw: |
          {
            "request_data": {
              "search_from": ${pagination.from},
              "search_to": ${pagination.to},
              "filters": [
                {
                  "field": "creation_time",
                  "operator": "lte",
                  "value": ${temporalWindow.to}
                }
              ]
            }
          }
      output:
        select: ".reply.alerts"
        map: "."
        outputMode: "element"
    withTemporalWindow: true
    temporalWindow:
      duration: 5m
      offset: 5m
      tz: UTC
      format: EpochMillis
    withAuthentication: true
    authentication:
      type: token
      token:
        request:
          method: POST
          url: https://${parameters.domain}/auth/v2/token
          headers:
            - name: Content-Type
              value: application/x-www-form-urlencoded
            - name: Accept
              value: application/json
            - name: authorization
              value: ${secrets.cisco_auth}
          bodyType: urlEncoded
          bodyParams:
            - name: grant_type
              value: client_credentials
        tokenPath: ".access_token"
        authInjection:
          in: header
          name: Authorization
          prefix: 'Bearer '
          suffix: ''
    withEnumerationPhase: false
    collectionPhase:
      paginationType: offsetLimit
      limit: 100
      isZeroIndex: false
      request:
        responseType: json
        method: GET
        url: https://${parameters.domain}/reports/v2/activity
        queryParams:
          - name: from
            value: ${temporalWindow.from}
          - name: to
            value: ${temporalWindow.to}
          - name: offset
            value: ${pagination.offset}
          - name: limit
            value: ${pagination.limit}
      output:
        select: ".data"
        map: "."
        outputMode: element
    retry:
      statusCodes: [429, 500, 502, 503, 504]
      type: fixed 
      fixed:
        interval: 2s
    receivers:
      otlp:
        protocols:
          grpc:
          http:
    
    processors:
      batch:
    
    exporters:
      otlp/example:  # OTLP exporter to send to your backend service
        endpoint: "YOUR_BACKEND_ADDRESS:4317" # Replace with your observability tool's OTLP endpoint
        tls:
          insecure: false  # set to true only for plaintext (non-TLS) endpoints
    
    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: [batch]
          exporters: [otlp/example]
        metrics:
          receivers: [otlp]
          processors: [batch]
          exporters: [otlp/example]
    otelcol --config=config.yaml
    withTemporalWindow: true
    temporalWindow:
      duration: 5m
      offset: 5m
      tz: UTC
      format: "2006-01-02T15:04:05"
    withAuthentication: false
    withEnumerationPhase: false
    collectionPhase:
      paginationType: "webLinking"
      limit: 1000
      request:
        responseType: json
        method: "GET"
        url: "https://${parameters.mydomain}/api/v1/logs"
        headers:
          - name: Accept
            value: "application/json"
          - name: Content-Type
            value: "application/json"
          - name: Authorization
            value: "SSWS ${secrets.OktaAuthorization}"
        queryParams:
          - name: since
            value: "${temporalWindow.from}"
          - name: until
            value: "${temporalWindow.to}"
      output:
        select: "."
        map: "."
        outputMode: "element"
    withTemporalWindow: true
    temporalWindow:
      duration: 5m
      offset: 5m
      tz: UTC
      format: RFC3339
    withAuthentication: true
    authentication:
      type: token
      token:
        request:
          method: POST
          url: ${parameters.domain}/v1/cp/oauth/token
          headers:
            - name: Content-Type
              value: application/x-www-form-urlencoded
            - name: Accept
              value: application/json
          bodyType: urlEncoded
          bodyParams:
            - name: client_id
              value: '${secrets.client_id}'
            - name: client_secret
              value: '${secrets.client_secret}'
        tokenPath: ".access_token"
        authInjection:
          in: header
          name: Authorization
          prefix: 'Bearer '
          suffix: ''
    withEnumerationPhase: false
    collectionPhase:
      paginationType: offsetLimit
      limit: 200
      request:
        responseType: json
        method: GET
        url: ${parameters.domain}/v1/cp/domains
        queryParams:
          - name: offset
            value: ${pagination.offset}
          - name: limit
            value: ${pagination.limit}
      output:
        select: "."
        map: "."
        outputMode: element
    withTemporalWindow: true
    temporalWindow:
      duration: 5m
      offset: 5m
      tz: UTC
      format: RFC3339
    withAuthentication: false
    withEnumerationPhase: false
    collectionPhase:
      paginationType: "page"
      pageSize: 100
      isZeroIndex: false
      request:
        responseType: json
        method: GET
        url: https://api.fortirecon.forticloud.com/easm/${parameters.organizationId}/breaches
        headers:
          - name: Authorization
            value: ${secrets.fortireconAuth}
        queryParams:
          - name: page
            value: "${pagination.pageNumber}"
          - name: size
            value: "${pagination.pageSize}"
          - name: start_date
            value: ${temporalWindow.from}
          - name: end_date
            value: ${temporalWindow.to}
      output:
        select: ".hits"
        map: "."
        outputMode: element
    withTemporalWindow: true
    temporalWindow:
      duration: 5m
      offset: 5m
      tz: UTC
      format: RFC3339
    withAuthentication: false
    withEnumerationPhase: false
    collectionPhase:
      paginationType: "page"
      pageSize: 100
      isZeroIndex: false
      request:
        responseType: json
        method: GET
        url: https://api.fortirecon.forticloud.com/easm/${parameters.organizationId}/leaked_creds
        headers:
          - name: Authorization
            value: ${secrets.fortireconAuth}
        queryParams:
          - name: page
            value: "${pagination.pageNumber}"
          - name: size
            value: "${pagination.pageSize}"
          - name: start_date
            value: ${temporalWindow.from}
          - name: end_date
            value: ${temporalWindow.to}
      output:
        select: ".hits"
        map: "."
        outputMode: element
    withTemporalWindow: true
    temporalWindow:
      duration: 5m
      offset: 5m
      tz: UTC
      format: Epoch
    withAuthentication: true
    authentication:
      type: "token"
      token:
        request:
          method: POST
          url: "${parameters.PrismaCloudEndpoint}/login"
          headers:
            - name: Content-Type
              value: application/json
          bodyType: raw
          bodyRaw: |
            {
              "username": "${secrets.PrismaCloudAccessKeyId}",
              "password": "${secrets.PrismaCloudAccessKeySecret}"
            }
          responseType: json
        tokenPath: ".token"
        authInjection:
          name: "Authorization"
          in: "header"
          prefix: "Bearer "
    withEnumerationPhase: false
    collectionPhase:
      paginationType: "cursor"
      cursor: ".nextPageToken"
      initialRequest:
        method: POST
        url: "${parameters.PrismaCloudEndpoint}/audit/api/v1/log"
        headers:
          - name: Accept
            value: application/json
          - name: Content-Type
            value: application/json
        responseType: json
        bodyType: raw
        bodyRaw: |
          {
            "timeRange": {
              "type": "absolute",
              "value": {
                "startTime": ${temporalWindow.from},
                "endTime": ${temporalWindow.to}
              }
            }
          }
      nextRequest:
        method: POST
        url: "${parameters.PrismaCloudEndpoint}/audit/api/v1/log"
        headers:
          - name: Accept
            value: application/json
          - name: Content-Type
            value: application/json
        responseType: json
        bodyType: raw
        bodyRaw: |
          {
            "timeRange": {
              "type": "absolute",
              "value": {
                "startTime": ${temporalWindow.from},
                "endTime": ${temporalWindow.to}
              }
            },
            "nextPageToken": "${pagination.cursor}"
          }
      output:
        select: ".value"
        map: "."
        outputMode: element 
    withTemporalWindow: true
    temporalWindow:
      duration: 5m
      offset: 5m
      tz: UTC
      format: RFC3339
    withAuthentication: false
    withEnumerationPhase: false
    collectionPhase:
      paginationType: "responseBodyLink"
      nextLinkSelector: ".nextLink"
      limit: 100
      request:
        method: GET
        url: "https://${parameters.trendMicroDomain}/v3.0/oat/detections"
        headers:
          - name: Accept
            value: application/json
          - name: Authorization
            value: "Bearer ${secrets.trendMicroBearerToken}"
        queryParams:
          - name: detectedStartDateTime
            value: "${temporalWindow.from}"
          - name: detectedEndDateTime
            value: "${temporalWindow.to}"
      output:
        select: ".items"
        map: "."
        outputMode: element
    withTemporalWindow: true
    temporalWindow:
      duration: 5m
      offset: 5m
      tz: UTC
      format: RFC3339
    withAuthentication: true
    authentication:
      type: token
      token:
        request:
          method: POST
          url: ${parameters.domain}/v1/cp/oauth/token
          headers:
            - name: Content-Type
              value: application/x-www-form-urlencoded
            - name: Accept
              value: application/json
          bodyType: urlEncoded
          bodyParams:
            - name: client_id
              value: '${secrets.client_id}'
            - name: client_secret
              value: '${secrets.client_secret}'
        tokenPath: ".access_token"
        authInjection:
          in: header
          name: Authorization
          prefix: 'Bearer '
          suffix: ''
    withEnumerationPhase: false
    collectionPhase:
      paginationType: offsetLimit
      limit: 200
      request:
        responseType: json
        method: GET
        url: ${parameters.domain}/v1/cp/organizations
        queryParams:
          - name: offset
            value: ${pagination.offset}
          - name: limit
            value: ${pagination.limit}
      output:
        select: "."
        map: "."
        outputMode: element
    withTemporalWindow: true
    temporalWindow:
      duration: 5m
      offset: 5m
      tz: UTC
      format: RFC3339
    withAuthentication: false
    withEnumerationPhase: false
    collectionPhase:
      paginationType: "cursor"
      cursor: ".pagination.nextCursor"
      limit: 100
      initialRequest:
        method: GET
        url: "https://${parameters.sentinelOneDomain}/web/api/v2.1/threats"
        headers:
          - name: Accept
            value: application/json
          - name: Authorization
            value: "ApiToken ${secrets.sentinelOneApiToken}"
        queryParams: 
          - name: createdAt__gte
            value: "${temporalWindow.from}"
          - name: createdAt__lte
            value: "${temporalWindow.to}"
      nextRequest:
        method: GET
        url: "https://${parameters.sentinelOneDomain}/web/api/v2.1/threats"
        headers:
          - name: Accept
            value: application/json
          - name: Authorization
            value: "ApiToken ${secrets.sentinelOneApiToken}"
      output:
        select: ".data"
        map: "."
        outputMode: element 
    withTemporalWindow: true
    temporalWindow:
      duration: 5m
      offset: 5m
      tz: UTC
      format: EpochMillis
    withAuthentication: true
    authentication:
      type: token
      token:
        request:
          method: POST
          url: https://${parameters.domain}/api/v3.0/authenticate
          headers:
            - name: Content-Type
              value: application/json
          bodyType: raw
          bodyRaw: |
            {
              "username": "${secrets.username}",
              "password": "${secrets.password}"
            }
        tokenPath: ".access_token"
        authInjection:
          in: header
          name: Authorization
          prefix: 'Bearer '
          suffix: ''
    withEnumerationPhase: false
    collectionPhase:
      paginationType: offsetLimit
      limit: 1000
      isZeroIndex: true
      request:
        responseType: json
        method: GET
        url: https://${parameters.domain}/api/v3.0/reputation-log
        queryParams:
          - name: from_time
            value: ${temporalWindow.from}
          - name: to_time
            value: ${temporalWindow.to}
          - name: offset
            value: ${pagination.offset}
          - name: limit
            value: ${pagination.limit}
      output:
        select: ".objects"
        map: "."
        outputMode: element
    withTemporalWindow: true
    temporalWindow:
      duration: 5m
      offset: 5m
      tz: UTC
      format: EpochMillis
    withAuthentication: true
    authentication:
      type: token
      token:
        request:
          method: POST
          url: https://${parameters.domain}/api/v3.0/authenticate
          headers:
            - name: Content-Type
              value: application/json
          bodyType: raw
          bodyRaw: |
            {
              "username": "${secrets.username}",
              "password": "${secrets.password}"
            }
        tokenPath: ".access_token"
        authInjection:
          in: header
          name: Authorization
          prefix: 'Bearer '
          suffix: ''
    withEnumerationPhase: false
    collectionPhase:
      paginationType: offsetLimit
      limit: 1000
      isZeroIndex: true
      request:
        responseType: json
        method: GET
        url: https://${parameters.domain}/api/v3.0/connections
        queryParams:
          - name: from_time
            value: ${temporalWindow.from}
          - name: to_time
            value: ${temporalWindow.to}
          - name: offset
            value: ${pagination.offset}
          - name: limit
            value: ${pagination.limit}
      output:
        select: ".objects"
        map: "."
        outputMode: element
    withTemporalWindow: true
    temporalWindow:
      duration: 5m
      offset: 5m
      tz: UTC
      format: Epoch
    withAuthentication: false
    withEnumerationPhase: false
    collectionPhase:
      paginationType: "fromTo"
      limit: 100
      request:
        method: "POST"
        url: "https://${parameters.CortexXdrDomain}/public_api/v1/alerts/get_alerts"
        headers:
          - name: Accept
            value: "application/json"
          - name: Content-Type
            value: "application/json"
          - name: Authorization
            value: "${secrets.CortexXdrAuthorization}"
          - name: x-xdr-auth-id
            value: ${secrets.CortexXdrAuthId}
        bodyType: raw
        bodyRaw: |
          {
            "request_data": {
              "search_from": ${pagination.from},
              "search_to": ${pagination.to},
              "filters": [
                {
                  "field": "creation_time",
                  "operator": "gte",
                  "value": ${temporalWindow.from}
                },
                {
                  "field": "creation_time",
                  "operator": "lte",
                  "value": ${temporalWindow.to}
                }
              ]
            }
          }
      output:
        select: ".reply.alerts"
        map: "."
        outputMode: "element"        
    withTemporalWindow: true
    temporalWindow:
      duration: 5m
      offset: 5m
      tz: UTC
      format: RFC3339
    withAuthentication: false
    withEnumerationPhase: false
    collectionPhase:
      paginationType: "cursor"
      cursor: ".pagination.nextCursor"
      limit: 100
      initialRequest:
        method: GET
        url: "https://${parameters.sentinelOneDomain}/web/api/v2.1/activities"
        headers:
          - name: Accept
            value: application/json
          - name: Authorization
            value: "ApiToken ${secrets.sentinelOneApiToken}"
        queryParams: 
          - name: createdAt__gte
            value: "${temporalWindow.from}"
          - name: createdAt__lte
            value: "${temporalWindow.to}"
      nextRequest:
        method: GET
        url: "https://${parameters.sentinelOneDomain}/web/api/v2.1/activities"
        headers:
          - name: Accept
            value: application/json
          - name: Authorization
            value: "ApiToken ${secrets.sentinelOneApiToken}"
      output:
        select: ".data"
        map: "."
        outputMode: element 
    FROM redis:latest
    
    EXPOSE 6379
    
    CMD ["redis-server"]
    
    ## build
    
    docker build -t my-redis-image .
    
    ## run
    
    docker run -d --name my-redis my-redis-image
    docker run -d --name my-redis -p 6379:6379 redis/redis-stack-server:latest
    docker exec -it {{ContainerID}} sh
    
    > redis-cli

    Value - ${pagination.limit}

  • Name - to

  • Value - ${temporalWindow.to}

  • Name - offset

  • Value - ${pagination.offset}

  • Name - limit

  • Value - ${pagination.limit}

  • URL* - https://${parameters.mydomain}/api/v1/logs

  • Headers

    • Name - Accept

    • Value - application/json

    • Name - Content-Type

    • Value - application/json

    • Name - Authorization

    • Value - SSWS ${secrets.OktaAuthorization}

  • Query Params

    • Name - since

    • Value - ${temporalWindow.from}

    • Name - until

    • Value - ${temporalWindow.to}

  • Output Mode - element

  • URL* - https://${parameters.CortexXdrDomain}/public_api/v1/incidents/get_incidents
  • Headers

    • Name - Accept

    • Value - application/json

    • Name - Content-Type

    • Value - application/json

    • Name - Authorization

    • Value - ${secrets.CortexXdrAuthorization}

    • Name - x-xdr-auth-id

    • Value - ${secrets.CortexXdrAuthId}

  • Body type* - raw

  • Body content* - { "request_data": { "search_from": ${pagination.from}, "search_to": ${pagination.to}, "filters": [ { "field": "creation_time", "operator": "gte", "value": ${temporalWindow.from}000 }, { "field": "creation_time", "operator": "lte", "value": ${temporalWindow.to}000 } ] } }

  • Output Mode - element

  • URL* - https://${parameters.CortexXdrDomain}/public_api/v2/alerts/get_alerts_multi_events
  • Headers

    • Name - Accept

    • Value - application/json

    • Name - Content-Type

    • Value - application/json

    • Name - Authorization

    • Value - ${secrets.CortexXdrAuthorization}

    • Name - x-xdr-auth-id

    • Value - ${secrets.CortexXdrAuthId}

  • Body type* - raw

  • Body content* - { "request_data": { "search_from": ${pagination.from}, "search_to": ${pagination.to}, "filters": [ { "field": "creation_time", "operator": "lte", "value": ${temporalWindow.to} } ] } }

  • Output Mode - element

  • Name - Accept
  • Value - application/json

  • Name - client_id

  • Value - '${secrets.client_id}'

  • Name - client_secret

  • Value - '${secrets.client_secret}'

  • Prefix - 'Bearer '

  • Suffix - ''

  • Query Params -

    • Name - offset

    • Value - ${pagination.offset}

    • Name - limit

    • Value - ${pagination.limit}

  • Output Mode - element

  • URL* - https://api.fortirecon.forticloud.com/easm/${parameters.organizationId}/breaches

  • Headers -

    • Name - Authorization

    • Value - ${secrets.fortireconAuth}

  • Query params

    • Name - page

    • Value - ${pagination.pageNumber}

    • Name - size

    • Value - ${pagination.pageSize}

    • Name - start_date

    • Value - ${temporalWindow.from}

    • Name - end_date

    • Value - ${temporalWindow.to}

  • Output Mode - element

  • Name - Accept
  • Value - application/json

  • Name - client_id

  • Value - '${secrets.client_id}'

  • Name - client_secret

  • Value - '${secrets.client_secret}'

  • Prefix - 'Bearer '

  • Suffix - ''

  • Query Params -

    • Name - offset

    • Value - ${pagination.offset}

    • Name - limit

    • Value - ${pagination.limit}

  • Output Mode - element

  • Name - Accept

  • Value - application/json

  • Name - Authorization

  • Value - ApiToken ${secrets.sentinelOneApiToken} where the dynamic variable is replaced with the value in the Secrets field entered above.

  • Query Params - defines query string parameters that are appended to the URL when making the HTTP request. These parameters are commonly used to filter, paginate, or otherwise control the behavior of the API response.

    • Name - createdAt_gte. createdAt refers to the timestamp field in the API's data, and _gte is a common query operator meaning "greater than or equal to".

    • Value - ${temporalWindow.from}. This dynamic value represents the start time of the temporal window.

    • Name - createdAt_lte (less than or equal to).

    • Value - ${temporalWindow.to}. This dynamic value represents the end time of the temporal window.

  • Name - Accept

  • Value - application/json

  • Name - Authorization

  • Value - ApiToken ${secrets.sentinelOneApiToken}, where the dynamic variable is replaced with the value in the Secrets field entered above.

  • Body type* - there is no required body type because the parameters are included in the URL. However, these fields are mandatory, so select raw and enter the {} placeholder.

  • Output Mode
    -
    element

    Name - Accept

  • Value - application/json

  • Name - Authorization

  • Value - ApiToken ${secrets.sentinelOneApiToken}, where the dynamic variable is replaced with the value in the Secrets field entered above.

  • Query Params - defines query string parameters that are appended to the URL when making the HTTP request. These parameters are commonly used to filter, paginate, or otherwise control the behavior of the API response.

    • Name - createdAt_gte. createdAt refers to the timestamp field in the API's data, and _gte is a common query operator meaning "greater than or equal to".

    • Value - ${temporalWindow.from}. This dynamic value represents the start time of the temporal window.

    • Name - createdAt_lte (less than or equal to).

    • Value - ${temporalWindow.to}. This dynamic value represents the end time of the temporal window.

  • Name - Accept

  • Value - application/json

  • Name - Authorization

  • Value - ApiToken ${secrets.sentinelOneApiToken}, where the dynamic variable is replaced with the value in the Secrets field entered above.

  • Body type* - there is no required body type because the parameters are included in the URL. However, these fields are mandatory, so select raw and enter the {} placeholder.

  • Output Mode - element

    Client key* - Select your client key from your Secrets or create a new one.

  • Skip verify - Select true to skip or false to require verification.

  • Server name - Enter the name of the server to connect to.

  • Minimum TLS version - Select the required minimum version from the menu.


    Name - Content-Type

  • Value - application/json

  • Body Type - raw

  • Body Raw - | { "username": "${secrets.PrismaCloudAccessKeyId}", "password": "${secrets.PrismaCloudAccessKeySecret}" }

  • Response Type - json

  • Prefix - Bearer

    Method* - POST

  • URL* - ${parameters.PrismaCloudEndpoint}/audit/api/v1/log

  • Headers

    • Name - Accept

    • Value - application/json

    • Name - Content-Type

    • Value - application/json

  • Response Type* - json

  • Body type* - raw

  • Body content* - { "timeRange": { "type": "absolute", "value": { "startTime": ${temporalWindow.from}, "endTime": ${temporalWindow.to} } } }

  • Next Request

    • Method* - POST

    • URL* - ${parameters.PrismaCloudEndpoint}/audit/api/v1/log

    • Headers

      • Name - Accept

      • Value - application/json

      • Name - Content-Type

    • Response Type* - json

    • Body type* - raw

    • Body content* - | { "timeRange": { "type": "absolute", "value": { "startTime": ${temporalWindow.from}, "endTime": ${temporalWindow.to} } }, "nextPageToken": ${pagination.cursor} }

  • Output

    • Select - .value

    • Map - .

    • Output Mode - element
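The Initial Request / Next Request pair above drives cursor-based pagination: the first call omits nextPageToken, and each follow-up call echoes the cursor returned by the previous response, until no cursor comes back. A minimal Python sketch of that loop, where `fetch` is a hypothetical stand-in for the HTTP POST (not the Listener's internal code):

```python
def build_body(start, end, cursor=None):
    """Build the audit-log request body shown above; the cursor is only
    attached on follow-up pages, mirroring the Initial/Next Request split."""
    body = {"timeRange": {"type": "absolute",
                          "value": {"startTime": start, "endTime": end}}}
    if cursor is not None:
        body["nextPageToken"] = cursor
    return body

def collect_all(fetch, start, end):
    """Drive cursor pagination: call `fetch` until the response stops
    returning a nextPageToken."""
    events, cursor = [], None
    while True:
        resp = fetch(build_body(start, end, cursor))
        events.extend(resp.get("value", []))   # Output "Select - .value"
        cursor = resp.get("nextPageToken")
        if not cursor:
            return events

# Hypothetical in-memory API standing in for the real endpoint.
pages = [{"value": [1, 2], "nextPageToken": "p2"}, {"value": [3]}]
fake_fetch = lambda body: pages.pop(0)
print(collect_all(fake_fetch, 0, 100))  # → [1, 2, 3]
```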

  • To add a Secret, open the Secret fields and click New secret:

    • Give the secret a Name.

    • Turn off the Expiration date option.

    • Click Add new value and paste the secret corresponding to the value.

    Method* - GET

  • URL* - https://${parameters.trendMicroDomain}/v3.0/oat/detections

  • Headers -

    • Name - Accept

    • Value - application/json

    • Name - Authorization

    • Value - Bearer ${secrets.trendMicroBearerToken}

  • Query Params

    • Name - detectedStartDateTime

    • Value - ${temporalWindow.from}

    • Name - detectedEndDateTime

    • Value - ${temporalWindow.to}

  • Body type* - there is no required body type because the parameters are included in the URL. However, these fields are mandatory, so select raw and enter the {} placeholder.

  • Output

    • Select - .items

    • Map - .

    • Output Mode - element

  • To add a Secret, open the Secret fields and click New secret:

    • Give the secret a Name.

    • Turn off the Expiration date option.

    • Click Add new value and paste the secret corresponding to the value.

  • Headers

    • Name - Content-type

    • Value - application/json

  • BodyType* - raw

    • Body Raw - | { "username": "${secrets.username}", "password": "${secrets.password}" }

  • Token Path* - .access_token

  • Auth Injection

    • In* - header

    • Name* - authorization

    • Prefix - Bearer

    • Suffix - ''

  • Request

    • Method* - GET

    • URL* - https://${parameters.domain}/api/v3.0/reputation-log

    • Query Params -

      • Name - from_time

      • Value - ${temporalWindow.from}

      • Name - to_time

  • Output

    • Select - .objects

    • Map - .

    • Output Mode - element
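The Auth Injection settings above describe how the token extracted via Token Path (.access_token) is attached to every subsequent request. A small Python sketch of that substitution (hypothetical helper names, not the Listener's implementation):

```python
def inject_auth(headers, token, name="authorization", prefix="Bearer", suffix=""):
    """Mirror the Auth Injection settings: place the token into the named
    header, wrapped with the configured prefix/suffix."""
    headers = dict(headers)                    # don't mutate the caller's dict
    headers[name] = f"{prefix} {token}{suffix}".strip()
    return headers

# Hypothetical token response, as the authentication phase would return it.
auth_response = {"access_token": "abc123", "token_type": "bearer"}
token = auth_response["access_token"]          # Token Path* - .access_token
headers = inject_auth({"Accept": "application/json"}, token)
print(headers["authorization"])                # → Bearer abc123
```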

  • Headers

    • Name - Content-type

    • Value - application/json

  • BodyType* - raw

    • Body Raw - | { "username": "${secrets.username}", "password": "${secrets.password}" }

  • Token Path* - .access_token

  • Auth Injection

    • In* - header

    • Name* - authorization

    • Prefix - Bearer

    • Suffix - ''

  • Request

    • Method* - GET

    • URL* - https://${parameters.domain}/api/v3.0/connections

    • Query Params -

      • Name - from_time

      • Value - ${temporalWindow.from}

      • Name - to_time

  • Output

    • Select - .objects

    • Map - .

    • Output Mode - element

  • Request

    • Response Type* - JSON

    • Method* - POST

    • URL* - https://${parameters.CortexXdrDomain}/public_api/v1/alerts/get_alerts

    • Headers

      • Name - Accept

      • Value - application/json

      • Name - Content-Type

    • Body type* - raw

    • Body content* - { "request_data": { "search_from": ${pagination.from}, "search_to": ${pagination.to}, "filters": [ { "field": "creation_time", "operator": "gte", "value": ${temporalWindow.from} }, { "field": "creation_time", "operator": "lte", "value": ${temporalWindow.to} } ] } }

  • Output

    • Select - .reply.alerts

    • Map - .

    • Output Mode - element
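The body above combines the temporal-window creation_time filters with search_from/search_to pagination. A Python sketch of how those pagination offsets would advance, assuming a hypothetical page size of 100:

```python
def xdr_body(search_from, search_to, window_from, window_to):
    """Assemble the get_alerts body shown above, with the pagination and
    temporal-window variables substituted in."""
    return {"request_data": {
        "search_from": search_from,
        "search_to": search_to,
        "filters": [
            {"field": "creation_time", "operator": "gte", "value": window_from},
            {"field": "creation_time", "operator": "lte", "value": window_to},
        ],
    }}

def page_ranges(total, page_size):
    """Yield the (search_from, search_to) pairs a paginated collection
    would iterate over."""
    for start in range(0, total, page_size):
        yield start, min(start + page_size, total)

print(list(page_ranges(250, 100)))  # → [(0, 100), (100, 200), (200, 250)]
```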

  • Click your Event Hubs namespace to view the Hubs it contains.
  • Scroll down to the bottom and click the specific event hub to connect to.

  • In the left menu, go to Shared Access Policies.

  • If there is no policy created for an event hub, create one with Manage, Send, or Listen access.

  • Select the policy from the list.

  • Select the copy button next to the Connection string-primary key field. Depending on the version of Azure you are using, the corresponding field may have a different name, so to help you find it, look for a string with the same format:

  • Endpoint=sb://.servicebus.windows.net/; SharedAccessKeyName=RootManageSharedAccessKey; SharedAccessKey=

    Now that you have it, open the Connection String* field and click New secret. In the window that appears, give your secret a Name* and turn off the Expiration date toggle if not needed. Then, click Add new value and paste the connection string. Click Save when you're done.

    Now, select the token you have just created in the Connection String* field.

    Learn more about secrets in this article.
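As a sanity check, the connection string shown above is a semicolon-separated list of key=value pairs. A short Python sketch that splits one into its parts (the namespace and key below are placeholders, not real credentials):

```python
def parse_connection_string(cs):
    """Split an Event Hubs connection string of the format shown above
    into its key/value parts."""
    parts = {}
    for segment in cs.split(";"):
        segment = segment.strip()
        if segment:
            # partition on the FIRST '=' so base64 keys ending in '=' survive
            key, _, value = segment.partition("=")
            parts[key] = value
    return parts

cs = ("Endpoint=sb://myns.servicebus.windows.net/;"
      "SharedAccessKeyName=RootManageSharedAccessKey;"
      "SharedAccessKey=abc123=")
print(parse_connection_string(cs)["SharedAccessKeyName"])  # → RootManageSharedAccessKey
```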

    Client Secret

    Use Azure Active Directory authentication with a registered application and client secret. This provides better security and access control. We recommend using this method for production environments and multi-tenant applications.

    Enter your Storage Account Name* and get the following credentials from the Certificates & Secrets area:

    • Tenant ID* - Azure AD tenant identifier.

    • Client ID* - Azure AD application (service principal) identifier.

    • Client Secret* - Secret key for your service principal. To add it, open the field and click New secret. In the window that appears, give your secret a Name* and turn off the Expiration date toggle if not needed. Then, click Add new value and paste your client secret. Click Save when you're done. Now, select the token you have just created in the Client Secret* field.

    Learn more about secrets in this article.

    Certificate

    Use Azure Active Directory authentication with a certificate instead of a secret. This is the most secure option. We recommend using this method for high-security production environments and compliance requirements.

    Enter your Storage Account Name* and get the following credentials from the Certificates & Secrets area:

    • Tenant ID* - Azure AD tenant identifier.

    • Client ID* - Azure AD application (service principal) identifier.

    • Certificate* - PEM-encoded certificate with private key. Open the field and click New secret. In the window that appears, give your secret a Name* and turn off the Expiration date toggle if not needed. Then, click Add new value and paste your certificate. Click Save when you're done. Now, select the token you have just created in the Certificate* field.

    Learn more about secrets in this article.

    * - Number of seconds messages should stay hidden from other consumers while processing. The minimum value is 1, and the maximum value is 604,800 (7 days).

  • Offset - 5m
  • Format - RFC3339

  • Authentication Phase

    Toggle ON to configure the authentication phase. This is required to get the token to pull data using OAuth.

    • Type* - token

    • Request Method* - POST

    • URL* - ${parameters.domain}/oauth2/token

    • Headers

      • Name - Content-type

      • Value - application/x-www-form-urlencoded

    • BodyType* - UrlEncoded

      • Body params

        • Name - grant_type

    • Token Path* - .access_token

    • Auth Injection

      • In* - header

      • Name* - authorization

    Enumeration Phase

    OFF

    Collection Phase

    • Pagination Type* - cursor

    • Cursor - .cursor

    • Initial Request

      • Method* - POST

      • URL* - https://${parameters.domain}/2/team_log/get_events

      • Headers -

        • Name - Content-Type

        • Value - application/json

      • Body Type - raw

      • Body Raw - | { "time": { "start_time": "${temporalWindow.from}", "end_time": "${temporalWindow.to}" } }

    • Next Request

      • Method* - POST

      • URL* - https://${parameters.domain}/2/team_log/get_events/continue

    • Output

      • Select - .events

      • Map - .

    4

    In the Socket section, enter the required Port. By default, all TCP ports from 1024 to 10000 are open.

    Note that you won't see the Socket and TLS configuration sections in the creation form if you're defining this Listener in a Cloud instance, as Onum already provides these. Learn more about Cloud Listeners in this article.

    5

    In the TLS configuration section, enter the data you received from the Onum team (Certificate, Private key and CA chain). Choose No client certificate as Client authentication method and TLS v.1.0 as the Minimum TLS version.

    Note that the parameters in this section are only mandatory if you decide to include TLS authentication in this Listener. Otherwise, leave blank.

    6

    If your connection does not require Authentication, leave as None. Otherwise, choose the authentication type and enter the details.

    The options provided will vary depending on the type chosen to authenticate your API. This is the type you have configured on the API side, so it can recognize the request.

    Basic

    Enter the following:

    • Username* - The user sending the request.

    • Password* - Choose the basic auth password from your list of Secrets or click New secret to create a new one.

    Bearer

    Bearer Token Authentication

    Enter your Token Secret for the API request using an existing Secret, or create a new one if you haven't stored it in Onum yet.

    This grants access without needing to send credentials (like username and password) in every request.

    Example

    Let's say you have the following configuration:

    API Key in URL Params

    Enter the following:

    • API Key Name* - A label assigned to the API key for identification. You can find it depending on where the API key was created.

    • API Key Value* -

    API Key in Header

    Enter the following:

    • API Key in Header Name* - A label assigned to the API key for identification. You can find it depending on where the API key was created.

    • API Key in Header Value* -

    Learn more about secrets in Onum in this article.

    7

    You can now select the secret you just created in the Token Secret field.

    8

    In the Endpoint section, choose GET, POST, or PUT method and the Path to the resource being requested from the server.

    9

    In the Message extraction section, the strategy defines how data extraction should be performed. It is the overall methodology or approach used to extract relevant information from HTTP messages. Choose between:

    • Single event with the whole request - Choose this option if you want to include the whole request in each event.

    • Single event from request path - Choose this option if you want to include the request paths in each event.

    • Single event as query string - Choose this option if you want to include the requests with their whole query strings.

    • Single event as query parameter - Choose this option if you want to include a specific request parameter in your events. Specify the required parameter name in the Extraction info option (for example: msg)

    • Single event as header - Choose this option if you want to include a specific header in your events. Specify the required header in the Extraction info option (for example: Message)

    • Single event as body (partially) - Choose this option if you want to include a part of the request body in your events. Specify the required RegEx rule to match the required part in the Extraction info option (for example: \\[BODY: (.+)\\])

    • Single event as body (full) - Choose this option if you want to include the whole request body in your events. Specify the required RegEx rule to match the required part in the Extraction info option (for example: \\[BODY: (.+)\\])

    • Multiple events at body with delimiter - Choose this option if you want to include several messages in the same event separated by a delimiter. You must specify the delimiter in the Extraction info option.

    • Multiple events at body as JSON array - Choose this option if you want to include several messages formatted as a JSON array in your events.

    • Multiple events at body as stacked JSON - Choose this option if you want to include several messages formatted as a stacked JSON in your events.
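A few of the strategies above can be sketched in Python to show how the Extraction info value is applied in each case (a hypothetical helper for illustration, not the Listener's actual implementation):

```python
import json, re

def extract(body, strategy, info=None):
    """Sketch of three of the extraction strategies described above.
    `info` plays the role of the Extraction info option."""
    if strategy == "body-partial":      # Single event as body (partially)
        match = re.search(info, body)   # info is the RegEx rule, e.g. \[BODY: (.+)\]
        return [match.group(1)] if match else []
    if strategy == "delimiter":         # Multiple events at body with delimiter
        return body.split(info)         # info is the delimiter
    if strategy == "json-array":        # Multiple events at body as JSON array
        return [json.dumps(item) for item in json.loads(body)]
    raise ValueError(strategy)

print(extract("[BODY: hello]", "body-partial", r"\[BODY: (.+)\]"))  # → ['hello']
print(extract("a|b|c", "delimiter", "|"))                           # → ['a', 'b', 'c']
print(extract('[{"x": 1}, {"x": 2}]', "json-array"))
```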

    10

    In the General behavior section, choose between None (default option), Allow (enter the required header keys below), or All (all headers will be retrieved in the headers field).

    11

    Then, configure the following settings:

    • Header keys - Enter the required header keys in this field. Click Add element for each one.

    • Exported headers format - Choose the required format for your headers. The default value is JSON.

    • Maximum message length - Maximum characters of the message. The default value is 4096.

    • Response code - Specify the response code to show when successful. The default value is 202 Accepted.

    • Response Content-Type -

      The Content-Type: xxx/xxx header lets the client know the format of the response (application/json by default):

      • text/plain - The message body contains plain text.

      • application/json - The message body is formatted as JSON.

    • Response text - The text that will show in case of success.

    12

    Copy the DNS Address details to configure your data source in order to communicate with Onum. This contains the IP address of the DNS (Domain Name System) server to connect to.

    Note that you will only see this section if you're defining this Listener in a Cloud instance. Learn more about Cloud Listeners in this article.

    13

    Finally, click Create labels. Optionally, you can set labels to be used for internal Onum routing of data. By default, data will be set as Unlabeled. Click Create listener when you're done.

    Learn more about labels in this article.

  • Offset - initial offset should be 5m
  • Format - RFC3339

  • Authentication Phase

    Toggle ON to configure the authentication phase. This is required to get the token to pull data using OAuth.

    • Type* - token

    • Request Method* - POST (we would need to generate the JWT using the secrets client_id and client_secret)

    • URL* - ${parameters.domain}/v1/cp/oauth/token

    • Headers

      • Name - Content-type

      • Value - application/x-www-form-urlencoded

    • BodyType* - UrlEncoded

      • Body params

        • Name - client_id

    • Token Path* - .access_token

    • Auth Injection

      • In* - header

      • Name* - authorization

    Enumeration Phase

    OFF

    Collection Phase

    • Pagination Type* - offset/Limit

    • Limit* - 200

    • Request

      • Response Type - JSON

      • Method* - GET

      • URL* - ${parameters.domain}/v1/cp/alert_events

      • Headers -

        • Name - start_date

        • Value - ${temporalWindow.from}

    • Output

      • Select - .alert_events

      • Map - .
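The Collection Phase above uses offset/limit pagination with a limit of 200: the offset advances by the limit until a short page signals the end of the data. A minimal Python sketch of that loop, using an in-memory list as a stand-in for the API:

```python
def paginate(fetch, limit=200):
    """Drive offset/limit pagination: advance the offset by `limit` until a
    short (or empty) page signals the end."""
    events, offset = [], 0
    while True:
        page = fetch(offset, limit)
        events.extend(page)
        if len(page) < limit:
            return events
        offset += limit

# Hypothetical in-memory source of 450 records standing in for the API.
data = list(range(450))
fake_fetch = lambda offset, limit: data[offset:offset + limit]
print(len(paginate(fake_fetch, 200)))  # → 450
```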


    • sendCookedData = false - CRITICAL. This ensures Splunk sends the raw, unprocessed log data (not Splunk's internal 'cooked' format).

    • Protocol - type = udp (or tcp). Specifies the transport protocol. Syslog traditionally uses UDP, but TCP is often preferred for reliability.

    4

    Enter the required Port and Protocol (TCP or UDP).

    Note that you won't see the Port and Protocol settings in the creation form if you're defining this Listener in a Cloud instance, as these are already provided by Onum.

    While UDP 514 is the standard, some implementations may use TCP 514 or other ports, depending on specific configurations or security requirements. To determine the syslog port value, check the configuration settings of your syslog server or consult the documentation for your specific device or application.

    5

    Choose the required Framing Method, which refers to how characters are handled in log messages sent via the Syslog protocol. Choose between:

    • Auto-Detect - automatically detect the framing method using the information provided.

    • Non-Transparent Framing (newline) - the newline characters (\n) within a log message are preserved as part of the message content and are not treated as delimiters or boundaries between separate messages.

    • Non-Transparent Framing (zero) - refers to the way zero-byte characters are handled. Any null byte (\0) characters that appear within the message body are preserved as part of the message and are not treated as delimiters or boundaries between separate messages.

    • Octet Counting (message length) - the Syslog message is preceded by a count of the length of the message in octets (bytes).
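Octet counting is the most robust of these framing methods because each message carries its length explicitly: a byte count, a space, then the message itself (RFC 6587-style framing). A Python sketch of how such a stream splits into messages (hypothetical parser for illustration, not the Listener's code):

```python
def parse_octet_counted(stream):
    """Parse octet-counted syslog framing: each message is prefixed with its
    length in bytes, a space, then the message body."""
    messages = []
    while stream:
        # the frame header is everything up to the first space
        length_str, _, rest = stream.partition(b" ")
        length = int(length_str)
        messages.append(rest[:length])   # exactly `length` bytes of message
        stream = rest[length:]           # the next frame starts right after
    return messages

frames = b"5 hello11 hello world"
print(parse_octet_counted(frames))  # → [b'hello', b'hello world']
```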

    6

    If you're using TLS authentication, enter the data you received from the Onum team in the TLS configuration section (Certificate, Private key and CA chain). Choose your Client authentication method and Minimum TLS version.

    • Note that the parameters in this section are only mandatory if you decide to include TLS authentication in this Listener. Otherwise, leave it blank.

    • Note that you won't see this section in the creation form if you're defining this Listener in a Cloud instance, as these are already provided by Onum. Learn more about Cloud Listeners in this article.

    7

    If you're using TLS authentication, contact Onum to get the cert information needed for TLS communication.

    The TLS credentials are saved in Onum as Secrets. In the TLS form, click New secret to create a new one:

    • Give the secret a Name.

    • Turn off the Expiration date option.

    • Click Add new value.

    • Click Save.

    Learn more about secrets in Onum in this article.

    You can now select the secret you just created in the corresponding fields.

    8

    Finally, click Create labels. Optionally, you can set labels to be used for internal Onum routing of data. By default, data will be set as Unlabeled. Click Create listener when you're done.

    Learn more about labels in this article.

    Configuration reference:

    • Syslog Group - [syslog] - Defines the default syslog output settings.

    • Default Group - defaultGroup = onum_syslog_group - Specifies the default target group for all unrouted data. If you only want to send a subset of logs, do not set a defaultGroup here.

    • Target Stanza - [syslog:onum_syslog_group] - Defines the specific ONUM destination.

    • Server - server = [<onum_siem_ip>|<onum_siem_host>]:<port> (for example, server = 10.1.1.200:514) - The IP address and port (default is UDP 514) of the ONUM Syslog receiver.

    See here for more information on types of Splunk forwarders.
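Putting the settings above together, a hypothetical outputs.conf fragment might look like this (the IP, port, and protocol are placeholders to adapt to your environment):

```ini
[syslog]
defaultGroup = onum_syslog_group

[syslog:onum_syslog_group]
server = 10.1.1.200:514
type = udp
sendCookedData = false
```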

    Data Format

  • Method* - GET

  • URL* - ${parameters.sentinelOneDomain}/web/api/v2.1/cloud-detection/alerts, where the parameters variable will be replaced by the domain entered earlier.

  • Headers

    • Name - Accept

    • Value - application/json

    • Name - Authorization

    • Value - ApiToken ${secrets.sentinelOneApiToken}, where the dynamic variable is replaced with the value in the Secrets field entered above.

  • Query Params - defines query string parameters that are appended to the URL when making the HTTP request. These parameters are commonly used to filter, paginate, or otherwise control the behavior of the API response.

    • Name - createdAt_gte. createdAt refers to the timestamp field in the API's data, and _gte is a common query operator meaning "greater than or equal to".

    • Value

  • Next Request

    • Method* - GET

    • URL* - ${parameters.sentinelOneDomain}/web/api/v2.1/cloud-detection/alerts, where the parameters variable will be replaced by the domain entered earlier.

    • Headers

      • Name - Accept

      • Value - application/json

      • Name - Authorization

    • Body type* - there is no required body type because the parameters are included in the URL. However, these fields are mandatory, so select raw and enter the {} placeholder.

  • Output

    • Select - .data

    • Map - .

    • Output Mode - element


    • IP-MIB - IP protocol (depends on SNMPv2-SMI, IF-MIB)

    • TCP-MIB - TCP protocol (depends on SNMPv2-SMI, IP-MIB)

    • UDP-MIB - UDP protocol (depends on SNMPv2-SMI, IP-MIB)

    • HOST-RESOURCES-MIB - Host resources (depends on SNMPv2-SMI)

    • ENTITY-MIB - Entity monitoring (depends on SNMPv2-SMI)

    4

    In the Version* section, select the required SNMP protocol version between v1, v2c, and v3.

    For v1 and v2c, you'll be prompted to enter the required Community*. The community string acts like a simple password to authenticate communication between the SNMP manager and the SNMP agent.

    For v3, you must choose a security level between:

    • noAuthNoPriv - Choose this option if no authentication is required:

      • Enter your username in the User* field that appears.

    • authNoPriv - Choose this option to set basic authentication:

      • Enter your username in the User* field.

      • Select the required authentication protocol (MD5 or SHA). Then, choose your Authentication Password* from your Secrets or click New secret to create a new one.

    • authPriv - Choose this option to set authentication + encryption:

      • Enter your username in the User* field.

      • Select the required authentication protocol (MD5 or SHA). Then, choose your Authentication Password* from your Secrets or click New secret to create a new one.

    5

    To create a new secret:

    • Give the secret a Name.

    • Turn off the Expiration date option.

    • Click Add new value and paste the secret corresponding to the user/password.

    • Click Save.

    Learn more about secrets in Onum in this article.

    6

    Enter the UDP port to listen for traps.

    7

    Finally, click Create labels. Optionally, you can set labels to be used for internal Onum routing of data. By default, data will be set as Unlabeled. Click Create listener when you're done.

    Learn more about labels in this article.

    • SNMPv2-SMI - Base SMI definitions (no dependencies)

    • SNMPv2-TC - Base TC definitions (depends on SNMPv2-SMI)

    • SNMPv2-MIB - Core SNMP MIB (depends on SNMPv2-SMI, SNMPv2-TC)

    • IF-MIB - Interface monitoring (depends on SNMPv2-SMI)

    • _raw - Complete trap data as JSON

    • sourceIp - Source IP address

    • sourcePort - Source port number

    • version - SNMP version used

    • mibName - MIB name if parsing is successful, none otherwise

  • Method* - GET

  • URL* - https://${parameters.sentinelOneDomain}/web/api/v2.1/reports, where the parameters variable will be replaced by the domain entered earlier.

  • Headers

    • Name - Accept

    • Value - application/json

    • Name - Authorization

    • Value - ApiToken ${secrets.sentinelOneApiToken}, where the dynamic variable is replaced with the value in the Secrets field entered above.

  • Query Params - defines query string parameters that are appended to the URL when making the HTTP request. These parameters are commonly used to filter, paginate, or otherwise control the behavior of the API response.

    • Name - createdAt_gte. createdAt refers to the timestamp field in the API's data, and _gte is a common query operator meaning "greater than or equal to".

    • Value

  • Next Request

    • Method* - GET

    • URL* - https://${parameters.sentinelOneDomain}/web/api/v2.1/reports, where the parameters variable will be replaced by the domain entered earlier.

    • Headers

      • Name - Accept

      • Value - application/json

      • Name - Authorization

    • Body type* - there is no required body type because the parameters are included in the URL. However, these fields are mandatory, so select raw and enter the {} placeholder.

  • Output

    • Select - .data

    • Map - .

    • Output Mode - element

  • Offset - 5m
  • Format - EpochMillis

  • Authentication Phase

    Toggle ON to configure the authentication phase. This is required to get the token to pull data using OAuth.

    • Type* - token

    • Request Method* - POST

    • URL* - https://${parameters.domain}/api/v3.0/authenticate

    • Headers

      • Name - Content-type

      • Value - application/json

    • BodyType* - raw

      • Body Raw - | { "username": "${secrets.username}", "password": "${secrets.password}" }

    • Token Path* - .access_token

    • Auth Injection

      • In* - header

      • Name* - authorization

    Enumeration Phase

    OFF

    Collection Phase

    • Pagination Type* - offsetLimit

    • Limit - 1000

    • Zero Index - true

    • Request

      • Method* - GET

      • URL* - https://${parameters.domain}/api/v3.0/incidents

    • Output

      • Select - .objects

      • Map - .

    Parameters
    • parameters.domain will store the value of the API URL, excluding the endpoint paths like /v1/cp/oauth/token or /v1/cp/incidents

    Secrets

    • secrets.client_id will reference the Client ID

    • secrets.client_secret will reference the Client Secret.

    Open the Secret fields and click New secret to create a new one:

    • Give the secret a Name.

    • Turn off the Expiration date option.

    • Click Add new value and paste the secret corresponding to the value.

    • Click Save.

    Learn more about secrets in Onum in this article.

    You can now select the secret you just created in the corresponding fields.

    After entering the required parameters and secrets, you can choose to manually enter the Falcon API Alerts fields, or simply paste the given YAML:

    Toggle this ON to enable a free text field where you can paste your CrowdStrike Falcon API YAML.

    Temporal Window

    Toggle ON to add a temporal window for events. This repeatedly shifts the time window over which data is collected.

    • Duration - 5 minutes (5m) as default, adjust based on your needs.

    • Offset - 5m

    • Format - RFC3339

    Authentication Phase

    Toggle ON to configure the authentication phase. This is required to get the token to pull data using OAuth.

    • Type* - token

    • Request Method* - POST (we would need to generate the JWT using the secrets client_id and client_secret)

    Enumeration Phase

    Toggle ON to configure the enumeration phase. This API endpoint requires an initial request that returns a list of alert IDs; a follow-up request is then needed to retrieve the details for those IDs.

    • Pagination Type* - offset/limit

    • Limit - 100

    • Request

    Collection Phase

    • Variables

      • Source - input

      • Name - resources

    This HTTP Pull Listener now uses the data export API to extract incidents.

    Click Create labels to move on to the next step and define the required Labels if needed.

    Parameters
    • parameters.domain will store the value of the API URL, excluding the endpoint paths like /v1/cp/oauth/token or /v1/cp/alerts

    Secrets

    • secrets.client_id will reference the Client ID

    • secrets.client_secret will reference the Client Secret.

    Open the Secret fields and click New secret to create a new one:

    • Give the secret a Name.

    • Turn off the Expiration date option.

    • Click Add new value and paste the secret corresponding to the value.

    • Click Save.

    Learn more about secrets in Onum in this article.

    You can now select the secret you just created in the corresponding fields.

    After entering the required parameters and secrets, you can choose to manually enter the Falcon API Alerts fields, or simply paste the given YAML:

    Toggle this ON to enable a free text field where you can paste your CrowdStrike Falcon API YAML.

    Temporal Window

    Toggle ON to add a temporal window for events. This repeatedly shifts the time window over which data is collected.

    • Duration - 5 minutes (5m) as default, adjust based on your needs.

    • Offset - initial offset should be 0 (the latest alert).

    • Format - RFC3339

    Authentication Phase

    Toggle ON to configure the authentication phase. This is required to get the token to pull data using OAuth.

    • Type* - token

    • Request Method* - POST (we would need to generate the JWT using the secrets client_id and client_secret)

    Enumeration Phase

    Toggle ON to configure the enumeration phase. This API endpoint requires an initial request that returns a list of alert IDs; a follow-up request is then needed to retrieve the details for those IDs.

    • Pagination Type* - Offset/Limit

    • Zero index* - false

    • Limit* - 100

    Collection Phase

    • Pagination Type* - none

    • Request

      • Method* - POST

    This HTTP Pull Listener now uses the data export API to extract events.

    Click Create labels to move on to the next step and define the required Labels if needed.

    1. General settings

    This pane shows the general properties of your Pipeline. Click the ellipses next to its name to Copy ID.

    Depending on your permissions, you can view or modify:

    • Name: Every Pipeline is created with a default name, which we recommend changing. You can modify the name at any time by clicking the pencil icon next to the Pipeline's name.

    • Tags: Click the tag icon to open the menu.

    • Clusters: Here you can see how many clusters your Pipeline is running in, as well as update them.

    • Versions: View and run multiple versions of the Pipeline.

    • Stop/Start Pipeline: Stop and start the Pipeline in some or all of the clusters it is running in.

    • Publish

    When you modify your Pipeline, you will be creating a new version. When your modifications are complete, you can Publish this new version using this button in the top right.

    Go to Managing versions to learn more.

    You can carry out all these actions in bulk if you wish to modify more than one Pipeline at a time.


    2. The metrics bar

    If the Pipeline is running, the Metrics bar provides a visual, graphical overview of the data being processed in your Pipeline.

    • Events In: View the total events in per second for the selected period, compared to the previous range (in %).

    • Bytes In: The total bytes in per second for the selected time range, compared to the previous (in %).

    • Events Out: View the total events out per second for the selected period, compared to the previous range (in %).

    • Bytes Out: The total bytes out per second for the selected time range, compared to the previous (in %).

    • Latency: The time (in nanoseconds) it takes for data to travel from one point to another, compared to the previous (in %).

    Set a time range

    You can set a time range to view the metrics for a specific period of time. This will be used to calculate the percentages, compared to the previous period of the same length.

    Go to Selecting a Time Range to learn more about the specifics of how this works.

    Hide/Show metrics

    Use the Hide metrics/Show metrics button to hide/show the metrics pane.


    3. Add to the Pipeline

    Simply drag and drop an element from the left-hand side onto the canvas to add it to your Pipeline.

    For Listeners, you can drag the specific Label down to the required level. Once in the Pipeline, you can see which Listener the label belongs to by hovering over it, or in the Metrics area of the configuration pane.


    4. Canvas

    The canvas is where you will build your Pipeline. Drag and drop an element from the left pane to add it to your Pipeline.

    Click it in the canvas to open its Properties.

    Delete a node

    If you have enough permissions to modify this Pipeline, click the node in the canvas and select the Remove icon.

    Create links between your nodes to create a flow of data between them. Learn more about links below.


    5. Navigation options

    Zoom in/out, Center, undo, and redo changes using the buttons on the right.

    Use the window in the bottom-right to move around the Canvas.

    Connect the separate nodes of the canvas to form a Pipeline from start to finish.

    Simply click the port you wish to link from and drag to the port you wish to link to. When you let go, you will see a link form between the two.

    To unlink, click anywhere on the link and select Unlink in the menu.


    Ports

    Notice the ports of each element in the canvas. Ports are used as connectors to other nodes of the Pipeline, linking either incoming or outgoing data.

    Listener: As a Listener is used to send information on, there are no in ports, and one out port.

    Action: Actions generally have one in port, through which data is injected. When information is output, it will be sent via the default port. If there are problems sending on the data, it will not be lost, but rather output via the error port.

    Datasink: A datasink is the end stop for our data, so there is only one in port that receives your processed data.

    Click one to read more about how to configure them:

    This Action does not generate new events. Instead, it processes incoming events to detect sensitive information based on the configured Info Types and returns the corresponding findings.

    In order to configure this action, you must first link it to a Listener. Go to Building a Pipeline to learn how this works.

    Ports

    These are the input and output ports of this Action:

    Input ports
    • Default port - All the events to be processed by this Action enter through this port.

    Output ports
    • Error port - Events are sent through this port if an error occurs while processing them.

    Configuration

    1

    Find Google DLP in the Actions tab (under the Advanced group) and drag it onto the canvas.

    2

    To open the configuration, click the Action in the canvas and select Configuration.

    3

    Enter the required parameters:

    Parameter
    Description
    4

    Click Save to complete the process.

    Example

    Imagine you want to ensure that logs sent to a third-party service do not contain sensitive information such as credit card numbers, personal identification numbers, or passwords. To do it:

    1

    Add the Google DLP Action to your Pipeline and link it to your required Data sink.

    2

    Now, double-click the Google DLP Action to configure it. You need to set the following config:

    Parameter
    Description

    Info Types

    3

    Click Save to apply the configuration.

    4

    Now link the Default output port of the Action to the input port of your Data sink.

    5

    Finally, click Publish and choose in which clusters you want to publish the Pipeline.

    6

    Click Test pipeline at the top of the area and choose a specific number of events to test if your data is transformed properly. Click Debug to proceed.

    This is the input data field we chose for our analysis:

    And this is a sample output data with the corresponding results of the DLP API:

    Check out our using this action.

    The graph at the top plots the data volume going through your Pipelines. The purple line graph represents the events in, and the blue one represents the events going out. Use the buttons above the graph to switch between Events/Bytes, and the Frequency slider bar to choose how frequently you want to plot the events/bytes in the chart.

  • At the bottom, you will find a list of all the Pipelines in your tenant. You can switch between the Cards view, which shows each Pipeline in a card, and the Table view, which displays Pipelines listed in a table. Learn more about the cards and table views in this article.

  • Narrow Down Your Data

    There are various ways to narrow down what you see in this view, both the Pipeline list and the informative graphs. To do it, use the options at the top of this view:

    Add Filters

    Add filters to narrow down the Pipelines you see in the list. Click the + Add filter button and select the required filter type(s). You can filter by:

    • Name: Select a Condition (Contains, Equals, or Matches) and a Value to filter Pipelines by their names.

    • Status: Choose the status(es) you want to filter by: Draft, Running, and/or Stopped. You'll only see Pipelines with the selected status(es).

    • Created by: Filter for the creator of the Pipeline in the window that appears.

    • Updated by: Filter by the user(s) who last updated the Pipeline.

    The filters applied will appear as tags at the top of the view.

    Note that you can only add one filter of each type.

    Select a Time Range

    If you wish to see data for a specific time period, this is the place to click. Go to Selecting a Time Range to dive into the specifics of how the time range works.

    Select Tags

    You can choose to view only those Pipelines that have been assigned the desired tags. You can create these tags in the Pipeline settings or from the cards view. Press the Enter key to confirm the tag, then Save.

    To filter by tags, click the + Tags button and select the required tag(s).

    Metrics

    Below the filters, you will see 3 metrics informing you about various components in your Pipelines.

    Note that these metrics are affected by the time range selected.

    Listeners

    View the events per second (EPS) ingested by all Listeners in your Pipelines for the selected time range, as well as the difference in percentage compared to the previous lapse.

    Data Sink

    View the events per second (EPS) sent by all Data Sinks in your Pipelines for the selected time range, as well as the difference in percentage compared to the previous.

    Data Volume

    See the overall data volume processed by all Pipelines for the selected time range, and the difference in percentage with the previous.

    Visualize Your Data In/Out

    Select between In and Out to see the volume received or sent by your Pipelines for the selected time range. The line graph represents the Events and the bar graph represents Bytes.

    Hover over a point on the chart to show a tooltip containing the Events and Bytes in/out for the selected time, as well as the percentage increase/decrease compared to the previous period.

    You can also analyze a different time range directly on the graph. To do it, click a starting date in the graph and drag the frame that appears until the required ending date. The time range above will also be updated.

    Pipelines List

    At the bottom, you have a list of all the Pipelines in your tenant.

    Use the Group by drop-down menu on the right to select a criterion to organize your Pipelines into different groups (Status or None). You can also use the search icon to look for specific Pipelines by name.

    Use the buttons at the left of this area to display the Pipelines as Cards or listed in a Table:

    Cards View

    In this view, Pipelines are displayed as cards that display useful information. Click a card to open the Pipeline detail view, or double-click it to access it.

    This is the information you can check on each card:

    • The percentage at the top left corner indicates the amount of data that goes out of the Pipeline compared to the total incoming events, so you can check how data is optimized at a glance. Hover over it to see the in/out data in bytes and the estimation over the next 24 hours.

    • You can also see the status of the Pipeline (Running, Draft, or Stopped).

    • Next to the status, you can check the Pipeline's current version.

    • Click the Add tag button to define tags for the Pipeline. To assign a new tag, simply type the name you wish to assign, make sure to press Enter, and then select the Save button. If the Pipeline has tags defined already, you'll see the number of tags next to the tag icon.

    • Click the ellipses in the right-hand corner of the card to reveal the options to Edit, Copy ID, or Remove it.

    Table view

    In this view, Pipelines are displayed in a table, where each row represents a Pipeline. Click a row to open the Pipeline detail view, or double-click it to access it.

    Click the cog icon at the top left corner to rearrange the column order, hide columns, or pin them. You can click Reset to recover the default configuration.

    Pipeline detail view

    Click a Pipeline to open its settings in the right-hand pane. Here you can see Pipeline versions and edit the Pipeline. Click the ellipses in the top right to Copy ID or Duplicate / Remove it.

    The details pane is split into three tabs showing the Pipeline at different statuses:

    Tab
    Description

    Running

    This is the main tab, where you can see details of the Pipeline versions that are currently running.

    Select the drop-down next to the Pipeline version name to see which clusters the Pipeline is currently running in.

    Draft

    Check the details of the draft versions of your Pipeline.

    Stopped

    Check the details of the versions of your Pipeline that are currently stopped.

    Once you have located the Pipeline to work with, click Edit Pipeline to open it.

    Duplicate a Pipeline

    If you wish to use a Pipeline just like the one you are currently working on, click the ellipses in the Card or Table view, or in the Configuration pane, and select Duplicate.

    Create a Pipeline

    Depending on your permissions, you can create a new Pipeline from this view. There are several ways to create a new Pipeline:


    From the Pipelines view


    From the Home page


    This will open the new Pipeline, ready to be built.

    Give your Pipeline a name and add optional Tags to identify it. You can also assign a Version in the top-right.

    Keep reading to learn how to build a Pipeline from this view.

    Build your Pipeline

    See Building a Pipeline to learn step by step.

    Actions
    Listeners
    Data sinks

    Input*

    Choose the field that contains the list you want to divide.

    Output field*

    Name of the new field where each iterated element will be stored. This will be the same type as the input field list.

    Index field*

    Name of the new field that will show the position of each element in the list.

    In this case, we choose the field ipsList that contains our IP list.

    Output field

    We're naming the new field ipValue.

    Index field

    We're naming the new field ipIndex.
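    Putting the three fields together, a single event containing a list would be split roughly like this. This is an illustrative sketch with made-up values, and it assumes the index is zero-based and that one event is emitted per list element:

    ```json
    // Input event
    {"ipsList": ["10.0.0.1", "10.0.0.2"]}

    // Output: one event per list element
    {"ipsList": ["10.0.0.1", "10.0.0.2"], "ipValue": "10.0.0.1", "ipIndex": 0}
    {"ipsList": ["10.0.0.1", "10.0.0.2"], "ipValue": "10.0.0.2", "ipIndex": 1}
    ```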

    Endpoint*

    Enter the endpoint used to establish the connection to the Redis server.

    Read Timeout*

    Enter the maximum number of milliseconds to wait to receive data after the connection has been established and the request has been sent.

    Write Timeout*

    Enter the maximum number of milliseconds to wait while trying to send data to the server.


    Event Stream

    Overview

    Get a list of event streams from CrowdStrike Falcon.

    Configuration

    Parameters

    • parameters.domain will store the value of the API URL, excluding the endpoint paths like /v1/cp/oauth/token or /v1/cp/event_stream

    Secrets

    • secrets.client_id will reference the Client ID

    • secrets.client_secret will reference the Client Secret.

    Open the Secret fields and click New secret to create a new one:

    • Give the secret a Name.

    • Turn off the Expiration date option.

    • Click Add new value and paste the secret corresponding to the value.

    • Click Save.

    Learn more about secrets in Onum in this article.

    You can now select the secret you just created in the corresponding fields.

    After entering the required parameters and secrets, you can choose to manually enter the Falcon API Alerts fields, or simply paste the given YAML:

    Toggle this ON to enable a free text field where you can paste your CrowdStrike Falcon API YAML.

    Temporal Window

    Toggle ON to add a temporal window for events. This repeatedly shifts the time window over which data is collected.

    • Duration - 30 minutes (30m) as default, adjust based on your needs.

    This HTTP Pull Listener now uses the data export API to extract events.
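    For reference, the temporal window described above would appear in the YAML roughly as follows. This is a minimal sketch; the authentication and collection blocks follow the same pattern as the other Falcon listeners:

    ```yaml
    # Sketch: temporal window for the Event Stream listener
    withTemporalWindow: true
    temporalWindow:
      duration: 30m   # default; adjust based on your needs
      format: RFC3339
    ```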

    Click Create labels to move on to the next step and define the required Labels if needed.

    Field Generator

    Most recent version: v1.1.0

    See the changelog of this Action type here.

    Overview

    The Field Generator action allows you to add new fields to your events using a given operation. You can select one or more operations to execute, and their resulting values will be set in user-defined event fields.

    In order to configure this action, you must first link it to a Listener or another Action. Go to Building a Pipeline to learn how this works.

    Ports

    These are the input and output ports of this Action:

    Input ports
    • Default port - All the events to be processed by this Action enter through this port.

    Output ports
    • Default port - Events are sent through this port if no error occurs while processing them.

    • Error port - Events are sent through this port if an error occurs while processing them.

    Configuration

    1

    Find Field Generator in the Actions tab (under the Advanced group) and drag it onto the canvas. Link it to the required Listener and Data sink.

    2

    To open the configuration, click the Action in the canvas and select Configuration.

    3

    Example

    Imagine we want to add a couple of new fields to our events. We want a new field that indicates the current Epoch time and another that adds the string Test in each event. To do it:

    1

    Add the Field Generator Action to your Pipeline and link it to your required Data sink.

    2

    Now, double-click the Field Generator Action to configure it. You need to set the following config:

    Operation
    Parameters

    This is how your data will be transformed with the new fields:
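    As a rough sketch, assuming the two operations write to fields named epochTime and testField (illustrative names, and an illustrative epoch value), an event would gain the new fields like this:

    ```json
    // Before
    {"message": "login ok"}

    // After
    {"message": "login ok", "epochTime": 1715600000, "testField": "Test"}
    ```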

    Actions

    Perform operations on your events

    Overview

    The Actions tab shows all available actions to be assigned and used in your Pipeline. Use the search bar at the top to find a specific action. Hover over an action in the list to see a tooltip, as well as the option to View details.

    To add an action to a Pipeline, drag it onto the canvas.

    Onum supports action versioning, so be aware that the configuration may show either the Latest version, if you are adding a new action, or the current version, if you are editing an existing one.

    Action Versioning

    We are constantly updating and improving Actions, so you may come across old or even discontinued actions.

    See the complete version history of each Action here.

    If there is an updated version of the Action available, it will show update available in its Definition, above the node when added to a Pipeline, and Details pane.

    If you have added an Action to a Pipeline that is now discontinued, it will show as deactivated in the Canvas. You'll soon be able to see all the Actions with updates available in the

    Actions List

    See this table to understand what each Action does, when to use it, and how to get the most value from your Pipelines. Click an Action name to see its article.

    Action
    Description
    Example use case

    Collect data from Azure Event Hubs

    Most recent version: v2.0.0

    This is a Pull Listener and therefore should not be used in environments with more than one cluster.

    See the changelog of the Azure Event Hubs Listener here.

    Incident Management - Incidents Extradata

    Overview

    Get the extradata associated with all the incidents within a time range defined by the time window.

    • Filter conditions are concatenated using the AND condition (OR is not supported).

    HTTP Request

    Most recent version: v0.0.3

    See the changelog of this Action type here.

    Overview

    The HTTP Request action allows you to configure and execute HTTP requests with custom settings for methods, headers, authentication, TLS, and more.

    To add a Secret, open the Secret fields and click New secret:

    • Give the secret a Name.

    • Turn off the Expiration date option.

    • Click Add new value and paste the secret corresponding to the value.

    withTemporalWindow: true
    temporalWindow:
      duration: 5m
      offset: 5m
      tz: UTC
      format: RFC3339
    withAuthentication: true
    authentication:
      type: token
      token:
        request:
          method: POST
          url: https://${parameters.domain}/oauth2/token
          headers:
            - name: Content-Type
              value: application/x-www-form-urlencoded
          bodyType: urlEncoded
          bodyParams:
            - name: grant_type
              value: refresh_token
            - name: refresh_token
              value: '${secrets.refresh_token}'
            - name: client_id
              value: '${secrets.client_id}'
            - name: client_secret
              value: '${secrets.client_secret}'
        tokenPath: ".access_token"
        authInjection:
          in: header
          name: Authorization
          prefix: 'Bearer '
          suffix: ''
    withEnumerationPhase: false
    collectionPhase:
      paginationType: "cursor"
      cursor: ".cursor"
      initialRequest:
        method: POST
        url: "https://${parameters.domain}/2/team_log/get_events"
        headers:
          - name: Content-Type
            value: application/json
        bodyType: raw
        bodyRaw: |
          {
            "time": {
                "start_time": "${temporalWindow.from}",
                "end_time": "${temporalWindow.to}"
            }
          }
      nextRequest:
        method: POST
        url: "https://${parameters.domain}/2/team_log/get_events/continue"
        headers:
          - name: Content-Type
            value: application/json
        bodyRaw: |
          {
            "cursor": "${pagination.cursor}" 
          }
      output:
        select: ".events"
        map: "."
        outputMode: element
    withTemporalWindow: true
    temporalWindow:
      duration: 5m
      offset: 5m
      tz: UTC
      format: RFC3339
    withAuthentication: true
    authentication:
      type: token
      token:
        request:
          method: POST
          url: ${parameters.domain}/v1/cp/oauth/token
          headers:
            - name: Content-Type
              value: application/x-www-form-urlencoded
            - name: Accept
              value: application/json
          bodyType: urlEncoded
          bodyParams:
            - name: client_id
              value: '${secrets.client_id}'
            - name: client_secret
              value: '${secrets.client_secret}'
        tokenPath: ".access_token"
        authInjection:
          in: header
          name: Authorization
          prefix: 'Bearer '
          suffix: ''
    withEnumerationPhase: false
    collectionPhase:
      paginationType: offsetLimit
      limit: 200
      request:
        responseType: json
        method: GET
        url: ${parameters.domain}/v1/cp/alert_events
        queryParams:
          - name: start_date
            value: ${temporalWindow.from}
          - name: end_date
            value: ${temporalWindow.to}
          - name: offset
            value: ${pagination.offset}
          - name: limit
            value: ${pagination.limit}
      output:
        select: ".alert_events"
        map: "."
        outputMode: element
    # $SPLUNK_HOME/etc/system/local/outputs.conf
    
    [syslog]
    defaultGroup = onum_syslog_group 
    
    [syslog:onum_syslog_group]
    server = 10.1.1.200:514
    sendCookedData = false
    type = udp
    withTemporalWindow: true
    temporalWindow:
      duration: 5m
      offset: 5m
      tz: UTC
      format: RFC3339
    withAuthentication: false
    withEnumerationPhase: false
    collectionPhase:
      paginationType: "cursor"
      cursor: ".pagination.nextCursor"
      limit: 100
      initialRequest:
        method: GET
        url: "https://${parameters.sentinelOneDomain}/web/api/v2.1/cloud-detection/alerts"
        headers:
          - name: Accept
            value: application/json
          - name: Authorization
            value: "ApiToken ${secrets.sentinelOneApiToken}"
        queryParams: 
          - name: createdAt__gte
            value: "${temporalWindow.from}"
          - name: createdAt__lte
            value: "${temporalWindow.to}"
      nextRequest:
        method: GET
        url: "https://${parameters.sentinelOneDomain}/web/api/v2.1/cloud-detection/alerts"
        headers:
          - name: Accept
            value: application/json
          - name: Authorization
            value: "ApiToken ${secrets.sentinelOneApiToken}"
      output:
        select: ".data"
        map: "."
        outputMode: element 
    {
      "_raw": "{\"agent_addr\":\"10.123.54.210\",\"generic_trap\":6,\"specific_trap\":1,\"enterprise\":\"1.3.6.1.4.1.18494.2\",\"variable_bindings\":{\"1.3.6.1.4.1.18494.2.1.1\":\"ACCESS\"}}",
      "sourceIp": "10.123.54.210",
      "sourcePort": 12345,
      "version": "v1",
      "mibName": "SWIFT-MIB"
    }
    withTemporalWindow: true
    temporalWindow:
      duration: 5m
      offset: 5m
      tz: UTC
      format: RFC3339
    withAuthentication: false
    withEnumerationPhase: false
    collectionPhase:
      paginationType: "cursor"
      cursor: ".pagination.nextCursor"
      limit: 100
      initialRequest:
        method: GET
        url: "https://${parameters.sentinelOneDomain}/web/api/v2.1/reports"
        headers:
          - name: Accept
            value: application/json
          - name: Authorization
            value: "ApiToken ${secrets.sentinelOneApiToken}"
        queryParams: 
          - name: createdAt__gte
            value: "${temporalWindow.from}"
          - name: createdAt__lte
            value: "${temporalWindow.to}"
      nextRequest:
        method: GET
        url: "https://${parameters.sentinelOneDomain}/web/api/v2.1/reports"
        headers:
          - name: Accept
            value: application/json
          - name: Authorization
            value: "ApiToken ${secrets.sentinelOneApiToken}"
      output:
        select: ".data"
        map: "."
        outputMode: element 
    withTemporalWindow: true
    temporalWindow:
      duration: 5m
      offset: 5m
      tz: UTC
      format: EpochMillis
    withAuthentication: true
    authentication:
      type: token
      token:
        request:
          method: POST
          url: https://${parameters.domain}/api/v3.0/authenticate
          headers:
            - name: Content-Type
              value: application/json
          bodyType: raw
          bodyRaw: |
            {
              "username": "${secrets.username}",
              "password": "${secrets.password}"
            }
        tokenPath: ".access_token"
        authInjection:
          in: header
          name: Authorization
          prefix: 'Bearer '
          suffix: ''
    withEnumerationPhase: false
    collectionPhase:
      paginationType: offsetLimit
      limit: 1000
      isZeroIndex: true
      request:
        responseType: json
        method: GET
        url: https://${parameters.domain}/api/v3.0/incidents
        queryParams:
          - name: from_time
            value: ${temporalWindow.from}
          - name: to_time
            value: ${temporalWindow.to}
          - name: offset
            value: ${pagination.offset}
          - name: limit
            value: ${pagination.limit}
      output:
        select: ".objects"
        map: "."
        outputMode: element
    withTemporalWindow: true
    temporalWindow:
      duration: 5m
      offset: 5m
      tz: UTC
      format: RFC3339
    withAuthentication: true
    authentication:
      type: token
      token:
        request:
          method: POST
          url: ${parameters.domain}/oauth2/token
          headers:
            - name: Content-Type
              value: application/x-www-form-urlencoded
          bodyType: urlEncoded
          bodyParams:
            - name: grant_type
              value: client_credentials
            - name: client_id
              value: '${secrets.client_id}'
            - name: client_secret
              value: '${secrets.client_secret}'
        tokenPath: ".access_token"
        authInjection:
          in: header
          name: Authorization
          prefix: 'Bearer '
          suffix: ''
    withEnumerationPhase: true
    enumerationPhase:
      paginationType: offsetLimit
      limit: 100
      request:
        responseType: json
        method: GET
        url: ${parameters.domain}/incidents/queries/incidents/v1
        queryParams:
          - name: offset
            value: ${pagination.offset}
          - name: limit
            value: ${pagination.limit}
          - name: filter
            value: start:>='${temporalWindow.from}'+end:<'${temporalWindow.to}'
      output:
        select: ".resources"
        map: "."
        outputMode: collection
    collectionPhase:
      variables:
        - source: input
          name: resources
          expression: "."
          format: "json"
      paginationType: none
      request:
        method: POST
        url: ${parameters.domain}/incidents/entities/incidents/GET/v1
        headers:
          - name: Accept
            value: application/json
          - name: Content-Type
            value: application/json
        responseType: json
        bodyType: raw
        bodyRaw: |
          {
            "ids": ${inputs.resources}
          }
      output:
        select: ".resources"
        map: "."
        outputMode: element
    withTemporalWindow: true
    temporalWindow:
      duration: 5m
      offset: 5m
      tz: UTC
      format: RFC3339
    withAuthentication: true
    authentication:
      type: token
      token:
        request:
          method: POST
          url: ${parameters.domain}/oauth2/token
          headers:
            - name: Content-Type
              value: application/x-www-form-urlencoded
          bodyType: urlEncoded
          bodyParams:
            - name: grant_type
              value: client_credentials
            - name: client_id
              value: '${secrets.client_id}'
            - name: client_secret
              value: '${secrets.client_secret}'
        tokenPath: ".access_token"
        authInjection:
          in: header
          name: Authorization
          prefix: 'Bearer '
          suffix: ''
    withEnumerationPhase: true
    enumerationPhase:
      paginationType: offsetLimit
      limit: 100
      request:
        responseType: json
        method: GET
        url: ${parameters.domain}/alerts/queries/alerts/v2
        queryParams:
          - name: offset
            value: ${pagination.offset}
          - name: limit
            value: ${pagination.limit}
          - name: filter
            value: created_timestamp:>'${temporalWindow.from}'+created_timestamp:<'${temporalWindow.to}'
      output:
        select: ".resources"
        map: "."
        outputMode: collection
    collectionPhase:
      variables:
        - source: input
          name: resources
          expression: "."
          format: "json"
      paginationType: none
      request:
        method: POST
        url: ${parameters.domain}/alerts/entities/alerts/v2
        headers:
          - name: Accept
            value: application/json
          - name: Content-Type
            value: application/json
        responseType: json
        bodyType: raw
        bodyRaw: |
          {
            "composite_ids": ${inputs.resources}
          }
      output:
        select: ".resources"
        map: "."
        outputMode: element
            
    {
      "Info": "My credit card number is 4111-1111-1111-1111"
    }
    {
      "dlpFindings": {
        "findings": [
          {
            "infoType": "CREDIT_CARD_NUMBER",
            "likelihood": "VERY_LIKELY",
            "quote": "4111-1111-1111-1111"
          }
        ]
      }
    }
    # Read Value
    
    127.0.0.1:6379> GET key
    
    # Set Value
    
    SET key value

    Value - ${temporalWindow.to}

  • Name - offset

  • Value - ${pagination.offset}

  • Name - limit

  • Value - ${pagination.limit}

  • Value - ${temporalWindow.to}

  • Name - offset

  • Value - ${pagination.offset}

  • Name - limit

  • Value - ${pagination.limit}

  • Value - application/json

  • Name - Authorization

  • Value - ${secrets.CortexXdrAuthorization}

  • Name - x-xdr-auth-id

  • Value - ${secrets.CortexXdrAuthId}

  • Value - refresh_token

  • Name - refresh_token

  • Value - ${secrets.refresh_token}

  • Name - client_id

  • Value - ${secrets.client_id}

  • Name - client_secret

  • Value - ${secrets.client_secret}

  • Prefix - Bearer

  • Suffix - ''

  • Headers -

    • Name - Content-Type

    • Value - application/json

  • Body Type - raw

  • Body Raw - | { "cursor": "${pagination.cursor}" }

  • Output Mode - element

  • Name - Accept

  • Value - application/json

  • Name - client_id

  • Value - '${secrets.client_id}'

  • Name - client_secret

  • Value - '${secrets.client_secret}'

  • Prefix - Bearer

  • Suffix - ''

  • Name - end_date
  • Value - ${temporalWindow.to}

  • Name - offset

  • Value - ${pagination.offset}

  • Name - limit

  • Value - ${pagination.limit}

  • Output Mode - element

  • Value - ${temporalWindow.from}

    This is a dynamic value injected, representing the start time of the temporal window.

  • Name - createdAt_lte (less than or equal to).

  • Value - ${temporalWindow.to}, the end time of the temporal window.

  • Value - ApiToken ${secrets.sentinelOneApiToken}, where the dynamic variable is replaced with the value in the Secrets field entered above.

    Prefix - Bearer

  • Suffix - ''

  • Query Params -
    • Name - from_time

    • Value - ${temporalWindow.from}

    • Name - to_time

    • Value - ${temporalWindow.to}

    • Name - offset

    • Value - ${pagination.offset}

    • Name - limit

    • Value - ${pagination.limit}

    Output Mode - element

    Value - application/json

    Click Save.

    Learn more about secrets in Onum in this article.

    You can now select the secret you just created in the corresponding fields.

    Port - 8080

  • Authentication Type - Bearer

  • Bearer Token Secret - a-string-secret-at-least-256-bits-long. This is the value you enter into Onum as the secret.

  • Request path - localhost

  • When you Listen for the HTTP request, the token will be encoded (this example token was generated with https://jwt.io/): eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiYWRtaW4iOnRydWUsImlhdCI6MTUxNjIzOTAyMn0.KMUFsIDTnFmyG3nMiGM6H9FNFUROf3wh7SmqJp-QV30

    The entire request will look as follows: curl "http://localhost:8080/bearer" --header 'Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiYWRtaW4iOnRydWUsImlhdCI6MTUxNjIzOTAyMn0.KMUFsIDTnFmyG3nMiGM6H9FNFUROf3wh7SmqJp-QV30'
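    For context, the encoded token above is an HS256 JWT: two base64url-encoded JSON segments (header and payload) plus an HMAC-SHA256 signature computed with the shared secret. A minimal sketch using only the Python standard library (the payload values mirror the jwt.io example token; this is illustrative, not Onum code):

```python
import base64, hashlib, hmac, json

def b64url(data: bytes) -> str:
    # JWT uses unpadded base64url encoding.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt_hs256(payload: dict, secret: str) -> str:
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = ".".join(
        b64url(json.dumps(part, separators=(",", ":")).encode())
        for part in (header, payload)
    )
    sig = hmac.new(secret.encode(), signing_input.encode(), hashlib.sha256).digest()
    return f"{signing_input}.{b64url(sig)}"

token = sign_jwt_hs256(
    {"sub": "1234567890", "name": "John Doe", "admin": True, "iat": 1516239022},
    "a-string-secret-at-least-256-bits-long",
)
print(token)  # three dot-separated base64url segments
```

    This is why the secret must be at least 256 bits long: it is the HMAC key that protects the token from forgery.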

    API keys are usually stored in developer portals, cloud dashboards, or authentication settings. Choose the existing Secret, or create a new one if you haven't stored this key within Onum.

    Note that if you select this option, the HTTP Listener expects the API Key to be included in the URL, as a query parameter. For example:

    API keys are usually stored in developer portals, cloud dashboards, or authentication settings. Choose the existing Secret, or create a new one if you haven't stored this key within Onum.

    application/xml - The message body is formatted as XML.

  • text/html - The message body contains HTML.


    Select the required privacy protocol (DES or AES). Then, choose your Privacy Password* from your Secrets or click New secret to create a new one.

    URL* - ${parameters.domain}/oauth2/token
  • Headers

    • Name - Content-type

    • Value - application/x-www-form-urlencoded

  • BodyType* - UrlEncoded

    • Body params

      • Name - grant_type

      • Value - client_credentials

      • Name - client_id

      • Value - '${secrets.client_id}'

      • Name - client_secret

      • Value - '${secrets.client_secret}'

  • Token Path* - .access_token

  • Auth Injection

    • In* - header

    • Name* - authorization

    • Prefix - Bearer

    • Suffix - ''

  • Response Type* - JSON

  • Method* - GET

  • URL* - ${parameters.domain}/incidents/queries/incidents/v1

  • Query Params

    • Name - offset

    • Value - ${pagination.offset}

    • Name - limit

    • Value - ${pagination.limit}

    • Name - filter

    • Value - start:>='${temporalWindow.from}'+end:<'${temporalWindow.to}'

  • Output

    • Select - .resources

    • Map - .

    • Output Mode - collection

  • Expression - .
  • Format - json

  • Pagination Type* - none

  • Request

    • Method* - POST

    • URL* - ${parameters.domain}/incidents/entities/incidents/GET/v1

    • Headers -

      • Name - Accept

      • Value - application/json

      • Name - Content-Type

      • Value - application/json

  • Response Type - json

  • Body Type - raw

  • Body raw - | { "ids": ${inputs.resources} }

  • Output

    • Select - .resources

    • Map - .

    • Output Mode - element

  • URL* - ${parameters.domain}/oauth2/token
  • Headers

    • Name - Content-type

    • Value - application/x-www-form-urlencoded

  • BodyType* - UrlEncoded

    • Body params

      • Name - grant_type

      • Value - client_credentials

      • Name - client_id

      • Value - '${secrets.client_id}'

      • Name - client_secret

      • Value - '${secrets.client_secret}'

  • Token Path* - .access_token

  • Auth Injection

    • In* - header

    • Name* - authorization

    • Prefix - Bearer

    • Suffix - ''

  • Request

    • Response Type* - JSON

    • Method* - GET

    • URL* - ${parameters.domain}/alerts/queries/alerts/v2

    • Query Params

      • Name - offset

      • Value - ${pagination.offset}

      • Name - limit

      • Value - ${pagination.limit}
  • Output

    • Select - .resources

    • Map - .

    • Output Mode - collection

  • URL* - ${parameters.domain}/alerts/entities/alerts/v2

  • Headers -

    • Name - Accept

    • Value - application/json

    • Name - Content-Type

    • Value - application/json

  • Output

    • Select - .resources

    • Map - .

    • Output Mode - element

  • Exclude Info Types

    If true, excludes type information of the findings. The default value is false.

    Info Types*

    Type(s) of sensitive data to detect. You can choose as many types as needed.

    Data to Inspect*

    Choose the input field that contains the data to be inspected by the DLP API.

    JSON credentials*

    JSON object containing the credentials required to authenticate with the Google DLP API.

    Output Field*

    Name of the new field where the results of the DLP evaluation will be stored.

    Minimum Likelihood

    For each potential finding that is detected during the scan, the DLP API assigns a likelihood level. The likelihood level of a finding describes how likely it is that the finding matches an Info Type that you're scanning for. For example, it might assign a likelihood of Likely to a finding that looks like an email address.

    The API will filter out any findings that have a lower likelihood than the minimum level that you set here.

    The available values are:

    • Very Unlikely

    • Unlikely

    • Possible (This is the default value)

    • Likely

    • Very Likely

    For example, if you set the minimum likelihood to Possible, you get only the findings that were evaluated as Possible, Likely, and Very likely. If you set the minimum likelihood to Very likely, you get the smallest number of findings.
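    The threshold behaves as a simple ordered cutoff over the likelihood levels. A minimal sketch of that filtering logic (illustrative only, not the DLP API itself):

```python
# Likelihood levels in ascending order, as used by the DLP API.
LIKELIHOODS = ["VERY_UNLIKELY", "UNLIKELY", "POSSIBLE", "LIKELY", "VERY_LIKELY"]

def filter_findings(findings, minimum="POSSIBLE"):
    """Keep only findings whose likelihood is at or above the minimum."""
    threshold = LIKELIHOODS.index(minimum)
    return [f for f in findings if LIKELIHOODS.index(f["likelihood"]) >= threshold]

findings = [
    {"infoType": "EMAIL_ADDRESS", "likelihood": "UNLIKELY"},
    {"infoType": "CREDIT_CARD_NUMBER", "likelihood": "VERY_LIKELY"},
]
print(filter_findings(findings, minimum="POSSIBLE"))
# Only the CREDIT_CARD_NUMBER finding survives the POSSIBLE threshold.
```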

    Include Quote

    If true, includes a contextual quote from the data that triggered a finding. The default value is true.

    Choose the following info types:

    • Credit Card Number

    • Email Address

    • Password

    Data to Inspect

    Choose the input field that contains the data to be inspected by the DLP API.

    JSON credentials

    JSON object containing the credentials required to authenticate with the Google DLP API.

    Output Field

    Name of the new field where the results of the DLP evaluation will be stored.

    Minimum Likelihood

    We set the likelihood to Possible, as we want the right balance between recall and precision.

    Include Quote

    We want contextual info of the findings, so we set this to true.

    Exclude Info Types

    Set this to true, as we want to include type information of the findings.

    Offset - initial offset should be 0 (the latest alert).
  • Format - Epoch

  • Authentication Phase

    Toggle ON to configure the authentication phase. This is required to get the token to pull data using OAuth.

    • Type* - token

    • Request Method* - POST (we would need to generate the JWT using the secrets client_id and client_secret)

    • URL* - ${parameters.domain}/oauth2/token

    • Headers

      • Name - Content-type

      • Value - application/x-www-form-urlencoded

    • BodyType* - UrlEncoded

      • Body params

        • Name - grant_type

    • Token Path* - .access_token

    • Auth Injection

      • In* - header

      • Name* - authorization

    Enumeration Phase

    Toggle ON to configure the enumeration phase. This API endpoint requires an initial request that will provide a list of alert ids. In order to get the details about that information, it will require an additional request for those details.

    • Pagination Type* - None

    • Request

      • Response Type* - JSON

      • Method* - GET

      • URL* - ${parameters.domain}/sensors/entities/datafeed/v2

      • Query Params

        • Name - appId

        • Value - my-datafeed-argos-onum-001

    • Output

      • Select - .resources[0]

      • Map - {dataFeedURL, sessionToken: .sessionToken.token}

    Collection Phase

    • Variables

      • Source - input

      • Name - dataFeedURL

      • Expression - .dataFeedURL

      • Format - ''

      • Source - input

      • Name - sessionToken

      • Expression - .sessionToken

      • Format - ''

    • Pagination Type* - none

    • Request

      • Method* - GET

      • URL* - ${inputs.dataFeedURL}

    • Query Params

      • Name - appId

      • Value - my-datafeed-argos-onum-001

    • Output

      • Select - .

      • Map - .

      • Output Mode

    Choose which operations you want to use to define the new fields in your events:
    Operation
    Parameters

    Now

    • Now - Select true to create a new field with the current Epoch time in the selected time unit.

    • Now output field* - Give a name to the new field.

    • Now timezone* - Enter the required timezone (for example: UTC, America/New_York).

    Today

    • Today - Select true to create a new field with the Epoch time corresponding to the current day at 00:00:00h in the selected time unit.

    • Today output field* - Give a name to the new field.

    • Today timezone* - Enter the required timezone (for example: UTC).

    Yesterday

    • Yesterday - Select true to create a new field with the Epoch time corresponding to the previous day at 00:00:00h in the selected time unit.

    • Yesterday output field* - Give a name to the new field.

    • Yesterday timezone* - Enter the required timezone (for example: UTC).

    This Year

    • This Year - Select true to create a new field with the Epoch time corresponding to the first day of the current year at 00:00:00h in the selected time unit.

    • This Year output field* - Give a name to the new field.

    • This Year timezone* - Enter the required timezone (for example: UTC).

    This Month

    • This Month - Select true to create a new field with the Epoch time corresponding to the first day of the current month at 00:00:00h in the selected time unit.

    • This Month output field* - Give a name to the new field.

    • This Month timezone* - Enter the required timezone (for example: UTC).

    This Week

    • This Week - Select true to create a new field with the Epoch time corresponding to the first day of the current week at 00:00:00h in the selected time unit.

    • This Week output field* - Give a name to the new field.

    • This Week timezone* - Enter the required timezone (for example: UTC).
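    Under stated assumptions (output in seconds, UTC timezone), the values that the Now, Today and Yesterday operations produce can be reproduced with plain Python:

```python
from datetime import datetime, timedelta, timezone

def epoch_fields(now=None, tz=timezone.utc):
    """Compute the Epoch values (in seconds) that Now/Today/Yesterday produce."""
    now = now or datetime.now(tz)
    today = now.replace(hour=0, minute=0, second=0, microsecond=0)
    return {
        "Now": int(now.timestamp()),
        "Today": int(today.timestamp()),          # today at 00:00:00h
        "Yesterday": int((today - timedelta(days=1)).timestamp()),
    }

sample = datetime(2024, 5, 15, 10, 30, tzinfo=timezone.utc)
print(epoch_fields(sample))
```

    The same pattern extends to This Week, This Month, and This Year by truncating to the start of the corresponding period.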

    4

    Click Save to complete the process.

    Custom field

    • Allow custom field - Set it to true.

    • New custom field name - We're naming the new field Custom.

    • Custom field value - Enter Test.

    3

    Leave the rest of the parameters as default and click Save to apply the configuration.

    4

    Now link the Default output port of the Action to the input port of your Data sink.

    5

    Finally, click Publish and choose in which clusters you want to publish the Pipeline.

    6

    Click Test pipeline at the top of the area and choose a specific number of events to test if your data is transformed properly. Click Debug to proceed.

    Now

    • Now - Set it to true.

    • Now output field - We're naming the new field Now.

    • Now time unit - Choose seconds.

    Cog

    Execute ML models via hosted APIs.

    Classify log severity with ML.

    Conditional

    Drop or allow events based on logic.

    Filter out successful health check logs.

    Field Generator

    Add generated fields (timestamp, random, static...)

    Tag events with trace ID and pipeline time.

    Field Transformation

    Apply math, encoding, parsing, or string operations to fields.

    Hash IPs, defang URLs, convert timestamps.

    For Each

    Iterate array fields and emit per-item events.

    Split DNS records into individual log lines.

    Google DLP

    Redact sensitive data via Google API.

    Remove SSNs, emails from customer logs.

    Google GenAI

    Use Google’s LLM to enrich log content.

    Summarize error logs for dashboards.

    Group By

    Aggregate by key(s) over a time window.

    Count logins per user every minute.

    HTTP Request

    Trigger external HTTP(S) calls inline.

    Notify PagerDuty, call enrichment APIs.

    JSON Transformation

    Remap or rename JSON fields and structure.

    Standardize custom app logs to a shared schema.

    JSON Unroll

    Convert arrays into individual events.

    Split one event with 5 IPs into 5 separate events.

    Llama

    Apply open-source LLMs to event text.

    Translate or tag non-English log data.

    Lookup

    Add fields from a reference table.

    Add business unit or geolocation to IPs.

    Math Expression

    Compute values using event fields.

    Calculate duration = end_time - start_time.

    Message Builder

    Compose structured output for downstream tools.

    Create Slack-friendly JSON alerts.

    OCSF

    Convert events to Open Cybersecurity Schema.

    Standardize endpoint data for SIEM ingestion.

    Parser

    Parse text using regex or pattern to extract fields.

    Convert syslog strings into structured events.

    Redis

    Use Redis for state lookups or caching.

    Limit login attempts per user per hour.

    Replicate

    Run any hosted model from Replicate.

    Enrich logs using anomaly detection models.

    Sampling

    Randomly pass only a portion of events.

    Keep 10% of debug logs for cost control.

    Sigma Rules

    Match events against threat rule patterns.

    Detect C2 activity or abnormal auth behavior.

    Unique

    Emit only first-seen values.

    Alert on first-time-seen device IDs or IPs.

    Amazon GenAI

    Use models hosted on Amazon Bedrock to enrich log content.

    Enrich logs by extracting insights like key entities.

    Anonymizer

    Mask, hash, or redact sensitive fields.

    Obfuscate usernames or IPs in real-time.

    BLIP-2

    Extract text from images or diagrams.

    OCR screenshots of phishing sites.

    Bring Your Own Code

    Run custom Python in an isolated container.


    NLP on messages, custom alert logic.

    Overview

    Onum supports integration with Azure Event Hubs

    The Azure Event Hubs Listener receives messages from an Azure Event Hub for real-time data streaming, providing support for message batching, retries, and secure connection options.

    Prerequisites

    In order to use this Listener, you must activate the environment variable AZURE_EVENTHUB_LISTENER_EXECUTION_ENABLED in your distributor using Docker Compose.

    Azure Event Hubs Setup

    There are various management credentials that Onum needs to communicate with the event hub.

    • Event Hubs namespace

    • Event hub

    See the Azure Event Hubs documentation for how to create these.

    Onum Setup

    1

    Log in to your Onum tenant and click Listeners > New listener.

    2

    Double-click the Azure Event Hubs Listener.

    3

    Enter a Name for the new Listener. Optionally, add a Description and some Tags to identify the Listener.

    4

    Establish the Event Hub Connection

    • Enter the Event Hub Namespace* to connect to (e.g. mynamespace.servicebus.windows.net). You can find this in the top left-hand corner of your Azure area.

    • In your Azure console, click your Event Hubs namespace to view the Hubs it contains in the middle pane and enter it in the Event Hub Name* field.

    5

    In the Authentication section, choose between Connection String and Entra ID as the Authentication Type.

    • Connection String

      • Connection String* The URL for your Event Hub. To get it:

    6

    You can now select the secret you just created in the corresponding fields.

    7

    Checkpointing & Processor

    When multiple consumer instances read from the same Event Hub and consumer group, a cooperative processor coordinates partition ownership and progress using a checkpoint store (Azure Blob Storage).

    • Ensures at-least-once processing without duplicates when instances restart: committed checkpoints allow new owners to resume from the last processed offset instead of re-reading the whole partition.

    • Evenly distributes partitions across active instances (load balancing): with the balanced strategy, ownership is redistributed as instances join or leave.
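    The resume behaviour can be pictured with a toy checkpoint store (illustrative only, not the Azure SDK): a restarted owner continues from the last committed offset rather than re-reading the partition.

```python
# Toy checkpoint store: maps partition id -> next offset to read.
checkpoints = {}

def process(partition, events, start_offset=0):
    """Resume from the committed checkpoint, process, then commit."""
    resume_from = checkpoints.get(partition, start_offset)
    processed = [e for off, e in events if off >= resume_from]
    if events:
        checkpoints[partition] = events[-1][0] + 1  # commit next offset
    return processed

process("0", [(0, "a"), (1, "b"), (2, "c")])  # first owner reads everything
print(process("0", [(3, "d")]))               # a new owner resumes at offset 3
```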

    8

    Enter the Storage Container Name*. In the left-hand menu, scroll down to Resource groups, where you will see a list of all the storage containers within your Event Hub. In Onum, enter the name of the blob container used to persist checkpoints and ownership.

    9

    The Storage Connection String parameter is a secret, therefore you must add this string in the Secrets area, or select it from the list if you have already done so. In the Azure portal, you can find it under your storage account's Access keys section.

    10

    Then, configure the Processor Options.

    • Load Balancing Strategy

      Choose how to distribute the work evenly across the servers to avoid overload.

      • Balanced - distributes load evenly across all servers.

    11

    Decide whether to Use batch settings.

    When false, the handler processes events one-by-one using internal defaults (maxBatchSize=1, maxWaitTimeMs=500). When true, batch processing settings apply.

    • Max Batch Size* Enter the maximum bytes for the batch.
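    The size-or-timeout batching described above can be sketched as follows (illustrative; the parameter names mirror the maxBatchSize/maxWaitTimeMs defaults mentioned, and are not Onum internals):

```python
import time

def batch_events(source, max_batch_size=1, max_wait_ms=500, clock=time.monotonic):
    """Group events into batches, flushing on size or on timeout."""
    batch, deadline = [], clock() + max_wait_ms / 1000
    for event in source:
        batch.append(event)
        if len(batch) >= max_batch_size or clock() >= deadline:
            yield batch
            batch, deadline = [], clock() + max_wait_ms / 1000
    if batch:
        yield batch  # flush whatever is left at the end

print(list(batch_events(["e1", "e2", "e3"], max_batch_size=2)))
# → [['e1', 'e2'], ['e3']]
```

    With the defaults (max_batch_size=1), every event is flushed immediately, which matches the one-by-one behaviour when batch settings are disabled.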

    12

    The Start Position defines where to begin reading the event stream.

    • Latest (End of Stream)

      • Onum begins reading from the next event that is enqueued after Onum starts. It skips all existing events currently in the partition.

    13

    Add the Backoff Settings regarding how long to wait before retrying a request after failure.

    • Error Backoff (ms) Enter the amount of milliseconds to wait after an error before retrying.

    • Idle Backoff (ms) Enter the amount of milliseconds to wait before trying again to send a request.
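    The two values apply to different situations: error backoff after a failed request, idle backoff when a request succeeded but returned nothing new. A rough sketch of the polling loop (hypothetical helper names, not Onum code):

```python
import time

def poll_loop(fetch, handle, error_backoff_ms=1000, idle_backoff_ms=500, max_iterations=3):
    """Poll `fetch`, handing events to `handle`, backing off on errors or idle reads."""
    for _ in range(max_iterations):
        try:
            events = fetch()
        except Exception:
            time.sleep(error_backoff_ms / 1000)  # wait longer after a failure
            continue
        if not events:
            time.sleep(idle_backoff_ms / 1000)   # brief pause when nothing new arrived
            continue
        handle(events)

received = []
pages = iter([[], ["evt-1"]])
poll_loop(lambda: next(pages), received.extend,
          error_backoff_ms=1, idle_backoff_ms=1, max_iterations=2)
print(received)  # → ['evt-1']
```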

    14

    Choose the Decompression method used to restore a compressed message to its original form before being processed (none, gzip or zlib)

    15

    Choose the Split Strategy method of dividing the data or requests from the following delimiter options:

    • None to ignore

    • Newline

    • JSON array
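    To illustrate the difference between the Newline and JSON array strategies, a small sketch (assumed semantics, not Onum's implementation):

```python
import json

def split_payload(payload: str, strategy: str = "none"):
    """Split one raw payload into individual events per the chosen strategy."""
    if strategy == "newline":
        return [line for line in payload.splitlines() if line]
    if strategy == "jsonArray":
        return [json.dumps(item) for item in json.loads(payload)]
    return [payload]  # "none": forward the payload untouched

print(split_payload("a\nb\n", strategy="newline"))      # → ['a', 'b']
print(split_payload('[{"x":1},{"x":2}]', "jsonArray"))  # → ['{"x": 1}', '{"x": 2}']
```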

    16

    Finally, click Create labels. Optionally, you can set labels to be used for internal Onum routing of data. By default, data will be set as Unlabeled.

    Learn more about labels in this article.

    Click Create listener when you're done.

    The maximum result set size is 100.
  • Offset is the zero-based number of incidents from the start of the result set.
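    Offset/limit paging can be sketched as follows (the fetch_page function stands in for the real API call):

```python
def paginate(fetch_page, limit=100):
    """Pull every result using zero-based offset/limit paging."""
    offset, results = 0, []
    while True:
        page = fetch_page(offset=offset, limit=limit)
        results.extend(page)
        if len(page) < limit:   # a short page means we reached the end
            break
        offset += limit
    return results

data = list(range(250))
fetch = lambda offset, limit: data[offset:offset + limit]
print(len(paginate(fetch, limit=100)))  # → 250
```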

  • Configuration

    Parameters

    Name - domain

    Value - CortexXdrDomain

    Secrets

    • CortexXdrAuthorization will reference the Cortex XDR Authorization token.

    • CortexXdrAuthId will reference the Cortex XDR Authorization ID.

    To add a Secret, open the Secret fields and click New secret:

    • Give the secret a Name.

    • Turn off the Expiration date option.

    • Click Add new value and paste the secret corresponding to the value.

    • Click Save.

    Learn more about secrets in Onum in this article.

    You can now select the secret you just created in the corresponding fields.

    After entering the required parameters and secrets, you can choose to manually enter the Cortex incident Management fields, or simply paste the given YAML:

    Toggle this ON to enable a free text field where you can paste your Cortex XDR multi alerts YAML.

    Temporal Window

    Toggle ON to add a temporal window for events. This repeatedly shifts the time window over which data is collected.
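    Each collection run covers a window [from, to), where to lags behind the current time by the configured offset. A sketch of the computation (assumed semantics: Epoch format, UTC, 5-minute duration and offset):

```python
from datetime import datetime, timedelta, timezone

def temporal_window(now, duration_min=5, offset_min=5):
    """Return the (from, to) Epoch bounds of the current collection window."""
    to = now - timedelta(minutes=offset_min)     # stay behind real time
    frm = to - timedelta(minutes=duration_min)   # window length = duration
    return int(frm.timestamp()), int(to.timestamp())

now = datetime(2024, 5, 15, 12, 10, tzinfo=timezone.utc)
frm, to = temporal_window(now)
print(to - frm)  # → 300 (a 5-minute window, in seconds)
```

    On the next run the window shifts forward by the duration, so consecutive windows tile the timeline without gaps.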

    Authentication Phase

    Off

    Enumeration Phase

    • Pagination Type* - fromTo

    • Zero index* - false

    • Limit* - 100

    Output

    • Select - .reply.incidents

    • Map - .

    • Output Mode - element

    Collection Phase

    • Source - input

    • Name - incident_id

    • Expression - .incident_id

    This HTTP Pull Listener now uses the data export API to extract events.

    Click Create labels to move on to the next step and define the required Labels if needed.

    In order to configure this action, you must first link it to a Listener. Go to Building a Pipeline to learn how to link.

    Ports

    These are the input and output ports of this Action:

    Input ports
    • Default port - All the events to be processed by this Action enter through this port.

    Output ports
    • Default port - Events are sent through this port if no error occurs while processing them.

    • Error port - Events are sent through this port if an error occurs while processing them.

    Configuration

    1

    Find HTTP Request in the Actions tab (under the Advanced group) and drag it onto the canvas. Link it to the required Listener and Data sink.

    2

    To open the configuration, click the Action in the canvas and select Configuration.

    3

    Enter the required parameters:

    Parameter
    Description

    Authentication Configuration

    Choose the type of authentication for the request.

    Parameter
    Description

    Authentication Credentials

    Depending on the option you chose above, you must enter the required authentication information in this section:

    Parameter
    Description

    Bulk Configuration

    Parameter
    Description

    Rate Limiter Configuration

    Establish a limit for the number of HTTP requests permitted per second.

    Parameter
    Description
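    As a mental model, a per-second limiter can be pictured as a token bucket (illustrative only; Onum's internal implementation may differ):

```python
class RateLimiter:
    """Token bucket allowing up to `rate` requests per second."""
    def __init__(self, rate, now=0.0):
        self.rate, self.tokens, self.last = rate, float(rate), now

    def allow(self, now):
        # Refill proportionally to elapsed time, capped at the bucket size.
        self.tokens = min(self.rate, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

rl = RateLimiter(rate=2)
print([rl.allow(t) for t in (0.0, 0.1, 0.2, 1.2)])  # → [True, True, False, True]
```

    Requests arriving faster than the configured rate are rejected (or queued) until the bucket refills.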

    TLS Configuration

    Parameter
    Description

    Proxy Configuration

    If your organization uses proxy servers, set it using these options:

    Parameter
    Description

    Retry Configuration

    Set how you want to manage retry attempts in case of errors in the requests:

    Parameter
    Description
    4

    Click Save to complete.

    Example

    Click Save to complete.

    Click Save.

    Learn more about secrets in Onum in this article.

    You can now select the secret you just created in the corresponding fields.


    Listeners

    Learn about how to set up and use Listeners

    Actions

    Discover Actions to manage and customize your data

    Datasinks

    Add the final piece of the puzzle for simpler data


    Collect data from Sophos

    Overview

    Get SIEM Integration events from Sophos.

    Configuration

    Secrets

    • secrets.Sophos.client_ID will reference the Client ID

    • secrets.Sophos_Client_Secret will reference the Client Secret.

    To add a Secret, open the Secret fields and click New secret:

    • Give the secret a Name.

    • Turn off the Expiration date option.

    • Click Add new value and paste the secret corresponding to the value.

    • Click Save.

    Learn more about secrets in Onum in this article.

    You can now select the secret you just created in the corresponding fields.

    After entering the required secrets, you can choose to manually enter the Sophos SIEM integration event fields, or simply paste the given YAML:

    Toggle this ON to enable a free text field where you can paste your Sophos SIEM integration YAML.

    Temporal Window

    Toggle ON to add a temporal window for events. This repeatedly shifts the time window over which data is collected.

    • Duration - 5 minutes (5m) as default, adjust based on your needs.

    This HTTP Pull Listener now uses the data export API to extract events.

    Click Create labels to move on to the next step and define the required Labels if needed.

    Collect data from Amazon S3

    Most recent version: v2.1.0

    See the changelog of the Amazon S3 Listener here.

    The Amazon S3 Listener is a Pull Listener and therefore should not be used in environments with more than one cluster.

    Event

    Overview

    Get a list of all or filtered events.

    Configuration

    curl --location 'http://customer.in.prod.onum.com:2250/test?My-Token=1234567890qwerty' \
    --header 'Content-Type: application/json' \
    --data '{"message": "hello, how are you doing? :)"}'
    withTemporalWindow: true
    temporalWindow:
      duration: 30m
      offset: 0
      tz: UTC
      format: Epoch
    withAuthentication: true
    authentication:
      type: token
      token:
        request:
          method: POST
          url: ${parameters.domain}/oauth2/token
          headers:
            - name: Content-Type
              value: application/x-www-form-urlencoded
          bodyType: urlEncoded
          bodyParams:
            - name: grant_type
              value: client_credentials
            - name: client_id
              value: '${secrets.client_id}'
            - name: client_secret
              value: '${secrets.client_secret}'
        tokenPath: ".access_token"
        authInjection:
          in: header
          name: Authorization
          prefix: 'Bearer '
          suffix: ''
    withEnumerationPhase: true
    enumerationPhase:
      paginationType: none
      request:
        responseType: json
        method: GET
        url: ${parameters.domain}/sensors/entities/datafeed/v2
        queryParams:
          - name: appId
            value: my-datafeed-argos-onum-001
      output:
        select: ".resources[0]"
        map: "{dataFeedURL, sessionToken: .sessionToken.token}"
        outputMode: element
    collectionPhase:
      variables:
        - source: input
          name: dataFeedURL
          expression: ".dataFeedURL"
          format: ''
        - source: input
          name: sessionToken
          expression: ".sessionToken"
          format: ''
      paginationType: none
      request:
        method: GET
        url: "${inputs.dataFeedURL}"
        headers:
          - name: Accept
            value: application/json
          - name: Authorization
            value: "Token ${inputs.sessionToken}"
        queryParams:
          - name: appId
            value: my-datafeed-argos-onum-001
          - name: whence
            value: 2
        responseType: ndjson
      output:
        select: "."
        map: "."
        outputMode: element
    withTemporalWindow: true
    temporalWindow:
      duration: 5m
      offset: 5m
      tz: UTC
      format: Epoch
    withAuthentication: false
    withEnumerationPhase: true
    enumerationPhase:
      paginationType: "fromTo"
      limit: 100
      request:
        responseType: json
        method: "POST"
        url: "https://${parameters.CortexXdrDomain}/public_api/v1/incidents/get_incidents"
        headers:
          - name: Accept
            value: "application/json"
          - name: Content-Type
            value: "application/json"
          - name: Authorization
            value: "${secrets.CortexXdrAuthorization}"
          - name: x-xdr-auth-id
            value: ${secrets.CortexXdrAuthId}
        bodyType: raw
        bodyRaw: |
          {
            "request_data": {
              "search_from": ${pagination.from},
              "search_to": ${pagination.to},
              "filters": [
                {
                  "field": "creation_time",
                  "operator": "gte",
                  "value": ${temporalWindow.from}000
                },
                {
                  "field": "creation_time",
                  "operator": "lte",
                  "value": ${temporalWindow.to}000
                }
              ]
            }
          }
      output:
        select: '.reply.incidents'
        map: "."
        outputMode: element
    collectionPhase:
      variables:
        - source: input
          name: incident_id
          expression: ".incident_id"
          default: "0"
      paginationType: none
      request:
        responseType: json
        method: "POST"
        url: "https://${parameters.CortexXdrDomain}/public_api/v1/incidents/get_incident_extra_data"
        headers:
          - name: Accept
            value: "application/json"
          - name: Content-Type
            value: "application/json"
          - name: Authorization
            value: "${secrets.CortexXdrAuthorization}"
          - name: x-xdr-auth-id
            value: ${secrets.CortexXdrAuthId}
        bodyType: raw
        bodyRaw: |
          {
              "request_data":{
                  "incident_id":"${inputs.incident_id}",
                  "alerts_limit":100
              }
          }
      output:
        select: ".reply"
        map: "."
        outputMode: "element"
    {
        "payloadField": "correlationIDKey",
        "outField": "outputField",
        "serverUrl": "http://localhost:8080/${path_from_event}?${impactKey}=${correlationIDKey}",
        "method": "POST",
        "authentication": {
            "authType": "apiKey",
            "credentials": {
                "apiKey": {
                    "apiKeyName": "x-api-key",
                    "apiKeyValue": {
                        "id": "apiKey",
                        "value": "ad1dewfwef2321323"
                    }
                }
            }
        }
    }

    Value - application/json

    Value - ${pagination.limit}

  • Name - filter

  • Value - created_timestamp:>'${temporalWindow.from}'+created_timestamp:<'${temporalWindow.to}'

  • Value - client_credentials

  • Name - client_id

  • Value - '${secrets.client_id}'

  • Name - client_secret

  • Value - '${secrets.client_secret}'

  • Prefix - Bearer

  • Suffix - ''

  • Output Mode - element

    Headers -

    • Name - Accept

    • Value - application/json

    • Name - Authorization

    • Value - Token ${inputs.sessionToken}

    Name - whence

  • Value - 2

  • Output Mode - element
  • In the left-hand menu, scroll down to Entities and click Consumer groups to see the names. This value is $Default when empty.

  • Click your Event Hubs namespace to view the Hubs it contains.

  • Scroll down to the bottom and click the specific event hub to connect to.

  • In the left menu, go to Shared Access Policies.

  • If there is no policy created for an event hub, create one with Manage, Send, or Listen access.

  • Select the policy from the list.

  • Select the copy button next to the Connection string-primary key field.

  • Depending on the version of Azure you are using, the corresponding field may have a different name. To help you find it, look for a string with this format: Endpoint=sb://&lt;namespace&gt;.servicebus.windows.net/;SharedAccessKeyName=&lt;policy-name&gt;;SharedAccessKey=&lt;key&gt;

  • Entra ID - enter the following credentials from the Certificates & Secrets area

    • Tenant ID*

    • Client ID*

    • Client Secret*

  • Open the Secret fields and click New secret to create a new one:

    • Give the token a Name.

    • Turn off the Expiration date option.

    • Click Add new value and paste the secret corresponding to the JWT token you generated before. Remember that the token will be added in the Zscaler configuration.

    • Click Save.

    Learn more about secrets in Onum in this article.

    greedy - tries to acquire as many partitions as possible.
  • Enables safe horizontal scaling: adding instances increases throughput by processing multiple partitions in parallel.

  • Learn more in the Azure Event Hubs documentation:

    • Checkpointing overview

    • Tutorial on checkpoints and rewinding

    It is recommended you only choose false if you are sure that your worker only uses a single distributor. If you suspect your worker is connected to more than one distributor, select true and configure the fields below.

    Greedy - assigns each new task immediately to the currently least-loaded server.
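As a rough illustration of the greedy strategy described above, each new task simply goes to whichever server currently has the least load. The names below (assign_task, loads) are hypothetical, not part of the Onum API:

```python
# Illustrative sketch of the "greedy" strategy: each new task is assigned
# to the currently least-loaded server. Not Onum's actual implementation.
def assign_task(loads: dict) -> str:
    """Return the id of the least-loaded server and bump its load."""
    target = min(loads, key=loads.get)
    loads[target] += 1
    return target

loads = {"srv-a": 3, "srv-b": 1, "srv-c": 2}
assert assign_task(loads) == "srv-b"  # srv-b had the least load
assert loads["srv-b"] == 2
```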

  • Update Interval (ms) - How often a processor renews partition ownership; defaults to 10000 ms if unset.

  • Partition Expiration Duration (ms) - Enter a time limit in milliseconds, after which the partition claim is considered expired and can be claimed by other instances.

  • Max Wait Time* - Enter the maximum number of milliseconds to wait before considering the batch complete.

  • Earliest (Start of Stream)

    • Onum begins reading from the very first event currently retained in the partition. Events are only available up to the Event Hub's data retention period (e.g., 1 to 7 days for Standard, up to 90 days for Premium/Dedicated). You cannot read events older than the retention limit.

  • Sequence Number

    • Onum begins listening at a specific event identified by its unique, increasing sequence number within that partition.

      • This will show a new field where you can enter the Sequence Number*, and an Inclusive* drop-down where true includes this value and false excludes it.

  • Minutes (FromEnqueuedTime)

    • Onum begins listening from the first event that was enqueued on or after a specified UTC date/time.

      • This will show a new field where you can enter the Minutes Ago*, and an Inclusive* drop-down where true includes this value and false excludes it.

  • If you have configured Checkpoint & Processing options, the Start Position only applies the first time you run the Listener. From then on, the checkpoint is used.

  • JSON object

  • Custom Delimiter

    • Custom Delimiter - enter your custom delimiter here.

  • Secrets

    Request

    • Response Type* - JSON

    • Method* - POST

    • URL* - https://${parameters.CortexXdrDomain}/public_api/v1/incidents/get_incidents

    • Headers

      • Name - Accept

        • Value - application/json

  • Body type* - raw

  • Body content* - { "request_data": { "search_from": ${pagination.from}, "search_to": ${pagination.to}, "filters": [ { "field": "creation_time", "operator": "gte", "value": ${temporalWindow.from}000 }, { "field": "creation_time", "operator": "lte", "value": ${temporalWindow.to}000 } ] } }

  • Format - JSON
  • Pagination Type* - None

  • Request

    • Response Type* - JSON

    • Method* - POST

    • URL* - https://${parameters.CortexXdrDomain}/public_api/v1/incidents/get_incident_extra_data

    • Headers

      • Name - Accept

      • Value - application/json

      • Name - Content-Type

    • Body type* - raw

    • Body content* - { "request_data":{ "incident_id":"${inputs.incident_id}", "alerts_limit":100 } }

  • Output

    • Select - .reply

  • Map - .

  • Output Mode - element
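For readability, the one-line Body content of the get_incidents request above expands to the template below. The ${...} placeholders are substituted before the request is sent; the trailing 000 turns the epoch-seconds window values into the millisecond timestamps the creation_time filter expects:

```json
{
  "request_data": {
    "search_from": ${pagination.from},
    "search_to": ${pagination.to},
    "filters": [
      { "field": "creation_time", "operator": "gte", "value": ${temporalWindow.from}000 },
      { "field": "creation_time", "operator": "lte", "value": ${temporalWindow.to}000 }
    ]
  }
}
```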

  • Disable Redirects

    Select true to disable HTTP redirects or false to follow them.

    Content-Type

    Set the request content-type:

    • text/plain - Plain text with no formatting.

    • application/json - Data in JSON format. This is the default value.

    • application/xml - Data in XML format.

    HTTP Method*

    The HTTP method for the request. Choose between GET, POST, PUT, DELETE, or PATCH.

    Server URL*

    The target URL for the HTTP request.

    Field that holds the request body

    Enter the name of the field that includes the request body.

    Field where the response will be stored

    Enter the name of the field that will store the HTTP response.

    HTTP Headers

    Optionally, you can enter a map of header key-value pairs to include in the request.

    Timeout (seconds)

    Enter the timeout for the HTTP request in seconds.

    Authentication Type*

    Choose between None, Basic, Bearer, or API Key.

    Basic Authentication

    Username and Password for basic authentication. For the password, choose one of the secrets defined in your Tenant or create a new one by clicking New secret. Learn more about secrets in this section.

    Bearer Token

    Token for Bearer authentication. Choose one of the secrets defined in your Tenant or create a new one by clicking New secret. Learn more about secrets in this section.

    API Key

    Define the API Key Name and API Key for API Key configuration. For the API key, choose one of the secrets defined in your Tenant or create a new one by clicking New secret. Learn more about secrets in this section.

    Bulk allow*

    Set this to true and configure the options below if you want to set bulk sending in your HTTP requests. Otherwise, set it to false.

    Store as*

    Decide how to store events in your responses. Choose between:

    • Delimited - Events in a batch are stored separated by a delimiter. Set the required delimiter in the option below. The default option is newline (\n).

    • Without Delimiter - Events are concatenated without any separator.

    • JSON Array - Events are structured in a JSON array.
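The three storage modes above can be sketched as follows; this is an illustrative example, not Onum's internal batching code:

```python
# Sketch of the three "Store as" modes for a batch of three JSON events.
import json

events = ['{"id":1}', '{"id":2}', '{"id":3}']

delimited = "\n".join(events)                             # Delimited (default "\n")
concatenated = "".join(events)                            # Without Delimiter
json_array = json.dumps([json.loads(e) for e in events])  # JSON Array

assert delimited.count("\n") == 2
assert concatenated == '{"id":1}{"id":2}{"id":3}'
assert json.loads(json_array) == [{"id": 1}, {"id": 2}, {"id": 3}]
```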

    Events per batch*

    Set the number of individual events per bulk request.

    Maximum number of buffers per server URL

    Set the maximum number of buffers per server URL. The default value is 25, and the maximum value is 50.

    Event time limit

    Time in seconds to send the events.

    Number of requests per second

    Enter the maximum number of requests that can be sent per second. The minimum is 1.

    Allow TLS configuration*

    Set this option to true if you need to configure the TLS config of the Data sink. Otherwise, set it to false.

    Certificate*

    Choose the predefined TLS certificate.

    Private Key*

    The private key of the corresponding certificate.

    CA Chain*

    The path containing the CA certificates.

    Minimum TLS version*

    Minimum TLS version required for incoming connections. The default version is v1.2.

    URL

    Enter the required proxy URL.

    Username

    Enter the username used in the proxy.

    Password

    Enter the password used in the proxy.

    Max attempts

    Set the maximum number of attempts before returning an error. The minimum value is 1.

    Wait between attempts

    Choose the milliseconds to wait between attempts in case of an error. The minimum value is 100.

    Backoff interval

    Define how the wait time should increase between attempts, in seconds. The minimum value is 1.
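One plausible reading of how Max attempts, Wait between attempts, and Backoff interval interact is sketched below; the function and parameter names are illustrative, not Onum's actual retry implementation:

```python
# Hedged sketch: retry a send up to max_attempts times, waiting a base
# interval plus a linearly growing backoff after each failure.
import time

def send_with_retry(send, max_attempts=3, wait_ms=100, backoff_s=1):
    last_exc = None
    for attempt in range(max_attempts):
        try:
            return send()
        except Exception as exc:
            last_exc = exc
            # base wait plus a backoff that grows with each failed attempt
            time.sleep(wait_ms / 1000 + attempt * backoff_s)
    raise last_exc

calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

assert send_with_retry(flaky, max_attempts=3, wait_ms=1, backoff_s=0) == "ok"
assert calls["n"] == 3
```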

  • Offset - 5m
  • Format - Epoch

  • Authentication Phase

    Toggle ON to set the Authentication settings.

    • Type* - token

    • Request Method* - POST

    • URL* - https://id.sophos.com/api/v2/oauth2/token

    • Headers

      • Name - Content-type

      • Value - application/x-www-form-urlencoded

    • BodyType* - UrlEncoded

      • Body params

        • Name - grant_type

    • Token Path* - .access_token

    • Auth Injection

      • In* - header

      • Name* - authorization

    Enumeration Phase

    Toggle ON to configure the enumeration phase. This API endpoint requires an initial request that returns a list of alert IDs; an additional request is then needed to retrieve the details for each.

    • Pagination Type* - none

    • Request

      • Response Type* - JSON

      • Method* - GET

      • URL* - https://api.central.sophos.com/whoami/v1

      • Headers

        • Name - Accept

        • Value - application/json

        • Name - Accept-Encoding

    • Output

      • Select - .

      • Filter - .

    Collection Phase

    • Inputs

      • Source - input

      • Name - tenantId

      • Expression - .id

      • Format - ''

      • Source - input

      • Name - dataRegionURL

      • Expression - .apiHosts.dataRegion

      • Format - ''

    • Pagination Type* - cursor

    • Cursor Selector* - .next_cursor

    • Initial Request

      • Method* - GET

      • URL* - ${inputs.dataRegionURL}/siem/v1/events

    • Next Request

      • Method* - GET

      • URL* - ${inputs.dataRegionURL}/siem/v1/events

    • Output

      • Select - .items

      • Filter - .

  • Timezone - Enter the required timezone (Europe/London...). The default value is UTC.
  • Output type* - Choose the type of your output dates:

    • Unix timestamp - This is the default value. Choose the required time unit in the Now time unit* parameter that appears. The available time units are nanoseconds, microseconds, milliseconds & seconds.

    • Custom format - Enter a specific time format for your output dates in the Custom format* field that appears. For the complete valid format reference, see the .
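The two output types above can be illustrated with a fixed date; this is a generic Python sketch, not the Action's internals:

```python
# Example: the same instant as a Unix timestamp (in two units) and as a
# custom-formatted string.
from datetime import datetime, timezone

dt = datetime(2024, 1, 15, 12, 0, 0, tzinfo=timezone.utc)

epoch_seconds = int(dt.timestamp())        # Unix timestamp, seconds
epoch_millis = epoch_seconds * 1000        # Unix timestamp, milliseconds
custom = dt.strftime("%Y-%m-%d %H:%M:%S")  # Custom format

assert epoch_seconds == 1705320000
assert custom == "2024-01-15 12:00:00"
```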

  • Timezone - Enter the required timezone (America/New_York, Europe/London...). The default value is UTC.
  • Output type* - Choose the type of your output dates:

    • Unix timestamp - This is the default value. Choose the required time unit in the Today time unit* parameter that appears. The available time units are nanoseconds, microseconds, milliseconds & seconds.

    • Custom format - Enter a specific time format for your output dates in the Custom format* field that appears. For the complete valid format reference, see the .

  • Timezone - Enter the required timezone (America/New_York, Europe/London...). The default value is UTC.
  • Output type* - Choose the type of your output dates:

    • Unix timestamp - This is the default value. Choose the required time unit in the Yesterday time unit* parameter that appears. The available time units are nanoseconds, microseconds, milliseconds & seconds.

    • Custom format - Enter a specific time format for your output dates in the Custom format* field that appears. For the complete valid format reference, see the .

  • Timezone - Enter the required timezone (America/New_York, Europe/London...). The default value is UTC.
  • Output type* - Choose the type of your output dates:

    • Unix timestamp - This is the default value. Choose the required time unit in the This Year time unit* parameter that appears. The available time units are nanoseconds, microseconds, milliseconds & seconds.

    • Custom format - Enter a specific time format for your output dates in the Custom format* field that appears. For the complete valid format reference, see the .

  • Timezone - Enter the required timezone (America/New_York, Europe/London...). The default value is UTC.
  • Output type* - Choose the type of your output dates:

    • Unix timestamp - This is the default value. Choose the required time unit in the This Month time unit* parameter that appears. The available time units are nanoseconds, microseconds, milliseconds & seconds.

    • Custom format - Enter a specific time format for your output dates in the Custom format* field that appears. For the complete valid format reference, see the .

  • Timezone - Enter the required timezone (America/New_York, Europe/London...). The default value is UTC.
  • Output type* - Choose the type of your output dates:

    • Unix timestamp - This is the default value. Choose the required time unit in the This Week time unit* parameter that appears. The available time units are nanoseconds, microseconds, milliseconds & seconds.

    • Custom format - Enter a specific time format for your output dates in the Custom format* field that appears. For the complete valid format reference, see the .

  • Custom field data type - Choose string.

    Random number

    • Random number - Select true to create a new field with a random value.

    • Random output field* - Give a name to the new field.

    UUID v4

    Select true to enable and enter a name for the UUID v4 output field*.

    Custom field

    • Allow custom field - Select true to create a new field with a custom value.

    • New custom field name* - Give a name to the new field.

    • Custom field value* - Set the value you want to add in the new field.

    • Custom field data type* - Choose the data type of the new field between integer, boolean, float or string.
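The three generators above can be sketched as plain Python; the field names here are hypothetical examples, not the Action's defaults:

```python
# Sketch of the Field Generator options: random number, UUID v4, and a
# custom field with a fixed string value.
import random
import uuid

event = {"message": "hello"}
event["random_value"] = random.random()  # Random number
event["event_uuid"] = str(uuid.uuid4())  # UUID v4
event["env"] = "production"              # Custom field (string)

assert 0.0 <= event["random_value"] < 1.0
assert len(event["event_uuid"]) == 36 and event["event_uuid"].count("-") == 4
assert event["env"] == "production"
```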

    Overview

    Amazon Simple Storage Service (S3) is a fully managed object storage service. Users typically use it to store big files at a reasonable cost for long periods of time. In particular, it's commonly used as a data lake storage layer, storing files containing user events with some format/encoding/compression.

    Amazon S3 also supports sending notifications to an SQS queue when new files are added to some bucket. You can see a sample notification here.

    By leveraging all of the above, our S3 Listener can react to new files being added to the bucket, fetch them, and ingest their events into Onum. All that is needed is an existing SQS queue, an existing S3 bucket, and a bucket configured to send notifications to the queue.

    Using the Amazon S3 Listener, you can read the following AWS content:

    • Collect AWS Application Logs - You can use the Amazon S3 Listener to collect AWS application logs, such as AWS CloudTrail, AWS CloudWatch or AWS WAF logs.

    • Collect Bucket Content - You can send the files in your Amazon S3 buckets to Onum using the Amazon S3 Listener.

    Prerequisites

    Before configuring and starting to send data with the Amazon S3 Listener, you need to take into consideration the following requirements:

    • Your Amazon user needs at least permission to use the GetObject operation (S3) and the ReceiveMessage and DeleteMessageBatch operations (SQS Bucket) to make this Listener work.

    • Cross-Region Configurations: Ensure that your S3 bucket and SQS queue are in the same AWS Region, as S3 event notifications do not support cross-region targets.

    • Permissions: Confirm that the AWS Identity and Access Management (IAM) roles associated with your S3 bucket and SQS queue have the necessary permissions.

    • Object Key Name Filtering: If you use special characters in your prefix or suffix filters for event notifications, ensure they are URL-encoded.

    Amazon S3 Setup

    You need to configure your Amazon S3 bucket to send notifications to an Amazon Simple Queue Service (SQS) queue when new files are added.

    1

    Create an Amazon SQS Queue

    • Sign in to the AWS Management Console and open the Amazon SQS console.

    • Choose Create Queue and configure the queue settings as needed.

    • After creating the queue, note its Amazon Resource Name (ARN), which follows this format: arn:aws:sqs:<region>:<account-id>:<queue-name>.

    2

    Modify the SQS Queue Policy to Allow S3 to Send Messages

    1. In the Amazon SQS console, select your queue.
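The queue's access policy must allow S3 to send messages. A policy of the following shape does this; the ARN values are placeholders you must replace with your own queue and bucket details:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "s3.amazonaws.com" },
      "Action": "sqs:SendMessage",
      "Resource": "arn:aws:sqs:<region>:<account-id>:<queue-name>",
      "Condition": {
        "ArnEquals": { "aws:SourceArn": "arn:aws:s3:::<bucket-name>" }
      }
    }
  ]
}
```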

    3

    Configure S3 Event Notifications

    1. Open the Amazon S3 console and select the bucket you want to configure.

    4

    Test the Configuration

    1. Upload a new file to your S3 bucket.

    Onum Setup

    1

    Log in to your Onum tenant and click Listeners > New listener.

    2

    Double-click the Amazon S3 Listener.

    3

    Enter a Name for the new Listener. Optionally, add a Description and some Tags to identify the Listener.

    4

    In the Objects section, enter the required Compression method used in the ingested S3 files and Format of the ingested S3 files.

    • Compression - This accepts the standard compression codecs (gzip, zlib, bzip2), none for no compression, and auto to autodetect the compression type from the file extension.

    • Format - This currently accepts JSON, JSON lines (a JSON object representing an event on each line), and CSV.

    If you select JSON

    5

    Define the Bucket to listen from.

    • Region* - Find this in your Buckets area, next to the name.

    • Name - The bucket your data is stored in. This is the bucket name found in your Buckets area. You can fill this in if you want to check that notifications come from that bucket, or leave it empty to skip that check.

    6

    Proceed with caution when modifying the Bucket advanced options. Default values should be enough in most cases.

    Optionally, Amazon S3 provides different types of service endpoints based on the region and access type.

    1. Select your bucket.

    2. Go to the Properties tab.

    7

    In the Queue section, choose the region your queue is created in from the dropdown provided.

    8

    Then, enter the URL of your existing Amazon SQS queue to send the data to.

    1. Go to the AWS Management Console.

    2. In the Search Bar, type SQS and click on Simple Queue Service (SQS).

    9

    Choose your Authentication Type*

    Choose manual to enter your access key ID and secret access key manually in the parameters below, or auto to authenticate automatically.

    10

    If you have configured your bucket and queue to require different Access Key IDs and Secret Access Keys, enter them here. If these are the same as your bucket, you don't need to repeat them here.

    11

    Proceed with caution when modifying the Queue advanced options. Default values should be enough in most cases.

    • Service endpoint - If you have a custom endpoint, enter it here. The default SQS regional service endpoint will be used by default.

    • Maximum number of messages* - Set a limit for the maximum number of messages to receive in the notifications queue for each request. The minimum value is 1, and the maximum and default value is

    12

    Proceed with caution when modifying the General advanced options. Default values should be enough in most cases.

    • Event batch size* - Enter a limit for the number of events allowed through per batch. The minimum value is 1, and the maximum and default value is 1000000.

    • Minimum retry time

    13

    Finally, click Create labels. Optionally, you can set labels to be used for internal Onum routing of data. By default, data will be set as Unlabelled.

    Learn more about labels in this article.

    14

    Click Create listener when you're done.

    Parameters
    • Domain (netskopeDomain)

    • Index (netskopeIndex) - The index parameter in the Netskope API for Data Export is used to:

      • Uniquely identify an export session.

      • Prevent multiple API consumers from overlapping their collections.

      • Allow incremental paging without losing events.

    Secrets

    • NetskopeApiToken refers to the API Token used to authenticate the connection to Netskope.

    To add a Secret, open the Secret fields and click New secret:

    • Give the secret a Name.

    • Turn off the Expiration date option.

    • Click Add new value and paste the secret corresponding to the value.

    • Click Save.

    Learn more about secrets in Onum in this article.

    You can now select the secret you just created in the corresponding fields.

    After entering the required parameters and secrets, you can choose to manually enter the Netskope API Alerts fields, or simply paste the desired YAML.

    Configure as YAML

    Manually Configure

    Temporal Window

    Toggle ON to add a temporal window for events. This repeatedly shifts the time window over which data is collected.

    • Duration - 5 minutes (5m) as default, adjust based on your needs.

    • Offset - 5m

    • Format - Epoch
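As a worked example of how a 5m Duration and 5m Offset could resolve to concrete Epoch values for ${temporalWindow.from} and ${temporalWindow.to}: the window lags behind "now" by the offset and spans the duration. This is an illustrative sketch, not Onum's scheduler:

```python
# Temporal window arithmetic: a 5-minute window, offset 5 minutes back.
now = 1_700_000_000  # current epoch seconds (example value)
duration = 5 * 60    # Duration - 5m
offset = 5 * 60      # Offset - 5m

window_to = now - offset            # lag behind "now" by the offset
window_from = window_to - duration  # window covers `duration` seconds

assert window_to - window_from == 300
assert window_from == 1_699_999_400
```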

    Authentication Phase

    OFF

    Enumeration Phase

    OFF

    Collection Phase

    • Pagination Type* - cursor

    • Cursor Selector* - .timestamp_hwm

    • Initial Request

      • Method* - GET

      • URL* - https://${parameters.domain}/api/v2/events/dataexport/events/&lt;event type from the YAML, e.g. alert&gt;?index=${parameters.netskopeIndex}&operation=${temporalWindow.from}

      • Headers -

        • Name - Accept

        • Value - application/json

      Next Request

      • Method* - GET

      • URL* - https://${parameters.domain}/api/v2/events/dataexport/events/&lt;event type from the YAML, e.g. alert&gt;?operation=next&index=${parameters.netskopeIndex}

    • Output

      • Select - .result

      • Map - .
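The cursor-pagination flow above (an initial request, then operation=next requests driven by the .timestamp_hwm cursor until a page comes back empty) can be sketched generically; fetch below is a stand-in stub, not a real HTTP call:

```python
# Hedged sketch of cursor pagination: loop until a page has no results,
# extracting events from .result and the next cursor from .timestamp_hwm.
pages = [
    {"timestamp_hwm": "t1", "result": [{"id": 1}, {"id": 2}]},
    {"timestamp_hwm": "t2", "result": [{"id": 3}]},
    {"timestamp_hwm": "t2", "result": []},  # empty page ends the loop
]

def fetch(cursor):
    # Stub: returns the next canned page; a real listener would issue an
    # HTTP GET with ?operation=next and the cursor.
    return pages[len(collected_pages)]

collected_pages, events = [], []
cursor = None
while True:
    page = fetch(cursor)
    collected_pages.append(page)
    if not page["result"]:
        break
    events.extend(page["result"])   # Select - .result
    cursor = page["timestamp_hwm"]  # Cursor Selector - .timestamp_hwm

assert [e["id"] for e in events] == [1, 2, 3]
```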

    Click Create labels to move on to the next step and define the required Labels if needed.

    Audits

    Overview

    Fetches audits for a user, domain, or organization. start_date defaults to 14 days ago, and the maximum look-back is 13 months.

    Configuration

    Parameters

    • parameters.domain will store the value of the API URL, excluding the endpoint paths like /v1/cp/oauth/token or /v1/cp/alert_events

    Secrets

    • secrets.client_id will reference Agari's Client ID.

    • secrets.client_secret will reference Agari's Client Secret.

    Open the Secret fields and click New secret to create a new one:

    • Give the secret a Name.

    • Turn off the Expiration date option.

    • Click Add new value and paste the secret corresponding to the value.

    • Click Save.

    Learn more about secrets in Onum in this article.

    You can now select the secret you just created in the corresponding fields.

    After entering the required parameters and secrets, you can choose to manually enter the fields, or simply paste the given YAML:

    Configure as YAML

    Toggle this ON to enable a free text field where you can paste your YAML.

    Here we provide three different YAMLs that list audits by Domain, Organization and User.

    Manually configure

    If you would rather configure each field, follow the steps below.

    The following examples are based on the List audits by Domain YAML. Simply replace the word domain with organization or user.

    Temporal Window

    Toggle ON to add a temporal window for events. This repeatedly shifts the time window over which data is collected.

    • Duration - 5 minutes (5m) as default, adjust based on your needs.

    • Offset - initial offset should be 5m

    • Format - RFC3339

    Authentication Phase

    Toggle ON to configure the authentication phase. This is required to get the token to pull data using OAuth.

    • Type* - token

    • Request Method* - POST (we would need to generate the JWT using the secrets client_id and client_secret)

    Enumeration Phase

    Toggle ON

    • Pagination Type* - offset/Limit

    • Limit* - 200

    • Request

    Collection Phase

    • Variables

      • Source - input

      • Name - domainId

    This HTTP Pull Listener now uses the data export API to extract alert events.

    Click Create labels to move on to the next step and define the required Labels if needed.

    Parameter
    Description

    Commands*

    The command to read or write data from the Redis server.

    • SET

      • Redis Key* - Choose the input field that contains the model version.

      • Value* - Choose the field that contains the events you want to input to Redis.
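Conceptually, the SET command above is composed from two event fields: the Redis Key field supplies the key and the Value field supplies the payload. The field names in this sketch are hypothetical:

```python
# Illustrative composition of a Redis SET command from an event's fields.
event = {"model_version": "v42", "payload": '{"score": 0.97}'}

# Key comes from the Redis Key field, value from the Value field.
command = ["SET", event["model_version"], event["payload"]]

assert command == ["SET", "v42", '{"score": 0.97}']
```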

    withTemporalWindow: true
    temporalWindow:
      duration: 5m
      offset: 5m
      tz: UTC
      format: Epoch
    withAuthentication: true
    authentication:
      type: token
      token:
        request:
          method: POST
          url: https://id.sophos.com/api/v2/oauth2/token
          headers:
            - name: Accept
              value: application/json
            - name: Content-Type
              value: application/x-www-form-urlencoded
          queryParams: []
          bodyType: urlEncoded
          bodyParams:
            - name: grant_type
              value: client_credentials
            - name: client_id
              value: '${secrets.sophosClientId}'
            - name: client_secret
              value: '${secrets.sophosClientSecret}'
            - name: scope
              value: token
        tokenPath: ".access_token"
        authInjection:
          in: header
          name: Authorization
          prefix: 'Bearer '
          suffix: ''
    withEnumerationPhase: true
    enumerationPhase:
      paginationType: none
      request:
        responseType: json
        method: GET
        url: https://api.central.sophos.com/whoami/v1
        headers:
          - name: Accept
            value: application/json
          - name: Accept-Encoding
            value: gzip, deflate
          - name: Content-Type
            value: application/json
          - name: Cache-Control
            value: no-cache
        queryParams: []
        bodyParams: []
      output:
        select: "."
        filter: "."
        map: "."
        outputMode: element
    
    collectionPhase:
      variables:
        - source: input
          name: tenantId
          expression: ".id"
          format: ''
        - source: input
          name: dataRegionURL
          expression: ".apiHosts.dataRegion"
          format: ''
      paginationType: cursor
      cursorSelector: ".next_cursor"
      initialRequest:
        method: GET
        url: "${inputs.dataRegionURL}/siem/v1/events"
        headers:
          - name: Accept
            value: application/json
          - name: Accept-Encoding
            value: gzip, deflate
          - name: X-Tenant-ID
            value: "${inputs.tenantId}"
        queryParams:
          - name: from_date
            value: "${temporalWindow.from}"
        bodyParams: []
      nextRequest:
        method: GET
        url: "${inputs.dataRegionURL}/siem/v1/events"
        headers:
          - name: Accept
            value: application/json
          - name: Accept-Encoding
            value: gzip, deflate
          - name: X-Tenant-ID
            value: "${inputs.tenantId}"
        queryParams:
          - name: cursor
            value: "${pagination.cursor}"
        bodyParams: []
      output:
        select: ".items"
        filter: "."
        map: "."
        outputMode: element
    withTemporalWindow: true
    temporalWindow:
      duration: 5m
      offset: 5m
      tz: UTC
      format: Epoch
    withAuthentication: false
    withEnumerationPhase: false
    collectionPhase:
      paginationType: "cursor"
      cursor: ".timestamp_hwm"
      initialRequest:
        method: GET
        url: "https://${parameters.domain}/api/v2/events/dataexport/events/alert?index=${parameters.netskopeIndex}&operation=${temporalWindow.from}"
        headers:
          - name: Accept
            value: application/json
          - name: Netskope-Api-Token
            value: "${secrets.netskopeApiToken}"
      nextRequest:
        method: GET
        url: "https://${parameters.domain}/api/v2/events/dataexport/events/alert?operation=next&index=${parameters.netskopeIndex}"
        headers:
          - name: Accept
            value: application/json
          - name: Netskope-Api-Token
            value: "${secrets.netskopeApiToken}"
      output:
        select: ".result"
        map: "."
        outputMode: element 
    withTemporalWindow: true
    temporalWindow:
      duration: 5m
      offset: 5m
      tz: UTC
      format: Epoch
    withAuthentication: false
    withEnumerationPhase: false
    collectionPhase:
      paginationType: "cursor"
      cursor: ".timestamp_hwm"
      initialRequest:
        method: GET
        url: "https://${parameters.domain}/api/v2/events/dataexport/events/application?index=${parameters.netskopeIndex}&operation=${temporalWindow.from}"
        headers:
          - name: Accept
            value: application/json
          - name: Netskope-Api-Token
            value: "${secrets.netskopeApiToken}"
      nextRequest:
        method: GET
        url: "https://${parameters.domain}/api/v2/events/dataexport/events/application?operation=next&index=${parameters.netskopeIndex}"
        headers:
          - name: Accept
            value: application/json
          - name: Netskope-Api-Token
            value: "${secrets.netskopeApiToken}"
      output:
        select: ".result"
        map: "."
        outputMode: element 
    withTemporalWindow: true
    temporalWindow:
      duration: 5m
      offset: 5m
      tz: UTC
      format: Epoch
    withAuthentication: false
    withEnumerationPhase: false
    collectionPhase:
      paginationType: "cursor"
      cursor: ".timestamp_hwm"
      initialRequest:
        method: GET
        url: "https://${parameters.domain}/api/v2/events/dataexport/events/audit?index=${parameters.netskopeIndex}&operation=${temporalWindow.from}"
        headers:
          - name: Accept
            value: application/json
          - name: Netskope-Api-Token
            value: "${secrets.netskopeApiToken}"
      nextRequest:
        method: GET
        url: "https://${parameters.domain}/api/v2/events/dataexport/events/audit?operation=next&index=${parameters.netskopeIndex}"
        headers:
          - name: Accept
            value: application/json
          - name: Netskope-Api-Token
            value: "${secrets.netskopeApiToken}"
      output:
        select: ".result"
        map: "."
        outputMode: element 
    withTemporalWindow: true
    temporalWindow:
      duration: 5m
      offset: 5m
      tz: UTC
      format: Epoch
    withAuthentication: false
    withEnumerationPhase: false
    collectionPhase:
      paginationType: "cursor"
      cursor: ".timestamp_hwm"
      initialRequest:
        method: GET
        url: "https://${parameters.domain}/api/v2/events/dataexport/events/incident?index=${parameters.netskopeIndex}&operation=${temporalWindow.from}"
        headers:
          - name: Accept
            value: application/json
          - name: Netskope-Api-Token
            value: "${secrets.netskopeApiToken}"
      nextRequest:
        method: GET
        url: "https://${parameters.domain}/api/v2/events/dataexport/events/incident?operation=next&index=${parameters.netskopeIndex}"
        headers:
          - name: Accept
            value: application/json
          - name: Netskope-Api-Token
            value: "${secrets.netskopeApiToken}"
      output:
        select: ".result"
        map: "."
        outputMode: element 
    withTemporalWindow: true
    temporalWindow:
      duration: 5m
      offset: 5m
      tz: UTC
      format: Epoch
    withAuthentication: false
    withEnumerationPhase: false
    collectionPhase:
      paginationType: "cursor"
      cursor: ".timestamp_hwm"
      initialRequest:
        method: GET
        url: "https://${parameters.domain}/api/v2/events/dataexport/events/infrastructure?index=${parameters.netskopeIndex}&operation=${temporalWindow.from}"
        headers:
          - name: Accept
            value: application/json
          - name: Netskope-Api-Token
            value: "${secrets.netskopeApiToken}"
      nextRequest:
        method: GET
        url: "https://${parameters.domain}/api/v2/events/dataexport/events/infrastructure?operation=next&index=${parameters.netskopeIndex}"
        headers:
          - name: Accept
            value: application/json
          - name: Netskope-Api-Token
            value: "${secrets.netskopeApiToken}"
      output:
        select: ".result"
        map: "."
        outputMode: element 
    withTemporalWindow: true
    temporalWindow:
      duration: 5m
      offset: 5m
      tz: UTC
      format: Epoch
    withAuthentication: false
    withEnumerationPhase: false
    collectionPhase:
      paginationType: "cursor"
      cursor: ".timestamp_hwm"
      initialRequest:
        method: GET
        url: "https://${parameters.domain}/api/v2/events/dataexport/events/network?index=${parameters.netskopeIndex}&operation=${temporalWindow.from}"
        headers:
          - name: Accept
            value: application/json
          - name: Netskope-Api-Token
            value: "${secrets.netskopeApiToken}"
      nextRequest:
        method: GET
        url: "https://${parameters.domain}/api/v2/events/dataexport/events/network?operation=next&index=${parameters.netskopeIndex}"
        headers:
          - name: Accept
            value: application/json
          - name: Netskope-Api-Token
            value: "${secrets.netskopeApiToken}"
      output:
        select: ".result"
        map: "."
        outputMode: element 
    withTemporalWindow: true
    temporalWindow:
      duration: 5m
      offset: 5m
      tz: UTC
      format: Epoch
    withAuthentication: false
    withEnumerationPhase: false
    collectionPhase:
      paginationType: "cursor"
      cursor: ".timestamp_hwm"
      initialRequest:
        method: GET
        url: "https://${parameters.domain}/api/v2/events/dataexport/events/page?index=${parameters.netskopeIndex}&operation=${temporalWindow.from}"
        headers:
          - name: Accept
            value: application/json
          - name: Netskope-Api-Token
            value: "${secrets.netskopeApiToken}"
      nextRequest:
        method: GET
        url: "https://${parameters.domain}/api/v2/events/dataexport/events/page?operation=next&index=${parameters.netskopeIndex}"
        headers:
          - name: Accept
            value: application/json
          - name: Netskope-Api-Token
            value: "${secrets.netskopeApiToken}"
      output:
        select: ".result"
        map: "."
        outputMode: element 
  • Name - Content-Type

  • Value - application/json

  • Name - Authorization

  • Value - ${secrets.CortexXdrAuthorization}

  • Name - x-xdr-auth-id

  • Value - ${secrets.CortexXdrAuthId}

  • Value - application/json

  • Name - Authorization

  • Value - ${secrets.CortexXdrAuthorization}

  • Name - x-xdr-auth-id

  • Value - ${secrets.CortexXdrAuthId}

  • Value - client_credentials

  • Name - client_id

  • Value - ${secrets.Sophos_Client_ID}

  • Name - client_secret

  • Value - ${secrets.Sophos_Client_Secret}

  • Name - scope

  • Value - token

  • Prefix - Bearer

  • Suffix - ''

  • Value - gzip, deflate

  • Name - Content-Type

  • Value - application/json

  • Name - Cache-Control

  • Value - no-cache

  • Map -
    .
  • Output Mode - element

  • Headers -

    • Name - Accept

    • Value - application/json

    • Name - Accept-Encoding

    • Value - gzip, deflate

    • Name - X-Tenant-ID

    • Value - ${inputs.tenantId}

  • Query Params

    • Name - from_date

    • Value - ${temporalWindow.from}

  • Headers -

    • Name - Accept

    • Value - application/json

    • Name - Accept-Encoding

    • Value - gzip, deflate

    • Name - X-Tenant-ID

    • Value - ${inputs.tenantId}

  • Body type* - No body type is required because the parameters are included in the URL. However, this field is mandatory, so select raw and enter the {} placeholder.

  • Map -
    .
  • Output Mode - element

  • text/html - Data in HTML format.

    Expiration in seconds - Optionally, enter how long the key will be available in the Redis server. The minimum value is 0.

  • HSET

    • Redis Key* - Choose the input field that contains the Redis key.

    • Field/Value pairs - Add as many fields and pipeline values as required.

  • GET

    • Redis Key* - Choose the input field that contains the Redis key.

    • Output field* - Enter a name for the output field that will store the output data.

  • HGET

    • Redis Key* - Choose the input field that contains the Redis key.

    • Redis field* - Select the field from the Listener or Action that serves as the HGET field.

    • Output field* - Enter a name for the output field that will store the output data.
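The SET/HSET/GET/HGET operations above follow standard Redis key/field semantics. As a minimal illustration of how they relate — using a plain dict as a stand-in for a real Redis server, so everything here is purely illustrative (with redis-py the equivalent calls would be `r.hset(key, field, value)` and `r.hget(key, field)`):

```python
# Stand-in for a Redis server: a dict mapping keys to hashes.
store = {}

def hset(key, field, value):
    # HSET: set one field inside the hash stored at `key`.
    store.setdefault(key, {})[field] = value

def hget(key, field):
    # HGET: read one field back from the hash at `key`;
    # returns None when the key or field does not exist.
    return store.get(key, {}).get(field)

hset("user:42", "status", "active")
hget("user:42", "status")  # → "active"
```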

  • gotime Nites Format Documentation
  • Name - Netskope-Api-Token

  • Value - ${secrets.netskopeApiToken}

  • Headers -

    • Name - Accept

    • Value - application/json

    • Name - Netskope-Api-Token

    • Value - ${secrets.netskopeApiToken}

  • Body type* - No body type is required because the parameters are included in the URL. However, this field is mandatory, so select raw and enter the {} placeholder.

  • Output Mode - element
  • Navigate to the Access Policy tab and choose Edit.
  • Replace the existing policy with the following, ensuring you update the placeholders with your specific details:

  • Save the changes. This policy grants your S3 bucket permission to send messages to your SQS queue.

  • Go to the Properties tab and find the "Event notifications" section.
  • Click on Create event notification.

  • Provide a descriptive name for the event notification.

  • In the Event types section, select All object create events or specify particular events that should trigger notifications.

  • In the Destination section, choose SQS Queue and select the queue you configured earlier.

  • Save the configuration.

  • Check your SQS queue to verify that a message has been received, indicating that the notification setup is functioning correctly.
    If you choose JSON or CSV, more options appear:

    JSON Options
    • Path - Enter the path of the JSON element you want to retrieve. The path is a series of keys separated by dots. For example, one.two.three selects the element in {"one":{"two":{"three":[1,2,3]}}}. To select the root, leave the path empty or enter a single dot . (default option).

    For CloudWatch and CloudTrail logs, you must enter the .Records path.

    • Array Unroll - Activate this toggle if you want to generate one event for each element in the array. The element that the path points to must be a JSON array.
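A sketch of how the Path and Array Unroll options combine, assuming the dot-path semantics described above (function names are illustrative, not part of the product):

```python
def select_path(doc, path):
    """Resolve a dot-separated path; "" or "." selects the root."""
    if path in ("", "."):
        return doc
    for key in path.split("."):
        doc = doc[key]
    return doc

def unroll(value):
    """Array Unroll: emit one event per element of the selected array."""
    return list(value)

doc = {"one": {"two": {"three": [1, 2, 3]}}}
events = unroll(select_path(doc, "one.two.three"))  # → [1, 2, 3]
```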

    CSV Options
    • Header Row - select true to include a header for your CSV rows.

    • Delimiter - decide between comma, semicolon and tab.

    • Text Encoding - refers to the scheme (or key) that maps the characters used to store and read the data in the CSV file: UTF-8, UTF-16, UTF-16 Little Endian, UTF-16 Big Endian, ISO 8859-1 (Latin-1), Windows-1252.

    • Output Format - CSV or JSON.

      • JSON Output (outputFormat: "json"):

        • Converts CSV records to structured JSON objects

    • Trim Leading Space - select true to remove any whitespace characters that appear immediately before the first non-whitespace character in a cell.

    • Lazy Quotes - select true to allow double quotes to appear in fields without strictly following the formal CSV escaping rules.

    • Fields per Second - this number determines the speed and efficiency of S3 as it reads and interprets the data within the CSV file.

    • Comment Character - use the hash symbol (#) to designate lines of text that should be ignored during the data parsing process.
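To see how Header Row, Delimiter, and Output Format interact, here is a rough model of the JSON output mode built on Python's csv module. This is an approximation for illustration only; the Listener's actual parser may behave differently on edge cases:

```python
import csv
import io

def csv_to_events(text, delimiter=";", header_row=True):
    """With a header row, each record becomes a JSON-style object keyed
    by the header; without one, fields are auto-named field_0, field_1, ...
    """
    reader = csv.reader(io.StringIO(text), delimiter=delimiter)
    rows = list(reader)
    if not rows:
        return []
    if header_row:
        header, body = rows[0], rows[1:]
    else:
        header = [f"field_{i}" for i in range(len(rows[0]))]
        body = rows
    return [dict(zip(header, r)) for r in body]

events = csv_to_events("name;count\nalpha;1\nbeta;2\n")
# → [{"name": "alpha", "count": "1"}, {"name": "beta", "count": "2"}]
```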

  • Authentication Type* - Choose manual to enter your access key ID and secret access key manually in the parameters below, or auto to authenticate automatically. The default value is manual.

  • Access key ID* - Select the access key ID from your Secrets or click New secret to generate a new one.

    The Access Key ID is found in the IAM Dashboard of the AWS Management Console.

    1. In the left panel, click on Users.

    2. Select your IAM user.

    3. Under the Security Credentials tab, scroll to Access Keys, and you will find existing Access Key IDs (but not the secret access key).

  • Secret access key* - Select the secret access key from your Secrets or click New secret to generate a new one. Under Access keys, you can see your Access Key IDs, but AWS will not show the Secret Access Key; you must have it saved somewhere. If you don't have the secret key saved, you need to create a new one.

  • Under Bucket ARN & URL, find the S3 endpoint URL.

    Amazon Service Endpoint will usually be chosen automatically, so you should not normally have to fill this in. However, if you need to override the default access point, you can do it here.

  • Click on Queues in the left panel.
  • Locate your queue from the list and click it.

  • The Queue URL will be displayed in the table under URL.

  • This is the correct URL format: https://sqs.region.localhost/awsaccountnumber/storedinenvvar

  • Visibility timeout* - Set how many seconds to leave a message as hidden in the queue after being delivered, before redelivering it to another consumer if not acknowledged. The minimum value is 30s, and the maximum value is 12h. The default value is 1h.

  • Wait time* - When the queue is empty, set how long to wait for messages before deeming the request as timed out. The minimum value is 5s, and the maximum and default value is 20s.

  • Minimum retry time* - Set the minimum amount of time to wait before retrying. The default and minimum value is 1s, and the maximum value is 10m.
  • Maximum retry time* - Set the maximum amount of time to wait before retrying. The default value is 5m, and the maximum value is 10m. The minimum value is the one set in the parameter above.
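The retry bounds above suggest a capped exponential backoff between the minimum and maximum retry times. A sketch under that assumption — the Listener's actual retry policy is not documented here, so treat this as illustrative:

```python
def retry_delays(attempts, min_wait=1.0, max_wait=300.0):
    # Double the wait on each retry, clamped between the minimum retry
    # time (1s default) and the maximum retry time (5m default).
    return [min(min_wait * 2 ** i, max_wait) for i in range(attempts)]

retry_delays(5)  # → [1.0, 2.0, 4.0, 8.0, 16.0]
```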

  • URL* - ${parameters.domain}/v1/cp/oauth/token
  • Headers

    • Name - Content-type

    • Value - application/x-www-form-urlencoded

    • Name - Accept

    • Value - application/json

  • BodyType* - UrlEncoded

    • Body params

      • Name - client_id

      • Value - '${secrets.client_id}'

      • Name - client_secret

      • Value - '${secrets.client_secret}'

  • Token Path* - .access_token

  • Auth Injection

    • In* - header

    • Name* - authorization

    • Prefix - Bearer

    • Suffix - ''

  • Response Type - JSON
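Putting Token Path, Prefix, and Suffix together: after the token request returns JSON, the value found at .access_token is wrapped with the prefix/suffix and injected into the named header. A minimal sketch of that assembly (function and parameter names are illustrative):

```python
def inject_auth(token_response, token_path=".access_token",
                prefix="Bearer ", suffix="", header="Authorization"):
    # Walk the dot-path into the token response, then build the
    # header value as prefix + token + suffix.
    value = token_response
    for key in token_path.strip(".").split("."):
        value = value[key]
    return {header: f"{prefix}{value}{suffix}"}

inject_auth({"access_token": "abc123"})
# → {"Authorization": "Bearer abc123"}
```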

  • Method* - GET

  • URL* - ${parameters.domain}/v1/cp/domains

  • Headers -

    • Name - accept

    • Value - application/json

  • Query Params -

    • Name - offset

    • Value - ${pagination.offset}

    • Name - limit

    • Value - ${pagination.limit}

  • Output

    • Select - [.domains[].id]

    • Map - .

    • Output Mode - element
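The offset/limit enumeration above can be modeled as a loop that advances the offset by the page size until a short page signals the end. `fetch` stands in for the GET request and is illustrative:

```python
def paginate(fetch, limit=200):
    """Offset/limit enumeration: `fetch(offset, limit)` returns one
    page of IDs; stop when a page comes back shorter than `limit`."""
    offset, ids = 0, []
    while True:
        page = fetch(offset, limit)
        ids.extend(page)
        if len(page) < limit:
            return ids
        offset += limit

data = list(range(450))
ids = paginate(lambda o, l: data[o:o + l], limit=200)  # three pages: 200 + 200 + 50
```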

  • Expression - "."

  • Format - ""

  • Pagination Type* - none

  • Request

    • Response Type - JSON

    • Method* - GET

    • URL* - ${parameters.domain}/v1/cp/audits

    • Headers -

      • Name - accept

      • Value - application/json

    • Query Params -

      • Name - start_date

      • Value - ${temporalWindow.from}

  • Output

    • Select - .audits.entries

    • Map - .

    • Output Mode - element


    Alert Endpoints

    Overview

    Get a list of all or filtered alerts. The alerts listed are what remains after alert exclusions are applied by Netskope.

    Configuration

    Parameters

    • Domain (netskopeDomain)

    • Index (netskopeIndex) - The index parameter in the Netskope API for Data Export is used to:

      • Uniquely identify an export session.

    Secrets

    • NetskopeApiToken refers to the API token used to authenticate the connection to Netskope.

    To add a Secret, open the Secret fields and click New secret:

    • Give the secret a Name.

    • Turn off the Expiration date option.

    • Click Add new value and paste the secret corresponding to the value.

    • Click Save.

    Learn more about secrets in Onum in this article.

    You can now select the secret you just created in the corresponding fields.

    After entering the required parameters and secrets, you can choose to manually enter the Netskope API Alerts fields, or simply paste the desired YAML.

    Configure as YAML

    Manually Configure

    Temporal Window

    Toggle ON to add a temporal window for events. This repeatedly shifts the time window over which data is collected.

    • Duration - 5 minutes (5m) as default, adjust based on your needs.

    • Offset - 5m

    • Format - Epoch
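With Duration 5m, Offset 5m, and Epoch format, each cycle collects a five-minute window that ends five minutes behind the current time, then shifts forward on the next cycle. A sketch of that arithmetic (the function name is illustrative):

```python
def temporal_window(now_epoch, duration_s=300, offset_s=300):
    # The window ends `offset_s` seconds behind now (giving late
    # events time to arrive) and spans `duration_s` seconds.
    to = int(now_epoch) - offset_s
    return to - duration_s, to

temporal_window(1_700_000_000)  # → (1699999400, 1699999700)
```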

    Authentication Phase

    OFF

    Enumeration Phase

    OFF

    Collection Phase

    • Pagination Type* - cursor

    • Cursor Selector* - .timestamp_hwm

    • Initial Request

    Click Create labels to move on to the next step and define the required Labels if needed.

      {
        "Version": "2012-10-17",
        "Id": "S3ToSQSPolicy",
        "Statement": [
          {
            "Sid": "AllowS3Bucket",
            "Effect": "Allow",
            "Principal": {
              "Service": "s3.amazonaws.com"
            },
            "Action": "SQS:SendMessage",
            "Resource": "arn:aws:sqs:<region>:<account-id>:<queue-name>",
            "Condition": {
              "ArnLike": {
                "aws:SourceArn": "arn:aws:s3:::<bucket-name>"
              },
              "StringEquals": {
                "aws:SourceAccount": "<account-id>"
              }
            }
          }
        ]
      }
    withTemporalWindow: true
    temporalWindow:
      duration: 5m
      offset: 5m
      tz: UTC
      format: RFC3339
    withAuthentication: true
    authentication:
      type: token
      token:
        request:
          method: POST
          url: ${parameters.domain}/v1/cp/oauth/token
          headers:
            - name: Content-Type
              value: application/x-www-form-urlencoded
            - name: Accept
              value: application/json
          bodyType: urlEncoded
          bodyParams:
            - name: client_id
              value: '${secrets.client_id}'
            - name: client_secret
              value: '${secrets.client_secret}'
        tokenPath: ".access_token"
        authInjection:
          in: header
          name: Authorization
          prefix: 'Bearer '
          suffix: ''
    withEnumerationPhase: true
    enumerationPhase:
      paginationType: offsetLimit
      limit: 200
      request:
        responseType: json
        method: GET
        url: ${parameters.domain}/v1/cp/domains
        headers:
          - name: Accept
            value: application/json
        queryParams:
          - name: offset
            value: ${pagination.offset}
          - name: limit
            value: ${pagination.limit}
      output:
        select: "[.domains[].id]"
        map: "."
        outputMode: element
    collectionPhase:
      variables:
        - source: input
          name: domainId
          expression: "."
          format: ""
      paginationType: none
      request:
        responseType: json
        method: GET
        url: ${parameters.domain}/v1/cp/audits
        headers:
          - name: Accept
            value: application/json
        queryParams:
          - name: start_date
            value: ${temporalWindow.from}
          - name: end_date
            value: ${temporalWindow.to}
          - name: object_type
            value: domain
          - name: object_id
            value: ${inputs.domainId}
      output:
        select: ".audits.entries"
        map: "."
        outputMode: element
    withTemporalWindow: true
    temporalWindow:
      duration: 5m
      offset: 5m
      tz: UTC
      format: RFC3339
    withAuthentication: true
    authentication:
      type: token
      token:
        request:
          method: POST
          url: ${parameters.domain}/v1/cp/oauth/token
          headers:
            - name: Content-Type
              value: application/x-www-form-urlencoded
            - name: Accept
              value: application/json
          bodyType: urlEncoded
          bodyParams:
            - name: client_id
              value: '${secrets.client_id}'
            - name: client_secret
              value: '${secrets.client_secret}'
        tokenPath: ".access_token"
        authInjection:
          in: header
          name: Authorization
          prefix: 'Bearer '
          suffix: ''
    withEnumerationPhase: true
    enumerationPhase:
      paginationType: offsetLimit
      limit: 200
      request:
        responseType: json
        method: GET
        url: ${parameters.domain}/v1/cp/organizations
        headers:
          - name: Accept
            value: application/json
        queryParams:
          - name: offset
            value: ${pagination.offset}
          - name: limit
            value: ${pagination.limit}
      output:
        select: "[.organizations[].id]"
        map: "."
        outputMode: element
    collectionPhase:
      variables:
        - source: input
          name: orgId
          expression: "."
          format: ""
      paginationType: none
      request:
        responseType: json
        method: GET
        url: ${parameters.domain}/v1/cp/audits
        headers:
          - name: Accept
            value: application/json
        queryParams:
          - name: start_date
            value: ${temporalWindow.from}
          - name: end_date
            value: ${temporalWindow.to}
          - name: object_type
            value: organization
          - name: object_id
            value: ${inputs.orgId}
      output:
        select: ".audits.entries"
        map: "."
        outputMode: element
    withTemporalWindow: true
    temporalWindow:
      duration: 5m
      offset: 5m
      tz: UTC
      format: RFC3339
    withAuthentication: true
    authentication:
      type: token
      token:
        request:
          method: POST
          url: ${parameters.domain}/v1/cp/oauth/token
          headers:
            - name: Content-Type
              value: application/x-www-form-urlencoded
            - name: Accept
              value: application/json
          bodyType: urlEncoded
          bodyParams:
            - name: client_id
              value: '${secrets.client_id}'
            - name: client_secret
              value: '${secrets.client_secret}'
        tokenPath: ".access_token"
        authInjection:
          in: header
          name: Authorization
          prefix: 'Bearer '
          suffix: ''
    withEnumerationPhase: true
    enumerationPhase:
      paginationType: offsetLimit
      limit: 200
      request:
        responseType: json
        method: GET
        url: ${parameters.domain}/v1/cp/users
        headers:
          - name: Accept
            value: application/json
        queryParams:
          - name: offset
            value: ${pagination.offset}
          - name: limit
            value: ${pagination.limit}
      output:
        select: "[.users[].id]"
        map: "."
        outputMode: element
    collectionPhase:
      variables:
        - source: input
          name: userId
          expression: "."
          format: ""
      paginationType: none
      request:
        responseType: json
        method: GET
        url: ${parameters.domain}/v1/cp/audits
        headers:
          - name: Accept
            value: application/json
        queryParams:
          - name: start_date
            value: ${temporalWindow.from}
          - name: end_date
            value: ${temporalWindow.to}
          - name: object_type
            value: user
          - name: object_id
            value: ${inputs.userId}
      output:
        select: ".audits.entries"
        map: "."
        outputMode: element
  • Name - end_date

  • Value - ${temporalWindow.to}

  • Name - object_type

  • Value - domain

  • Name - object_id

  • Value - ${inputs.domainId}

  • Field names are derived from header row (if present) or auto-generated (field_0, field_1, etc.)

  • Provides structured data for easier processing in pipelines

  • CSV Output (outputFormat: "csv"):

    • Preserves original CSV formatting

    • Each CSV record becomes a separate event containing the raw CSV line

    • Useful when you want to maintain the original CSV structure

  • Prevent multiple API consumers from overlapping their collections.
  • Allow incremental paging without losing events.

  • Method* - GET

  • URL* - https://${parameters.domain}/api/v2/events/dataexport/alerts/INSERT NAME FROM YAMLS ABOVE ?index=${parameters.netskopeIndex}&operation=${temporalWindow.from}

  • Headers -

    • Name - Accept

    • Value - application/json

    • Name - Netskope-Api-Token

    • Value - ${secrets.netskopeApiToken}

  • Next Request

    • Method* - GET

    • URL* - https://${parameters.domain}/api/v2/events/dataexport/alerts/INSERT NAME?operation=next&index=${parameters.netskopeIndex}

    • Headers -

      • Name - Accept

      • Value - application/json

      • Name - Netskope-Api-Token

      • Value - ${secrets.netskopeApiToken}

  • Output

    • Select - .result

    • Map - .

    • Output Mode - element
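The Initial Request / Next Request pair above implements cursor pagination: issue the initial request, then keep issuing the operation=next request until an empty result, while remembering the .timestamp_hwm high-water mark so the next cycle resumes where this one stopped. A sketch with stubbed requests (all names are illustrative):

```python
def collect(initial_request, next_request, cursor_path="timestamp_hwm"):
    # Drain one temporal window: initial page first, then `next`
    # pages until a page comes back with no results.
    events, cursor = [], None
    page = initial_request()
    while page.get("result"):
        events.extend(page["result"])
        cursor = page.get(cursor_path, cursor)  # high-water mark
        page = next_request()
    return events, cursor

pages = iter([
    {"result": [1, 2], "timestamp_hwm": 100},
    {"result": [3], "timestamp_hwm": 101},
    {"result": []},
])
events, hwm = collect(lambda: next(pages), lambda: next(pages))
# → events == [1, 2, 3], hwm == 101
```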

    withTemporalWindow: true
    temporalWindow:
      duration: 5m
      offset: 5m
      tz: UTC
      format: Epoch
    withAuthentication: false
    withEnumerationPhase: false
    collectionPhase:
      paginationType: "cursor"
      cursor: ".timestamp_hwm"
      initialRequest:
        method: GET
        url: "https://${parameters.domain}/api/v2/events/dataexport/alerts/compromisedcredential?index=${parameters.netskopeIndex}&operation=${temporalWindow.from}"
        headers:
          - name: Accept
            value: application/json
          - name: Netskope-Api-Token
            value: "${secrets.netskopeApiToken}"
      nextRequest:
        method: GET
        url: "https://${parameters.domain}/api/v2/events/dataexport/alerts/compromisedcredential?operation=next&index=${parameters.netskopeIndex}"
        headers:
          - name: Accept
            value: application/json
          - name: Netskope-Api-Token
            value: "${secrets.netskopeApiToken}"
      output:
        select: ".result"
        map: "."
        outputMode: element
    withTemporalWindow: true
    temporalWindow:
      duration: 5m
      offset: 5m
      tz: UTC
      format: Epoch
    withAuthentication: false
    withEnumerationPhase: false
    collectionPhase:
      paginationType: "cursor"
      cursor: ".timestamp_hwm"
      initialRequest:
        method: GET
        url: "https://${parameters.domain}/api/v2/events/dataexport/alerts/ctep?index=${parameters.netskopeIndex}&operation=${temporalWindow.from}"
        headers:
          - name: Accept
            value: application/json
          - name: Netskope-Api-Token
            value: "${secrets.netskopeApiToken}"
      nextRequest:
        method: GET
        url: "https://${parameters.domain}/api/v2/events/dataexport/alerts/ctep?operation=next&index=${parameters.netskopeIndex}"
        headers:
          - name: Accept
            value: application/json
          - name: Netskope-Api-Token
            value: "${secrets.netskopeApiToken}"
      output:
        select: ".result"
        map: "."
        outputMode: element
    withTemporalWindow: true
    temporalWindow:
      duration: 5m
      offset: 5m
      tz: UTC
      format: Epoch
    withAuthentication: false
    withEnumerationPhase: false
    collectionPhase:
      paginationType: "cursor"
      cursor: ".timestamp_hwm"
      initialRequest:
        method: GET
        url: "https://${parameters.domain}/api/v2/events/dataexport/alerts/dlp?index=${parameters.netskopeIndex}&operation=${temporalWindow.from}"
        headers:
          - name: Accept
            value: application/json
          - name: Netskope-Api-Token
            value: "${secrets.netskopeApiToken}"
      nextRequest:
        method: GET
        url: "https://${parameters.domain}/api/v2/events/dataexport/alerts/dlp?operation=next&index=${parameters.netskopeIndex}"
        headers:
          - name: Accept
            value: application/json
          - name: Netskope-Api-Token
            value: "${secrets.netskopeApiToken}"
      output:
        select: ".result"
        map: "."
        outputMode: element
    withTemporalWindow: true
    temporalWindow:
      duration: 5m
      offset: 5m
      tz: UTC
      format: Epoch
    withAuthentication: false
    withEnumerationPhase: false
    collectionPhase:
      paginationType: "cursor"
      cursor: ".timestamp_hwm"
      initialRequest:
        method: GET
        url: "https://${parameters.domain}/api/v2/events/dataexport/alerts/malsite?index=${parameters.netskopeIndex}&operation=${temporalWindow.from}"
        headers:
          - name: Accept
            value: application/json
          - name: Netskope-Api-Token
            value: "${secrets.netskopeApiToken}"
      nextRequest:
        method: GET
        url: "https://${parameters.domain}/api/v2/events/dataexport/alerts/malsite?operation=next&index=${parameters.netskopeIndex}"
        headers:
          - name: Accept
            value: application/json
          - name: Netskope-Api-Token
            value: "${secrets.netskopeApiToken}"
      output:
        select: ".result"
        map: "."
        outputMode: element 
    withTemporalWindow: true
    temporalWindow:
      duration: 5m
      offset: 5m
      tz: UTC
      format: Epoch
    withAuthentication: false
    withEnumerationPhase: false
    collectionPhase:
      paginationType: "cursor"
      cursor: ".timestamp_hwm"
      initialRequest:
        method: GET
        url: "https://${parameters.domain}/api/v2/events/dataexport/alerts/malware?index=${parameters.netskopeIndex}&operation=${temporalWindow.from}"
        headers:
          - name: Accept
            value: application/json
          - name: Netskope-Api-Token
            value: "${secrets.netskopeApiToken}"
      nextRequest:
        method: GET
        url: "https://${parameters.domain}/api/v2/events/dataexport/alerts/malware?operation=next&index=${parameters.netskopeIndex}"
        headers:
          - name: Accept
            value: application/json
          - name: Netskope-Api-Token
            value: "${secrets.netskopeApiToken}"
      output:
        select: ".result"
        map: "."
        outputMode: element
    withTemporalWindow: true
    temporalWindow:
      duration: 5m
      offset: 5m
      tz: UTC
      format: Epoch
    withAuthentication: false
    withEnumerationPhase: false
    collectionPhase:
      paginationType: "cursor"
      cursor: ".timestamp_hwm"
      initialRequest:
        method: GET
        url: "https://${parameters.domain}/api/v2/events/dataexport/alerts/policy?index=${parameters.netskopeIndex}&operation=${temporalWindow.from}"
        headers:
          - name: Accept
            value: application/json
          - name: Netskope-Api-Token
            value: "${secrets.netskopeApiToken}"
      nextRequest:
        method: GET
        url: "https://${parameters.domain}/api/v2/events/dataexport/alerts/policy?operation=next&index=${parameters.netskopeIndex}"
        headers:
          - name: Accept
            value: application/json
          - name: Netskope-Api-Token
            value: "${secrets.netskopeApiToken}"
      output:
        select: ".result"
        map: "."
        outputMode: element 
    withTemporalWindow: true
    temporalWindow:
      duration: 5m
      offset: 5m
      tz: UTC
      format: Epoch
    withAuthentication: false
    withEnumerationPhase: false
    collectionPhase:
      paginationType: "cursor"
      cursor: ".timestamp_hwm"
      initialRequest:
        method: GET
        url: "https://${parameters.domain}/api/v2/events/dataexport/alerts/quarantine?index=${parameters.netskopeIndex}&operation=${temporalWindow.from}"
        headers:
          - name: Accept
            value: application/json
          - name: Netskope-Api-Token
            value: "${secrets.netskopeApiToken}"
      nextRequest:
        method: GET
        url: "https://${parameters.domain}/api/v2/events/dataexport/alerts/quarantine?operation=next&index=${parameters.netskopeIndex}"
        headers:
          - name: Accept
            value: application/json
          - name: Netskope-Api-Token
            value: "${secrets.netskopeApiToken}"
      output:
        select: ".result"
        map: "."
        outputMode: element 
    withTemporalWindow: true
    temporalWindow:
      duration: 5m
      offset: 5m
      tz: UTC
      format: Epoch
    withAuthentication: false
    withEnumerationPhase: false
    collectionPhase:
      paginationType: "cursor"
      cursor: ".timestamp_hwm"
      initialRequest:
        method: GET
        url: "https://${parameters.domain}/api/v2/events/dataexport/alerts/remediation?index=${parameters.netskopeIndex}&operation=${temporalWindow.from}"
        headers:
          - name: Accept
            value: application/json
          - name: Netskope-Api-Token
            value: "${secrets.netskopeApiToken}"
      nextRequest:
        method: GET
        url: "https://${parameters.domain}/api/v2/events/dataexport/alerts/remediation?operation=next&index=${parameters.netskopeIndex}"
        headers:
          - name: Accept
            value: application/json
          - name: Netskope-Api-Token
            value: "${secrets.netskopeApiToken}"
      output:
        select: ".result"
        map: "."
        outputMode: element 
    withTemporalWindow: true
    temporalWindow:
      duration: 5m
      offset: 5m
      tz: UTC
      format: Epoch
    withAuthentication: false
    withEnumerationPhase: false
    collectionPhase:
      paginationType: "cursor"
      cursor: ".timestamp_hwm"
      initialRequest:
        method: GET
        url: "https://${parameters.domain}/api/v2/events/dataexport/alerts/securityassessment?index=${parameters.netskopeIndex}&operation=${temporalWindow.from}"
        headers:
          - name: Accept
            value: application/json
          - name: Netskope-Api-Token
            value: "${secrets.netskopeApiToken}"
      nextRequest:
        method: GET
        url: "https://${parameters.domain}/api/v2/events/dataexport/alerts/securityassessment?operation=next&index=${parameters.netskopeIndex}"
        headers:
          - name: Accept
            value: application/json
          - name: Netskope-Api-Token
            value: "${secrets.netskopeApiToken}"
      output:
        select: ".result"
        map: "."
        outputMode: element 
    withTemporalWindow: true
    temporalWindow:
      duration: 5m
      offset: 5m
      tz: UTC
      format: Epoch
    withAuthentication: false
    withEnumerationPhase: false
    collectionPhase:
      paginationType: "cursor"
      cursor: ".timestamp_hwm"
      initialRequest:
        method: GET
        url: "https://${parameters.domain}/api/v2/events/dataexport/alerts/uba?index=${parameters.netskopeIndex}&operation=${temporalWindow.from}"
        headers:
          - name: Accept
            value: application/json
          - name: Netskope-Api-Token
            value: "${secrets.netskopeApiToken}"
      nextRequest:
        method: GET
        url: "https://${parameters.domain}/api/v2/events/dataexport/alerts/uba?operation=next&index=${parameters.netskopeIndex}"
        headers:
          - name: Accept
            value: application/json
          - name: Netskope-Api-Token
            value: "${secrets.netskopeApiToken}"
      output:
        select: ".result"
        map: "."
        outputMode: element 
    withTemporalWindow: true
    temporalWindow:
      duration: 5m
      offset: 5m
      tz: UTC
      format: Epoch
    withAuthentication: false
    withEnumerationPhase: false
    collectionPhase:
      paginationType: "cursor"
      cursor: ".timestamp_hwm"
      initialRequest:
        method: GET
        url: "https://${parameters.domain}/api/v2/events/dataexport/alerts/watchlist?index=${parameters.netskopeIndex}&operation=${temporalWindow.from}"
        headers:
          - name: Accept
            value: application/json
          - name: Netskope-Api-Token
            value: "${secrets.netskopeApiToken}"
      nextRequest:
        method: GET
        url: "https://${parameters.domain}/api/v2/events/dataexport/alerts/watchlist?operation=next&index=${parameters.netskopeIndex}"
        headers:
          - name: Accept
            value: application/json
          - name: Netskope-Api-Token
            value: "${secrets.netskopeApiToken}"
      output:
        select: ".result"
        map: "."
        outputMode: element 

    Value - ${secrets.netskopeApiToken}


    Pull data from HTTP endpoints

    See the changelog of this Listener type here.

    Note that this Listener is only available in certain Tenants. Get in touch with us if you don't see it and want to access it.

    Overview

    Onum supports integration with HTTP Pull. Select HTTP Pull from the list of Listener types and click Configuration to start.

    HTTP Pull configuration

    1

    Log in to your Onum tenant and click Listeners > New listener.

    2

    Double-click the HTTP Pull Listener.

    3

    Enter a Name for the new Listener. Optionally, add a Description and some Tags to identify the Listener.

    Deconstructing a YAML

    Here we will learn what each parameter of the YAML means, and how they correspond to the settings in the HTTP Pull Listener.

    The YAML is used for pulling alerts via an API and typically uses:

    • A Temporal Window to enable the use of a time-based query window for filtering results.

    • Authentication using a token to authenticate the connection.

    • The first phase (Enumeration) enables an initial listing phase to get identifiers (e.g., alert IDs), paginating through the results.

    • The second phase (Collection) then fetches full alert details using the alert IDs from the enumeration phase.

    Only the Collection phase is mandatory; the rest of the fields are optional.
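Put together, a minimal configuration skeleton might look like this. The block and field names mirror the cursor-based samples shown elsewhere on this page; the endpoint URL and selectors are placeholders, and the `request` key name under the none pagination type is an assumption:

```yaml
withTemporalWindow: true
temporalWindow:
  duration: 5m
  offset: 5m
  tz: UTC
  format: Epoch
withAuthentication: false     # enable and configure if the API requires credentials
withEnumerationPhase: false   # optional listing phase
collectionPhase:              # the only mandatory phase
  paginationType: "none"
  request:                    # key name assumed for the none pagination type
    method: GET
    url: "https://${parameters.domain}/api/v2/alerts"   # placeholder endpoint
    headers:
      - name: Accept
        value: application/json
  output:
    select: ".result"   # where the items live in the response
    map: "."
    outputMode: element
```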

    Let's take a closer look at each phase below.


    Temporal window

    A temporal window is a defined time range used to filter or limit data retrieval in queries or API requests. It specifies the start and end time for the data you want to collect or analyze. This YAML uses a temporal window of 5 minutes, in RFC3339 format, with an offset of 0, in UTC timezone.

    Parameter
    Description
    Temporal Window example

    In Onum, toggle ON the Temporal Window selector and enter the information in the corresponding fields

    • Duration* - 5m

    • Offset*
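As a YAML fragment, those walkthrough values would read as follows (a sketch; the field names come from the temporalWindow blocks in the samples on this page):

```yaml
withTemporalWindow: true
temporalWindow:
  duration: 5m      # size of the query window
  offset: 0s        # how far back from "now" the window starts
  tz: UTC
  format: RFC3339   # format of ${temporalWindow.from} and ${temporalWindow.to}
```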


    Authentication phase

    If your connection requires authentication, enter the credentials here.

    Parameter
    Description

    Authentication credentials

    The options provided will vary depending on the type chosen to authenticate your API. This must match the type configured on the API side so that it can recognize the request.

    Choose between the options below.

    Basic
    • Username* - the user sending the request.

    • Password* - the password, e.g. ${secrets.password}

    API Key

    Enter the following:

    • API Key - API keys are usually stored in developer portals, cloud dashboards, or authentication settings. Set it as a secret, e.g. ${secrets.api_key}

    • Auth injection:

    Token

    Token Retrieve Based Authentication

    • Request -

      • Method* - Choose between GET or POST.

    HMAC

    Signs the queries using a secret key that is used by the server to authenticate and validate integrity.

    Token Retrieve Based Authentication

    Request

    • Generate ID - Toggle ON to generate.


    Retry

    Toggle ON to allow for retries and to configure the specifics.

    Parameter
    Description

    Throttling

    Use throttling to intentionally limit the rate at which the HTTP requests are sent to the API or service.

    Throttling Type*

    The client itself controls and limits the rate at which it sends requests.

    Parameter
    Description

    The server controls the rate at which it sends data.

    Parameter
    Description

    Enumeration phase

    The enumeration phase is an optional step in data collection or API integration workflows, where the system first retrieves a list of available items (IDs, resource names, keys, etc.) before fetching detailed data about each one.

    This phase identifies the available endpoints, methods, parameters, and resources exposed by the API. It performs initial data discovery to feed the Collection phase, and its results are made available to that phase via variable interpolation (${inputs.*}).

    It can use:

    • ${parameters.xxx}

    • ${secrets.xxx}

    • ${temporalWindow.xxx} (if configured)

    Parameter
    Description

    Output

    Parameter
    Description
    Enumeration example
    • Pagination type - offset/limit. Uses classic pagination with offset and limit to page through results, fetching data in batches (pages): limit determines the page size, offset determines where to start.


    Collection phase

    The collection phase in an HTTP Puller is the part of the process where the system actively pulls or retrieves data from an external API using HTTP requests.

    The collection phase is mandatory. This is where the final data retrieval happens (either directly or using IDs/resources generated by an enumeration phase).

    The collection phase involves gathering actual data from an API after the enumeration phase has mapped out endpoints, parameters, and authentication methods. It supports dynamic variable resolution via the variable resolver and can use data exported from the Enumeration Phase, such as:

    • ${parameters.xxx}

    • ${secrets.xxx}

    • ${temporalWindow.xxx}

    Inputs

    In collection phases, you can define variables to be used elsewhere in the configuration (for example, in URLs, query parameters, or request bodies). Each variable definition has the following fields:

    Parameter
    Description

    Retry

    Toggle ON to allow for retries and to configure the specifics.

    Parameter
    Description

    Throttling

    Use throttling to intentionally limit the rate at which the HTTP requests are sent to the API or service.

    Throttling Type*

    The client itself controls and limits the rate at which it sends requests.

    Parameter
    Description

    The server controls the rate at which it sends data.

    Parameter
    Description

    Parameter
    Description

    Output

    Parameter
    Description
    Collection example

    Let's say you have the following SIEM Integration events from Sophos.

    • Pagination type - cursor. If you select the cursor type, you retrieve the data in chunks (pages) using a cursor token, which points to the position in the dataset where the next page of results should start.

    Ports

    The HTTP Pull Listener has two output ports:

    • Default port - Events are sent through this port if no error occurs while processing them.

    • Error port - Events are sent through this port if an error occurs while processing them.

    The error message is provided in a free-text format and may change over time. Please consider this if performing any post-processing based on the message content.

    Examples

    1. Basic GET Puller

    Here's a simple example of using the HTTP Puller collector with parameters for a basic GET request. No authentication, no pagination, just pulling JSON data from an API endpoint. Keep Config as YAML, Temporal window, Authentication and Enumeration phase as OFF.

    • Collection phase

      • Pagination type - none. Indicates that you only need one request to retrieve all data at once.

      • Request
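A sketch of this basic puller as YAML (the domain/path parameters and the .logs selector come from the walkthrough of this example further down; the `request` key name is an assumption):

```yaml
withTemporalWindow: false
withAuthentication: false
withEnumerationPhase: false
collectionPhase:
  paginationType: "none"   # one request retrieves all the data
  request:
    responseType: json
    method: GET
    url: "https://${parameters.domain}${parameters.path}"
    headers:
      - name: Accept
        value: application/json
  output:
    select: ".logs"        # where the list of log entries lives
    map: "."
    outputMode: element    # emit each log entry as a separate event
```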

    2. Make an HTTP request using offset and limit pagination

    Instead of displaying the results in a scrollable list, we will use offset/limit pagination to fetch data in pages.

    • Pagination type - offset/limit. We control how many records are returned at a time (limit) and where each request starts (offset or skip parameter).

    • Zero Index - false
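The corresponding collection phase could be sketched like this (values taken from the walkthrough of this example further down; the paginationType spelling and the limit/zeroIndex key names are assumptions):

```yaml
collectionPhase:
  paginationType: "offset/limit"   # assumption: YAML value matching the UI label
  limit: 50                        # maximum records per request
  zeroIndex: false
  request:
    responseType: json
    method: GET
    url: "https://example.com/items"
    queryParams:
      - name: skip                 # records to skip before returning results
        value: "${pagination.offset}"
      - name: limit                # page size
        value: "${pagination.limit}"
    headers:
      - name: Accept
        value: application/json
```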

    3. Enumeration + Collection with responseBodyLink

    This example defines a data extraction workflow that

    1. Enumerates through a paginated API endpoint using responseBodyLink.

    2. Filters and transforms specific data from the paginated results.

    3. Collects further data based on the enumerated output using individual requests.

    It also uses a temporal window to scope or schedule the data extraction process.

    Enumeration

    The enumeration defines how to gather data in a paginated manner from the Cyber Threat Intelligence API using the responseBodyLink pagination strategy.

    • Pagination Type - The type is Next Link At Response Body

    • Selector - The next page link is found using the JSON path ".info.nextPage". This means the response will contain a field info.nextPage with the URL of the next page of results.

    For example, the response might look like:
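An illustrative response shape (all values invented) that would satisfy that selector:

```yaml
{
  "info": { "nextPage": "https://api.cyberintel.dev/iocs?page=2" },
  "data": [
    { "_id": "a1b2", "threatType": "Ransomware", "name": "SampleLocker" },
    { "_id": "c3d4", "threatType": "Phishing", "name": "MailHook" }
  ]
}
```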

    • Response type - JSON

    • Method - GET. The HTTP method is GET to fetch the data.

    • URL - The initial URL for the request is "https://api.cyberintel.dev/iocs", where the IOCs are listed.

    Output

    • Select - The .data array from the response is selected for further processing. This array contains the actual IOC data.

    • Filter - The filter expression '.threatType == "Ransomware"' selects only those IOCs where the threatType is "Ransomware". This is how we focus on ransomware-related indicators.

    Result: After processing the pages, we will have a list of ransomware IOC IDs.
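Assembled as YAML, the enumeration above could look like this (a sketch; the paginationType/selector key spellings are assumptions, while the URL, selector, filter, and map values come from this walkthrough):

```yaml
withEnumerationPhase: true
enumerationPhase:
  paginationType: "responseBodyLink"   # "Next Link At Response Body"
  selector: ".info.nextPage"           # where the next-page URL appears in each response
  request:
    responseType: json
    method: GET
    url: "https://api.cyberintel.dev/iocs"
    headers:
      - name: Accept
        value: application/json
  output:
    select: ".data"
    filter: '.threatType == "Ransomware"'   # keep only ransomware IOCs
    map: "._id"                             # emit just the IOC ID
    outputMode: element
```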

    Collection

    Once the enumeration process gathers a list of IOC IDs related to ransomware, the collection section is responsible for retrieving more detailed information for each of those IOCs.

    variables - This section defines variables used in the collection step.

    • Name - id: The variable id represents each individual IOC ID from the enumeration output.

    • Source - The source: input means that the IDs come from the output of the previous enumeration step.

    • Expression - expression: "." simply takes each item from the input (the IOC IDs).

    HTTP Request for Detailed IOC Information

    • Pagination type: The type is "none", indicating that a single request is made per IOC, with no pagination.

    • Response type - JSON.

    • Method: The HTTP method is GET, to fetch detailed information about each IOC.

    Output Selection and Mapping

    • Select: This selects the .data field from the response, which contains the detailed information for the IOC.

    • Filter: No additional filtering is applied.

    • Map: The map expression "{iocName: .name}" creates a new object with the iocName key, mapping it to the .name

    Result: Each IOC name (or other information, if mapped) will be saved to a file.
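The collection step described above could be sketched as follows (the variables/request key names follow the fields described in this walkthrough; exact spellings are assumptions):

```yaml
collectionPhase:
  variables:
    - name: id
      source: input      # IDs come from the enumeration output
      expression: "."    # take each item as-is
  paginationType: "none"
  request:
    responseType: json
    method: GET
    url: "https://api.cyberintel.dev/iocs/${id}"   # e.g. id = "a1b2"
    headers:
      - name: Accept
        value: application/json
  output:
    select: ".data"
    map: "{iocName: .name}"   # keep only the IOC name
    outputMode: element
```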

    4. Enumeration (collection output) + Collection (POST with bodyRaw)

    Temporal window defines a 5-minute slice of time, offset 10 minutes ago.

    Enumeration step:

    • Makes a paginated GET to /posts.

    • Extracts IDs from posts within the time window.

    • Produces a collection of IDs.

    Collection step:

    • Uses those IDs in a POST request.

    • Filters, maps, and outputs enriched objects (id, title, status).

    • Saves results to a file.

    • Duration - 5m window size is 5 minutes.

    • Offset - 10m shifts the window back 10 minutes from “now”. So if current UTC is 12:00, the range would be 11:45 – 11:50.

    • Time zone - UTC

    The variables ${temporalWindow.from} and ${temporalWindow.to} get auto-populated with these calculated times.

    Enumeration

    • Pagination type - page number/page size

    • Page size: 50 fetch 50 records per request.

    • Request

    Collection (POST with BodyRaw)

    • Pagination Type - Next link at response body

    • Selector - "." take the full collection.

    • Response Type - json keep it as JSON (array of IDs).
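Pieced together from the settings described in this example, the whole workflow could be sketched as YAML (the bodyType/bodyRaw key names and the pagination type spellings are assumptions):

```yaml
withTemporalWindow: true
temporalWindow:
  duration: 5m
  offset: 10m      # 11:45-11:50 window when "now" is 12:00 UTC
  tz: UTC
  format: RFC3339
withEnumerationPhase: true
enumerationPhase:
  paginationType: "pageNumber/pageSize"
  pageSize: 50
  request:
    responseType: json
    method: GET
    url: "https://api.fake-rest.refine.dev/posts"
    queryParams:
      - name: from
        value: "${temporalWindow.from}"
      - name: to
        value: "${temporalWindow.to}"
      - name: _page
        value: "${pagination.pageNumber}"
      - name: _per_page
        value: "${pagination.pageSize}"
  output:
    select: "."
    map: "{id: .id}"         # extract only the IDs
    outputMode: collection   # pass the IDs along as one array
collectionPhase:
  paginationType: "responseBodyLink"
  selector: "."              # take the full collection
  request:
    responseType: json
    method: POST
    url: "https://api.fake-rest.refine.dev/posts"
    bodyType: raw
    bodyRaw: '{"ids": ${inputs.ids}}'
  output:
    select: "."
    filter: ".id > 10"       # keep posts with ID greater than 10
    map: "{id: .id, title: .title, status: .status}"
    outputMode: element
```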

    4

    Now you need to specify the Parameters.

    • Enter the name of the parameter to search for in the YAML below, used later as ${parameters.name} e.g. ${parameters.domain}

    • Enter the value or variable to fill in when the given parameter name has been found, e.g. domain.com.

      With the name set as domain and the value set as mydomain, the expression to execute on the YAML would be ${parameters.domain}, which will be automatically replaced by the value. Add as many name/value pairs as required.

    YAML Sample:

    5

    Next, configure your Secrets

    • Enter the name of the parameter to search for in the YAML below, used later as ${secrets.name}

    • Select the Secret containing the connection credentials if you have added them previously, or select New Secret to add it. This will add this value as a variable when the field name is found in the YAML. Add as many as required.

    YAML Sample:

    6

    Toggle this on to configure the HTTP Pull Listener as YAML and paste it here.

    The system supports interpolated variables throughout the HTTP request building process using the syntax: ${prefix.name}

    Each building block may:

    • Use variables depending on their role (e.g., parameters, secrets, pagination state).

    • Expose variables for later phases (e.g., pagination counters, temporal window bounds).

    Not all variable types are available in every phase. Each block has access to a specific subset of variables.

    Variables can be defined in the configuration or generated dynamically during execution. Each variable has a prefix that determines its source and scope.

    These are the supported prefixes:

    • Parameters - User-defined values configured manually. Available in all phases.

    • Secrets - Sensitive values such as credentials or tokens. Available in all phases.

    • temporalWindow - Automatically generated from the Temporal Window block. Available in the Enumeration and Collection phases.

    • Pagination - Values produced by the pagination mechanism (e.g., offset, cursor). Available in the Enumeration and Collection phases.

    If you do not have a YAML to paste, see how to manually configure the various components of a YAML in the following sections.

  • Standard JSON response mapping is used to output the results.

  • Offset* - 0s

  • TZ* - this is set automatically according to your current timezone.

  • Format* - RFC3339

  • In* - Enter the incoming format of the API: Header or Query.

  • Name* - The header name or parameter name where the API key will be sent.

  • Prefix - Enter a prefix if required.

  • Suffix - Enter a suffix if required.

  • Method* - Choose between GET or POST.
  • URL* - Enter the URL to send the request to.

  • Headers - Add as many headers as required.

    • Name

    • Value

  • Query Params - Add as many query parameters as required.

    • Name

    • Value

  • Token Path* - Enter the token path used to retrieve an authentication token.

  • Auth injection:

    • In* - Enter the incoming format of the API: Header or Query.

    • Name* - A label assigned to the API key for identification. Where to find it depends on where the API key was created.

    • Prefix - Enter a connection prefix if required.

    • Suffix - Enter a connection suffix if required.

  • Example

    • Type - Token. Token authentication is a method of authenticating API requests by using a secure token, usually passed in an HTTP header.

    • Request

      • method - POST. Sends a POST request to obtain an access token.

      • url - ${parameters.domain}/oauth2/token. The OAuth token endpoint. ${parameters.domain} is a placeholder for the value entered in the Parameters section.

      • headers - these headers are key-value pairs that provide additional information to the server when making a request.

        • name - Content-Type

        • value - application/x-www-form-urlencoded. Indicates that the request body is formatted as URL-encoded key-value pairs (standard for OAuth token requests).

      • Body type - urlEncoded. Specifies that the request body format is URL-encoded (like key=value&key2=value2).

        • Body params

          • name - grant_type Required by OAuth 2.0 to specify the type of grant being requested.

      • Token path - Extracts the access token from the JSON response of an authentication request. It's a JSONPath-like expression used to locate the token in the response body.

    Toggle ON the Authentication option.

    • Auth injection - This part defines how and where to inject the authentication token (typically an access token) into the requests after it has been retrieved, for example, from an OAuth token endpoint.

      • in - header. The token should be injected into the HTTP header of the request. This is the most common method for passing authentication tokens.

      • Name - Authorization. The name of the header that will contain the token. Most APIs expect this to be Authorization.

      • prefix - The text added before the token value. Bearer is the standard prefix for OAuth 2.0 tokens.

      • suffix - '' Text added after the token value. In this case, it's empty: nothing is appended.
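Collected into one block, the token example above might be written like this (the authentication/tokenPath/authInjection key spellings and the .access_token path are assumptions; the request, header, body, and injection values come from the example):

```yaml
withAuthentication: true
authentication:
  type: token
  request:
    method: POST
    url: "${parameters.domain}/oauth2/token"
    headers:
      - name: Content-Type
        value: application/x-www-form-urlencoded
    bodyType: urlEncoded
    bodyParams:
      - name: grant_type
        value: client_credentials   # illustrative; use the grant your API expects
  tokenPath: ".access_token"        # assumption: where the token sits in the response
  authInjection:
    in: header
    name: Authorization
    prefix: "Bearer "               # standard OAuth 2.0 prefix
    suffix: ""
```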

    Generate Timestamp
    • Timezone* - this field is automatically filled using your current timezone.

    • Format* - the format for the timestamp syntax (Seconds, Epoch, Epoch Timestamp, RFC1123, RFC1123Z, RFC3339 or custom). Selecting custom opens the Go time format option, where you can write your custom syntax e.g. 2 Jan 2006 15:04:05

  • Generate content hash

    • Content hash

      • Hashing algorithm* - select the hash operation to carry out on the content.

      • Encoding* - choose the encoding method.

    • Hashing

      • Hashing algorithm* - select the operation to carry out on the content.

      • Encoding* - choose the encoding method.

  • Headers to be added to the request (name & value).

  • Example: Authenticate HTTP requests to Microsoft Azure using the HMAC-SHA256 scheme.

    Learn how to calculate the HMAC for this API here.

    • Type - HMAC.

    Request Parameters

    • Generate Timestamp

      • Timezone - UTC

      • Format - RFC1123

    • Generate Content Hash

      • Algorithm - sha256

      • Encoding - base64

    Hash

    Base64-encoded HMAC-SHA256 of the String-To-Sign.

    • Algorithm - hmac_sha256

    • Encoding - base64

    • Secret Key - ${secrets.secretKey} This variable is retrieved from the secrets parameter.

    • Data To Sign - A canonical representation of the request with the format HTTP_METHOD + '\n' + path_and_query + '\n' + signed_headers_values: ${request.method}\n${request.relativeUrl}\n${request.timestamp};${request.host};${request.contentHash}

    Headers

    • Name - x-ms-date can be used when the agent cannot directly access the Date request header or when a proxy modifies it. If both x-ms-date and Date are provided, x-ms-date takes precedence.

    • Value - ${request.timestamp}

    • Name - x-ms-content-sha256 Base64-encoded SHA256 hash of the request body. It must be provided even if there is no body.

    • Value - ${request.contentHash}

    • Name - Authorization Required by the HMAC-SHA256 scheme.

    • Value - HMAC-SHA256 Credential=${secrets.accessKeyId}&SignedHeaders=x-ms-date;host;x-ms-content-sha256&Signature=${hmac.hash}
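Gathered into one place, the Azure HMAC settings above could be sketched as YAML (the block and key names are assumptions; the algorithms, data-to-sign expression, and header values are the ones listed in this example):

```yaml
withAuthentication: true
authentication:
  type: hmac
  generateTimestamp:
    timezone: UTC
    format: RFC1123
  generateContentHash:
    algorithm: sha256
    encoding: base64
  hash:
    algorithm: hmac_sha256
    encoding: base64
    secretKey: "${secrets.secretKey}"
    dataToSign: "${request.method}\n${request.relativeUrl}\n${request.timestamp};${request.host};${request.contentHash}"
  headers:
    - name: x-ms-date
      value: "${request.timestamp}"
    - name: x-ms-content-sha256
      value: "${request.contentHash}"
    - name: Authorization
      value: "HMAC-SHA256 Credential=${secrets.accessKeyId}&SignedHeaders=x-ms-date;host;x-ms-content-sha256&Signature=${hmac.hash}"
```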

    Example 2: API HMAC Authentication for Oracle

    See here for how to calculate the API HMAC in Oracle.

    • Type - HMAC.

    Request Parameters

    • Generate ID

      • Type - uuid

    • Generate Timestamp

      • Timezone - UTC

      • Format - Epoch

    • Generate Content Hash

      • Algorithm - sha1

      • Encoding - base64 - The binary hash result will be encoded in Base64 for transmission.

    Hash

    Base64-encoded HMAC-SHA256 of the String-To-Sign.

    • Algorithm - hmac_sha256

    • Encoding - base64

    • Secret Key - ${secrets.secretKey} This variable is retrieved from the secrets parameter.

    • Data To Sign - ${request.method}\n${request.contentHash}\napplication/json\n${request.timestamp}\n${request.relativeUrl}. This is the canonical string-to-sign:

      • ${request.method} - HTTP method (e.g., GET, POST)

      • ${request.contentHash} - Base64 SHA-1 hash of the request body

    Headers

    • Name - ct-authorization

    • Value - CTApiV2Auth ${parameters.publicKey}:${hmac.hash}

      • CTApiV2Auth - Authentication scheme name.

      • ${parameters.publicKey} - Public key or access ID.

      • ${hmac.hash} - The generated HMAC-SHA256 signature from the hash section.

    • Name - ct-timestamp

    • Value - ${request.timestamp} the same Epoch UTC timestamp generated earlier.

    Wait response header*

    These headers inform how long to wait before retrying and how many requests remain.

    • Header Type* - Enter the header that instructs what to do, e.g. Retry-After.

    • Format - The format for the header syntax (Seconds, Epoch, Epoch Timestamp, RFC1123, RFC1123Z, RFC3339). E.g. wait 120 seconds: Retry-After: 120; e.g. HTTP date: Retry-After: Wed, 21 Oct 2025 07:28:00 GMT

    Reset response header

    Indicates when a rate limit or throttle window resets, allowing the client to resume normal activity (e.g., making more requests or pulling more data).

    • Header Type* - Enter the header that instructs what to do, e.g. Retry-After.

    • Format - The format for the header syntax (Seconds, Epoch, Epoch Timestamp, RFC1123, RFC1123Z, RFC3339).

    Remaining response header*

    How many requests or units of usage the puller can still make within the current time window before hitting the limit and being throttled.

    ${pagination.xxx} Pagination variables

    Limit - Retrieves up to 100 records per request. This value is used in the limit query parameter to control batch size.

  • Request - Describes the API request that will be sent during enumeration.

    • Response type - Specifies the expected response format. Here, the system expects a JSON response.

    • Method - The HTTP method to use for this request. GET is used to retrieve data from the server.

    • URL - ${parameters.domain} is a placeholder variable that will be replaced by the domain value you entered in the Parameters section.

  • Query params - These are query string parameters appended to the URL.

    • ${pagination.offset} - controls where to start in the dataset. Used for pagination.

    • ${pagination.limit} - replaced with the limit value you entered for the number of records to retrieve per request (100).

    • Filters data to only return alerts created within a specific time window. ${temporalWindow.from} and ${temporalWindow.to} are dynamically filled in with RFC3339 or epoch timestamps, depending on what you have configured.

    output - Describes how to extract and interpret the results from the JSON response.

    • select - .resources. Looks for a field named resources in the response JSON. This is where the array of items lives.

    • map - . Each item under .resources is returned as-is. No transformation or remapping.

    • outputMode - collection. The result is treated as a collection (array) of individual items. Used when you expect multiple items and want to pass them along for further processing.
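The enumeration request and output described above could be sketched as follows (placeholder endpoint path; the paginationType spelling and the limit key name are assumptions):

```yaml
withEnumerationPhase: true
enumerationPhase:
  paginationType: "offset/limit"
  limit: 100                # batch size per request
  request:
    responseType: json
    method: GET
    url: "https://${parameters.domain}/api/v1/alerts"   # placeholder path
    queryParams:
      - name: offset
        value: "${pagination.offset}"
      - name: limit
        value: "${pagination.limit}"
      - name: from
        value: "${temporalWindow.from}"
      - name: to
        value: "${temporalWindow.to}"
  output:
    select: ".resources"    # the array of items in the response
    map: "."
    outputMode: collection  # pass the items on as a collection
```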

  • ${inputs.xxx} (from the Enumeration Phase)

  • ${pagination.xxx}

  • Wait response header*

    These headers inform how long to wait before retrying and how many requests remain.

    • Header Type* - Enter the header that instructs what to do, e.g. Retry-After.

    • Format - The format for the header syntax (Seconds, Epoch, Epoch Timestamp, RFC1123, RFC1123Z, RFC3339). E.g. wait 120 seconds: Retry-After: 120; e.g. HTTP date: Retry-After: Wed, 21 Oct 2025 07:28:00 GMT

    Reset response header

    Indicates when a rate limit or throttle window resets, allowing the client to resume normal activity (e.g., making more requests or pulling more data).

    • Header Type* - Enter the header that instructs what to do, e.g. Retry-After.

    • Format - The format for the header syntax (Seconds, Epoch, Epoch Timestamp, RFC1123, RFC1123Z, RFC3339).

    Remaining response header*

    How many requests or units of usage the puller can still make within the current time window before hitting the limit and being throttled.

    Cursor selector - The cursor selector tells the HTTP Puller where to find the cursor value in the API response so it can be saved and used in the next request, e.g. .next_cursor
  • Initial request - We fetch the first set of results, the response including the cursor token (e.g. timestamp or ID).

    • method - GET to fetch the results.

    • url - The URL is composed of various elements:

      • https://${inputs.dataRegionURL} - these variables are taken from the values you entered in the Parameters section of the HTTP Pull settings.

      • /siem/v1/ - API base path, indicating you're calling version 1 of the SIEM API.

      • events - the specific endpoint being accessed; events is the general category of the API (event-related).

  • headers - these headers are key-value pairs that provide additional information to the server when making a request.

    • name - Accept

    • value - application/json tells the server that the client expects the response to be in JSON format, a standard HTTP header used for content negotiation.

  • Next request - send the cursor token back to the server using a parameter (e.g., ?cursor=abc123) to get the next page of results. The server returns the next chunk of data and a new cursor.

    Repeat until there is no more data or the server returns a has_more: false flag.

  • Output

    • select - .result Selects the part of the response to extract. This is a JSONPath-like expression that tells the puller where to find the list or array of items in the response.

    • map - . Maps each selected item as-is, keeping each object unchanged. It passes through each item without transforming it. If you needed to restructure or extract specific fields from each item, you would replace . with a field mapping (e.g., .id, { "id": .id, "name": .username }, etc.).

    • output mode - element. Controls the output format. With element, each item from the select result is emitted individually. This is useful for event stream processing, where each object (e.g., an alert or event) is treated as a separate record. Other possible values (depending on the platform) might include array (emit as a batch) or raw (emit as-is).
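Assembled as YAML, this cursor-based collection mirrors the samples at the top of this page (the cursor query-parameter name and the ${pagination.cursor} variable in nextRequest are assumptions about this particular API):

```yaml
collectionPhase:
  paginationType: "cursor"
  cursor: ".next_cursor"    # where to find the cursor token in each response
  initialRequest:
    method: GET
    url: "https://${inputs.dataRegionURL}/siem/v1/events"
    headers:
      - name: Accept
        value: application/json
  nextRequest:
    method: GET
    url: "https://${inputs.dataRegionURL}/siem/v1/events?cursor=${pagination.cursor}"   # assumption: cursor echoed back as a query parameter
    headers:
      - name: Accept
        value: application/json
  output:
    select: ".result"
    map: "."
    outputMode: element
```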

  • Response type - json. Tells the puller to expect a JSON response.
  • Method: GET. Performs a basic HTTP GET request.

  • URL: Constructed from parameters.domain and parameters.path: https://${parameters.domain}${parameters.path}

  • Headers: Set standard headers and include the API key.

  • Output:

    • Select: .logs Tells the system where to find the list of log entries in the response.

    • Output mode: element. Each object inside .logs will be extracted as a separate output element.

  • Limit* - 50
  • Request - The request to be repeated, with offset and limit automatically incremented per iteration.

  • Response type* - Json

  • Method* - GET

  • URL* - https://example.com/items

  • Query params The API supports pagination through query parameters:

    • Name - skip

    • Value - ${pagination.offset} the number of records to skip before returning results

    • Name - limit

    • Value - ${pagination.limit} uses the limit entered (50) as the maximum number of records to return in one request.

  • headers - The Accept header specifies that the response should be in JSON format.

    Map - The map expression '._id' extracts the ._id field from each IOC that passed the filter. This results in a list of IOC IDs that match the ransomware threat type.

  • Output Mode - element indicates that each IOC ID (element) is treated as an individual item, rather than as a group or array.

  • Url: The URL for each IOC is dynamic, with the IOC ID substituted in the URL (${id}). For example, if id = "a1b2", the URL would be https://api.cyberintel.dev/iocs/a1b2.

  • Headers: The Accept: "application/json" header ensures the response is in JSON format.

  • Output Mode: outputMode: "element" means each IOC's name will be treated as an individual output item.

  • Format - RFC3339 output format for timestamps (e.g., 2025-08-20T12:00:00Z).

  • Response type - JSON
  • Method - GET

  • URL - https://api.fake-rest.refine.dev/posts

  • Query Params

    1. from: "${temporalWindow.from}"

    • Inserts the start timestamp of the time window. ${temporalWindow.from} is automatically computed from your temporalWindow configuration, e.g. if now = 12:00 UTC, offset = 10m, and duration = 5m, then temporalWindow.from = 11:45 UTC (start). In the request, this becomes something like:

    2. to: "${temporalWindow.to}" Inserts the end timestamp of the time window e.g.

    temporalWindow.to = 11:50 UTC (end). In the request, this becomes:

    So together, from and to tell the API:

    “Only give me records between 11:45 and 11:50 UTC.”

    3. _page: "${pagination.pageNumber}" This is a built-in pagination variable.

    ${pagination.pageNumber} auto-increments as the system makes repeated requests to fetch all pages e.g. First request _page=1 Second request _page=2 etc.

    This ensures you don’t just get the first batch, but all results page by page.

    4. _per_page: "${pagination.pageSize}" Controls how many records to fetch per page.

    This pulls from your earlier configuration

    So each request includes: &_per_page=50

  • Select - '.' selects the entire JSON response.

  • Filter - would filter only records where .language == 3.

  • Map - extracts only {id: .id} for each record.

  • Output Mode - collection outputs an array of items (instead of single elements).

  • Method - POST to send data.

  • URL - https://api.fake-rest.refine.dev/posts

  • Body Type: raw freeform JSON payload.

  • Body Content - sends the IDs collected in the enumeration: {"ids": ${inputs.ids}}

  • Select: "." take the full response.

  • Filter - ".id > 10" only keep posts with ID greater than 10.

  • Map - reduce each record to {id, title, status}.

  • Output Mode - element output individual objects, one at a time.

  • Duration*

    Add the duration that the window will remain open for (e.g. 5m).

    Offset*

    How far back from the current time the window starts.

    Time Zone*

    This value is usually automatically set to your current time zone. If not, select it here.

    Format*

    Choose between Epoch or RFC3339 for the timestamp format.

    Authentication Type*

    Choose the authentication type and enter the details.

    Retry Type*

    • Fixed - Retries the failed operation after a constant, fixed interval every time, i.e. the same amount of time between each retry attempt.

      • Interval* - enter the amount of time to wait, e.g. 5s.

    • Exponential - Retries the failed operation after increasingly longer intervals to avoid overwhelming the service. The delay grows with each retry attempt.

      • Initial delay* - The starting delay before the first retry attempt to ensure there’s at least some delay before retrying to avoid immediate re-hits. For example, an initial delay of 2s equals a retry pattern of 2s, 4s, 8s, 16s, etc.

      • Maximum delay* - The maximum wait time allowed between retries to prevent the retry delay from growing indefinitely. For example, an initial delay of 2s and a maximum delay of 10s equals a delay progression of 2s, 4s, 8s, 10s, 10s, etc.

      • Increasing factor* - The multiplier used to calculate the next delay interval, determining how quickly the delay grows after each failed attempt.

    Retry after response header

    Used to define how long to wait before making another request e.g. HTTP 429 Too Many Requests or HTTP 503 Service Unavailable.

    • Header - The name of the response header that carries the wait time, e.g. Retry-After.

    • Format - The format of the header value (Seconds, Epoch, Epoch Timestamp, RFC1123, RFC1123Z, RFC3339).

      • e.g. wait 120 seconds: Retry-After: 120

      • e.g. HTTP date (RFC1123): Retry-After: Wed, 21 Oct 2025 07:28:00 GMT
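As a sketch of how the two most common Retry-After shapes (delay-seconds and an HTTP date) translate into a wait time, using only the standard library; the function name is generic, not part of Onum's configuration:

```python
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

def retry_after_seconds(value: str, now: datetime) -> float:
    """Seconds to wait: a plain integer means delay-seconds,
    otherwise parse the value as an HTTP (RFC1123) date."""
    if value.strip().isdigit():
        return float(value)
    return max(0.0, (parsedate_to_datetime(value) - now).total_seconds())

now = datetime(2025, 10, 21, 7, 26, tzinfo=timezone.utc)
print(retry_after_seconds("120", now))                            # 120.0
print(retry_after_seconds("Wed, 21 Oct 2025 07:28:00 GMT", now))  # 120.0
```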

    Client type*

    How to manage the rate of requests.

    • Rate - The client is restricted by the data transfer rate or request rate over time.

      • Maximum requests* - The maximum number of requests (or amount of data) to make within a specified time interval.

      • Call interval* - The sliding or fixed window of time used to calculate the rate.

      • Number of burst requests* - The number of requests that may temporarily exceed the normal rate before throttling kicks in, allowing short bursts of traffic to accommodate sudden spikes without immediate blocking. E.g. if the maximum rate is 10 requests/sec and the burst is 5, the client can make up to 15 requests instantly, but throttling slows it down after the burst.

    • Fixed delay - A fixed wait time is enforced after each request before the client may make the next one. Instead of limiting by rate (requests per second) or volume, it simply inserts a pause between requests.

      • Call interval* - The sliding or fixed window of time used to calculate the delay.
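The rate-plus-burst behaviour described above is commonly modelled as a token bucket; the sketch below illustrates the 10 requests/sec with burst 5 example and is not Onum's implementation:

```python
class TokenBucket:
    """Rate limiter: `rate` requests per `interval` seconds, plus a
    `burst` allowance that can be spent instantly."""
    def __init__(self, rate: int, interval: float, burst: int):
        self.capacity = rate + burst
        self.tokens = float(self.capacity)
        self.fill_rate = rate / interval
        self.t = 0.0

    def allow(self, now: float) -> bool:
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.t) * self.fill_rate)
        self.t = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=10, interval=1.0, burst=5)
granted = sum(bucket.allow(0.0) for _ in range(20))
print(granted)  # 15 requests pass instantly, the rest are throttled
```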

    Pagination Type*

    Select one from the drop-down. Pagination type is the method used to split and deliver large datasets in smaller, manageable parts (pages), and how those pages can be navigated during discovery.

    Each pagination method manages its own state and exposes specific variables that can be interpolated in request definitions (e.g., URL, headers, query params, body).

    None

    • Description: No pagination; only a single request is issued.

    PageNumber/PageSize

    • Description: Pages are indexed using a page number and fixed size.

    • Configuration:

      • pageSize: page size

    • Exposed Variables:

      • ${pagination.pageNumber}

      • ${pagination.pageSize}

    Offset/Limit

    • Description: Uses offset and limit to fetch pages of data.

    • Configuration:

      • Limit: max quantity of records per request

    • Exposed Variables:

      • ${pagination.offset}

      • ${pagination.limit}
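Mechanically, a client drives Offset/Limit pagination by advancing the offset by the limit until a short (or empty) page comes back; a generic sketch with a stand-in data source in place of a real API:

```python
def fetch_all(fetch_page, limit=100):
    """Advance `offset` by `limit` until a short page signals the end."""
    offset, records = 0, []
    while True:
        page = fetch_page(offset=offset, limit=limit)
        records.extend(page)
        if len(page) < limit:
            return records
        offset += limit

# Hypothetical backing data standing in for an API:
data = list(range(250))
result = fetch_all(lambda offset, limit: data[offset:offset + limit], limit=100)
print(len(result))  # 250
```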

    From/To

    • Description: Performs pagination by increasing a window using from and to values.

    • Configuration:

      • limit: max quantity of records per request

    • Exposed Variables:

      • ${pagination.from}

      • ${pagination.to}

    Web Linking (RFC 5988)

    • Description: Parses the Link header to find the rel="next" URL.

    • Exposed Variables: None

    Next Link at Response Header

    • Description: Follows a link found in a response header.

    • Configuration:

      • headerName: header name that contains the next link

    • Exposed Variables: None

    Next Link at Response Body

    • Description: Follows a link found in the response body.

    • Configuration:

      • nextLinkSelector: path to next link sent in response payload

    • Exposed Variables: None

    Cursor

    • Description: Extracts a cursor value from each response to request the next page.

    • Configuration:

      • cursorSelector: path to the cursor sent in response payload

    • Exposed Variables:

      • ${pagination.cursor}
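The cursor flow can be sketched generically: each response yields records plus the cursor for the next request, and pagination stops when no cursor comes back. The field names `result` and `next_cursor` below are hypothetical, standing in for whatever cursorSelector points at:

```python
def fetch_all(first_page, next_page):
    """Follow cursors until the API stops returning one."""
    records, resp = [], first_page()
    while True:
        records.extend(resp["result"])
        cursor = resp.get("next_cursor")  # what cursorSelector would extract
        if not cursor:
            return records
        resp = next_page(cursor)

# Hypothetical canned responses standing in for an API:
pages = {None: {"result": [1, 2], "next_cursor": "c1"},
         "c1": {"result": [3], "next_cursor": None}}
out = fetch_all(lambda: pages[None], lambda c: pages[c])
print(out)  # [1, 2, 3]
```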

    Select*

    A JSON selector expression to pick a part of the response e.g. '.data'.

    Filter

    A JSON expression to filter the selected elements. Example: '.films | index("Tangled")'.

    Map

    A JSON expression to transform each selected element into a new event. Example: '{characterName: .name}'.

    Output Mode*

    Choose between

    • Element: emits each transformed element individually as an event.

    • Collection: emits all transformed items as a single array/collection as an event.
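To make the Select, Filter, Map, and Output Mode stages concrete, here is a plain-Python analogue with ordinary callables standing in for the JSON expressions; illustrative only, not how the product evaluates them:

```python
def run_output(response, select, filt, mapper, output_mode):
    """Approximate the four output stages with Python callables."""
    elements = select(response)               # e.g. '.data'
    kept = [e for e in elements if filt(e)]   # e.g. a boolean expression
    mapped = [mapper(e) for e in kept]        # e.g. '{characterName: .name}'
    return mapped if output_mode == "collection" else iter(mapped)

resp = {"data": [{"name": "Rapunzel", "film": "Tangled"},
                 {"name": "Elsa", "film": "Frozen"}]}
events = run_output(resp,
                    select=lambda r: r["data"],
                    filt=lambda e: e["film"] == "Tangled",
                    mapper=lambda e: {"characterName": e["name"]},
                    output_mode="collection")
print(events)  # [{'characterName': 'Rapunzel'}]
```

In element mode the same pipeline would emit each mapped object as its own event rather than one array.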

    Name

    The variable name (used later as ${inputs.name} in the configuration).

    Source

    Usually "input", indicating the value comes from the enumeration phase’s output.

    Expression

    A JSON expression applied to the input to extract or transform the needed value.

    Format

    Controls how the variable is converted to a string (see Variable Formatting below), e.g. json.

    Retry Type*

    • Fixed - Retries the failed operation after a constant, fixed interval, i.e. the same amount of time between each retry attempt.

      • Interval* - enter the amount of time to wait e.g. 5s.

    • Exponential - Retries the failed operation after increasingly longer intervals to avoid overwhelming the service. The delay grows with each retry attempt.

      • Initial delay* - The starting delay before the first retry attempt to ensure there’s at least some delay before retrying to avoid immediate re-hits. For example, an initial delay of 2s equals a retry pattern of 2s, 4s, 8s, 16s, etc.

      • Maximum delay* - The maximum wait time allowed between retries to prevent the retry delay from growing indefinitely. For example, an initial delay of 2s and a maximum delay of 10s equals a delay progression of 2s, 4s, 8s, 10s, 10s, etc.

      • Increasing factor* - The multiplier used to calculate the next delay interval, determining how quickly the delay grows after each failed attempt.

    Retry after response header

    Used to define how long to wait before making another request e.g. HTTP 429 Too Many Requests or HTTP 503 Service Unavailable.

    • Header - The name of the response header that carries the wait time, e.g. Retry-After.

    • Format - The format of the header value (Seconds, Epoch, Epoch Timestamp, RFC1123, RFC1123Z, RFC3339).

      • e.g. wait 120 seconds: Retry-After: 120

      • e.g. HTTP date (RFC1123): Retry-After: Wed, 21 Oct 2025 07:28:00 GMT

    Client type*

    How to manage the rate of requests.

    • Rate - The client is restricted by the data transfer rate or request rate over time.

      • Maximum requests* - The maximum number of requests (or amount of data) to make within a specified time interval.

      • Call interval* - The sliding or fixed window of time used to calculate the rate.

      • Number of burst requests* - The number of requests that may temporarily exceed the normal rate before throttling kicks in, allowing short bursts of traffic to accommodate sudden spikes without immediate blocking. E.g. if the maximum rate is 10 requests/sec and the burst is 5, the client can make up to 15 requests instantly, but throttling slows it down after the burst.

    • Fixed delay - A fixed wait time is enforced after each request before the client may make the next one. Instead of limiting by rate (requests per second) or volume, it simply inserts a pause between requests.

      • Call interval* - The sliding or fixed window of time used to calculate the delay.

    Pagination Type*

    Choose how the API organizes and delivers large sets of data across multiple pages—and how that affects the process of systematically collecting or extracting all available records.

    Select*

    A JSON selector expression to pick a part of the response e.g. '.data'.

    Filter

    A JSON expression to filter the selected elements. Example: '.films | index("Tangled")'.

    Map

    A JSON expression to transform each selected element into a new event. Example: '{characterName: .name}'.

    Output Mode*

    Choose between

    • Element: emits each transformed element individually as an event.

    • Collection: emits all transformed items as a single array/collection as an event.

    {
      "logs": [
        { "timestamp": "2024-12-01T12:00:00Z", "event": "user_login" },
        { "timestamp": "2024-12-01T12:05:00Z", "event": "file_upload" }
      ]
    }
    withTemporalWindow: true
    temporalWindow:
      duration: 5m
      offset: 0
      tz: UTC
      format: RFC3339
    enumerationPhase:
      paginationType: offsetLimit
      limit: 100
      request:
        responseType: json
        method: GET
        url: ${parameters.domain}/alerts/queries/alerts/v2
        queryParams:
          - name: offset
            value: ${pagination.offset}
          - name: limit
            value: ${pagination.limit}
          - name: filter
            value: created_timestamp:>'${temporalWindow.from}'+created_timestamp:<'${temporalWindow.to}'
      output:
        select: ".resources"
        map: "."
        outputMode: collection
    collectionPhase:
      paginationType: cursor
      cursorSelector: ".next_cursor"
      initialRequest:
        method: GET
        url: "${inputs.dataRegionURL}/siem/v1/events"
        headers:
          - name: Accept
            value: application/json
          - name: Accept-Encoding
            value: gzip, deflate
          - name: X-Tenant-ID
            value: "${inputs.tenantId}"
        queryParams:
          - name: from_date
            value: "${temporalWindow.from}"
        bodyParams: []
      nextRequest:
        method: GET
        url: "${inputs.dataRegionURL}/siem/v1/events"
        headers:
          - name: Accept
            value: application/json
        queryParams:
          - name: cursor
            value: "${pagination.cursor}"
        bodyParams: []
      output:
        select: ".result"
        filter: "."
        map: "."
        outputMode: element
    collectionPhase:
      paginationType: offsetLimit
      limit: 50
      isZeroIndex: false
      request:
        method: "GET"
        url: "https://example.com/items"
        queryParams:
          - name: skip
            value: "${pagination.offset}"
          - name: limit
            value: "${pagination.limit}"
    # Temporal window (optional)
    # Generated variables: $temporalWindow.from, $temporalWindow.to
    temporalWindow:
      duration: 5m
      offset: 10m
      tz: UTC
      format: RFC3339
    enumerationPhase:
      paginationType: responseBodyLink
      nextLinkSelector: ".info.nextPage"
      request:
        method: "GET"
        url: "https://api.cyberintel.dev/iocs"
        headers:
          - name: accept
            value: "application/json"
        bodyExpression:
          expression: "(.data | length) == 50"
      output:
        select: '.data'
        filter: '.threatType == "Ransomware"'
        map: '._id'
        outputMode: "element"
    collectionPhase:
      variables:
        - name: id
          source: input
          expression: "."
      paginationType: none
      request:
        method: "GET"
        url: "https://api.cyberintel.dev/iocs/${inputs.id}"
        headers:
          - name: accept
            value: "application/json"
      output:
        select: ".data"
        filter: ""
        map: "{iocName: .name}"
        outputMode: "element"
    {
      "info": {
        "nextPage": "https://api.cyberintel.dev/iocs?page=2"
      },
      "data": [ ... ]
    }
    # Temporal window (optional)
    temporalWindow:
      duration: 5m
      offset: 10m
      tz: UTC
      format: RFC3339
    enumerationPhase:
      httpRequest:
        type: "page"
        page:
          pageSize: 50
          request:
            method: "GET"
            url: "https://api.fake-rest.refine.dev/posts"
            headers:
              Accept: "application/json"
            queryParams:
              from: "${temporalWindow.from}"
              to: "${temporalWindow.to}"
              _page: "${pagination.pageNumber}"
              _per_page: "${pagination.pageSize}"
      output:
        select: '.'
        # filter: '.language == 3'
        map: '{id: .id}'
        outputMode: "collection"
    collectionPhase:
      variables:
        - name: ids
          source: input
          expression: "."
          format: "json"
      httpRequest:
        type: "none"
        none:
          request:
            method: "POST"
            url: "https://api.fake-rest.refine.dev/posts"
            headers:
              Accept: "application/json"
            bodyType: "raw"
            bodyRaw: |
              {
                "ids": ${inputs.ids}
              }
      output:
        select: "."
        filter: ".id > 10"
        map: "{id: .id, title: .title, status: .status}"
        outputMode: "element"
    [
      {"id": 1},
      {"id": 2},
      {"id": 3}
    ]
    withAuthentication: true
    authentication:
      type: token
      token:
        request:
          method: POST
          url: ${parameters.domain}/oauth2/token
          headers:
            - name: Content-Type
              value: application/x-www-form-urlencoded
          bodyType: urlEncoded
          bodyParams:
            - name: grant_type
              value: client_credentials
            - name: client_id
              value: '${secrets.client_id}'
            - name: client_secret
              value: '${secrets.client_secret}'
        tokenPath: ".access_token"
        authInjection:
          in: header
          name: Authorization
          prefix: 'Bearer '
          suffix: ''
    withAuthentication: true
    authentication:
      type: hmac
      hmac:
        request:
          generateTimestamp: true
          timestamp:
            tz: UTC
            format: EpochMillis
        hash:
          secretKey: ${secrets.apiSecret}
          algorithm: hmac_sha256
          encoding: hex
          dataToSign: "${secrets.apiKey}${request.body}${request.timestamp}"
        headers:
          x-logtrust-apikey: ${secrets.apiKey}
          x-logtrust-timestamp: ${request.timestamp}
          x-logtrust-sign: ${hmac.hash}
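The signature in this example, hmac_sha256 over apiKey + body + timestamp encoded as hex, can be reproduced with the standard library; the key, body, and timestamp values below are placeholders:

```python
import hashlib
import hmac

def sign(api_secret: str, api_key: str, body: str, timestamp: str) -> str:
    """hmac_sha256 over dataToSign = apiKey + body + timestamp, hex-encoded."""
    data_to_sign = f"{api_key}{body}{timestamp}"
    return hmac.new(api_secret.encode(), data_to_sign.encode(),
                    hashlib.sha256).hexdigest()

sig = sign("my-secret", "my-api-key", '{"query": "select *"}', "1700000000000")
print(sig)  # 64-character hex digest, sent as x-logtrust-sign
```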
    withAuthentication: true
    authentication:
      type: hmac
      hmac:
        request:
          generateTimestamp: true
          timestamp:
            tz: UTC
            format: RFC1123
          generateContentHash: true
          contentHash:
            algorithm: sha256
            encoding: base64
        hash:
          algorithm: hmac_sha256
          encoding: base64
          secretKey: ${secrets.secretKey}
          dataToSign: "${request.method}\n${request.relativeUrl}\n${request.timestamp};${request.host};${request.contentHash}"
        headers:
          - name: x-ms-date
            value: ${request.timestamp}
          - name: x-ms-content-sha256
            value: ${request.contentHash}
          - name: Authorization
            value: "HMAC-SHA256 Credential=${secrets.accessKeyId}&SignedHeaders=x-ms-date;host;x-ms-content-sha256&Signature=${hmac.hash}"
    withAuthentication: true
    authentication:
      type: hmac
      hmac:
        request:
          generateId: true
          idType: uuid
          generateTimestamp: true
          timestamp:
            tz: UTC
            format: Epoch
          generateContentHash: true
          contentHash:
            algorithm: sha1
            encoding: base64
        hash:
          algorithm: hmac_sha256
          encoding: base64
          secretKey: ${secrets.secretKey}
          dataToSign: "${request.method}\n${request.contentHash}\napplication/json\n${request.relativeUrl}\n${request.timestamp}"
        headers:
          - name: x-ct-authorization
            value: CTApiV2Auth ${parameters.publicKey}:${hmac.hash}
          - name: x-ct-timestamp
            value: ${request.timestamp}
  • Inputs - Values derived from the output of the Enumeration phase. Available only in the Collection phase.

  • value - client_credentials. Used for server-to-server authentication without a user.

  • name - client_id

  • value - ${secrets.client_id} - this is a dynamic variable pulled from the value entered in the Secrets setting.

  • name - client_secret

  • value - ${secrets.client_secret} this is a dynamic variable pulled from the value entered in the Secrets setting.

  • Secret key* - the secret used to compute the signature.
  • Data to sign* - how to build the string that will be signed, e.g. "${request.method}\n${request.contentHash}\napplication/json\n${request.relativeUrl}\n${request.timestamp}"

  • "application/json" - Hardcoded content type

  • ${request.timestamp} - Epoch UTC timestamp

  • ${request.relativeUrl} - The relative path and query string

    The \n means each element is separated by a newline.

    initialRequest:
      method: GET
      url: "https://${parameters.domain}/api/v2/events/dataexport/alerts/"
      headers:
        - name: Accept
          value: application/json
        - name: Netskope-Api-Token
          value: "${secrets.netskopeApiToken}"
    nextRequest:
      method: GET
      url: "https://${parameters.domain}/api/v2/events/dataexport/alerts/"
      headers:
        - name: Accept
          value: application/json
        - name: Netskope-Api-Token
          value: "${secrets.netskopeApiToken}"
    initialRequest:
      method: GET
      url: "https://${parameters.domain}/api/v2/events/data"
      headers:
        - name: Accept
          value: application/json
        - name: Netskope-Api-Token
          value: "${secrets.netskopeApiToken}"
    nextRequest:
      method: GET
      url: "https://${parameters.domain}/api/v2/events/data"
      headers:
        - name: Accept
          value: application/json
        - name: Netskope-Api-Token
          value: "${secrets.netskopeApiToken}"
    ?from=2025-08-20T11:45:00Z
    &to=2025-08-20T11:50:00Z
    page:
      pageSize: 50
    &_per_page=50