Data processing

Create ETL steps

You can configure an ETL (extract / transform / load) step graphically. This lets you load data from a source, select the data you need, filter it, aggregate it, compute new columns, and store the result in a data table of our internal data warehouse.
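To make this concrete, the sketch below shows what such a step amounts to in Pandas: extract data from a source, filter and aggregate it, compute a new column, and store the result. The file and column names (orders.csv, country, amount, quantity) are made up for the example.

import pandas as pd

# Extract: load data from a source (a CSV file here, purely for illustration)
orders = pd.read_csv("orders.csv")

# Transform: filter the rows, aggregate them, and compute a new column
france = orders[orders["country"] == "FR"]
summary = france.groupby("country", as_index=False).agg(
    total_amount=("amount", "sum"),
    total_quantity=("quantity", "sum"),
)
summary["average_item_price"] = summary["total_amount"] / summary["total_quantity"]

# Load: store the result in a destination table (a CSV file here, standing in
# for a table of the internal data warehouse)
summary.to_csv("orders_summary.csv", index=False)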

Execute your own Python scripts

Our platform integrates a Python scripting system. Write all your data processing scripts (import, export, advanced calculations, data cleaning, machine learning, predictive analyses...) and run them regularly with our scheduler, or trigger them through a REST API call. You avoid having to manage your own data processing infrastructure.
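As an illustration, triggering a script from another system comes down to a single authenticated HTTP call. The URL, header, and payload below are hypothetical placeholders, not the actual endpoint; use the trigger details of your own script.

import requests

# Hypothetical trigger URL and API key: replace them with the values of your own script.
SCRIPT_TRIGGER_URL = "https://api.example.com/scripts/<script-id>/trigger"
API_KEY = "YOUR_API_KEY"

response = requests.post(
    SCRIPT_TRIGGER_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"params": {"calculation_month": "2024-01"}},  # optional parameters, if the script expects any
)
response.raise_for_status()
print("Script triggered, status:", response.status_code)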

In a Python script, you can use the Serenytics package. It provides many functions to query a data source configured in the interface (with aggregation and filtering). You can then manipulate the result of this query in Python, typically with libraries such as Pandas or Scikit-learn. Finally, the Serenytics library lets you load your results into our internal data warehouse to visualize them.
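A typical script follows that pattern: query a configured source, rework the result with Pandas, then push it back to the internal data warehouse. The client and method names below (get_data_source_by_name, get_data, get_as_pandas_df, get_or_create_storage_data_source_by_name, reload_data_from_dataframe) are assumptions used to illustrate the pattern; refer to the Serenytics package documentation for the exact API.

import pandas as pd
import serenytics

# Note: the method names below are illustrative assumptions, not a documented API.
client = serenytics.Client()

# Query a data source configured in the interface (with aggregation and filtering)
source = client.get_data_source_by_name("crm_orders")   # hypothetical source name
df = source.get_data().get_as_pandas_df()               # assumed helper returning a DataFrame

# Manipulate the result with Pandas
df["order_month"] = pd.to_datetime(df["order_date"]).dt.to_period("M").astype(str)
monthly_revenue = df.groupby("order_month", as_index=False)["amount"].sum()

# Load the result back into the internal data warehouse for visualization
storage = client.get_or_create_storage_data_source_by_name("monthly_revenue")
storage.reload_data_from_dataframe(monthly_revenue)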

Create data flows

Our engine lets you create a flow that sequences several steps (ETL steps or Python scripts). With our scheduler, you can schedule this flow to run whenever you want.
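Conceptually, a flow is an ordered list of steps executed one after the other. The plain-Python sketch below mimics that behaviour; the step functions are placeholders, and the assumption that a failing step stops the rest of the flow is ours, not a documented guarantee.

# Each function stands in for an ETL step or a Python script of the flow.
def extract_orders():
    print("extracting orders...")

def compute_kpis():
    print("computing KPIs...")

def refresh_reporting_table():
    print("refreshing the reporting table...")

flow = [extract_orders, compute_kpis, refresh_reporting_table]

for step in flow:
    try:
        step()
    except Exception as error:
        # Assumed behaviour: if a step fails, the remaining steps are not executed.
        print(f"Flow stopped at {step.__name__}: {error}")
        break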

Create parametric ETL steps

Some formulas depend on a parameter. For example, the indicator "made a purchase in the last 12 months" depends on the month in which it is evaluated. To handle this, Serenytics lets you add parameters to your formulas (for example, a "calculation month" parameter) and run your ETL step several times, once for each parameter value. The results of each run are concatenated in a table in our internal data warehouse. In just a few clicks, you set up very advanced data processing. This allows you, for example, to build the history of the number of active clients in a customer database.
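The Pandas sketch below reproduces the idea: the same "active clients" aggregation is computed once per value of the "calculation month" parameter, and the results of each run are concatenated into a single history table. The file and column names (purchases.csv, client_id, purchase_date) and the range of months are made up for the example.

import pandas as pd

# Hypothetical purchase history: one row per purchase, with client_id and purchase_date
purchases = pd.read_csv("purchases.csv", parse_dates=["purchase_date"])

results = []
for month in pd.period_range("2023-01", "2023-12", freq="M"):
    # Parameter value for this run: the "calculation month"
    window_end = month.to_timestamp(how="end")
    window_start = window_end - pd.DateOffset(months=12)

    # "Made a purchase in the last 12 months", evaluated for this month
    recent = purchases[(purchases["purchase_date"] > window_start)
                       & (purchases["purchase_date"] <= window_end)]
    results.append({"calculation_month": str(month),
                    "active_clients": recent["client_id"].nunique()})

# Concatenate the result of each run into a single table: the history of active clients
active_clients_history = pd.DataFrame(results)
print(active_clients_history)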
