PostgreSQL or a protocol-compatible database (such as Amazon Redshift) can be used as a data source with the SQL-compatible database connector.
There are no limitations on the dataset size; however, your PostgreSQL server should be able to execute aggregate queries fast enough (within seconds; 2 minutes max). For large datasets you may pre-aggregate the data with a materialized view, apply filters on indexed columns, or use the PipelineDB extension for real-time aggregations.
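As an illustration of the materialized-view approach, a minimal sketch (the `sales` table and its `region`, `sold_at`, and `amount` columns are assumptions for the example):

```sql
-- Pre-aggregate raw rows so that report queries scan far fewer rows.
CREATE MATERIALIZED VIEW sales_by_region AS
SELECT region,
       date_trunc('day', sold_at) AS sale_date,
       SUM(amount) AS total_amount,
       COUNT(*)    AS orders_count
FROM sales
GROUP BY region, date_trunc('day', sold_at);

-- Refresh periodically (for example, from a cron job) to pick up new data.
REFRESH MATERIALIZED VIEW sales_by_region;
```

Pointing the connector at `sales_by_region` instead of `sales` keeps aggregate queries fast even when the underlying table is large.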
Connection String should be a valid connection string for the Npgsql driver.
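For example, a minimal connection string (host, port, database name, and credentials are placeholder values):

```
Host=db.example.com;Port=5432;Database=mydb;User ID=report_user;Password=secret
```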
|Parameter|Description|
|---|---|
|Host|Specifies the host name of the machine on which PostgreSQL is running. Do not use "localhost" or a LAN server name; use only a public IP address or the server's domain name.|
|Database|The PostgreSQL database to connect to.|
|User ID|The username to connect with.|
|Password|The password to connect with.|
|Trust Server Certificate|Specifies whether to trust the server certificate without validating it (needed if your server uses a self-signed certificate).|
|Server Compatibility Mode|Specifies a compatibility mode for special PostgreSQL server types; for example, use `Redshift` when connecting to Amazon Redshift.|
Add `Trust Server Certificate=True` to the connection string to disable SSL certificate validation (needed if your server uses a self-signed certificate).
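A connection string with this option might look like the following (all other values are placeholders):

```
Host=db.example.com;Database=mydb;User ID=report_user;Password=secret;Trust Server Certificate=True
```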
If you get an error like `operator does not exist: text > numeric` and your filter is `some_column < 5`, this means that `some_column` is of TEXT type. To fix that, go to the cube configuration form, find the dimension with Name "some_column", and add a Parameter that defines an SQL expression with a cast:
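A minimal sketch of such an expression, assuming the column holds numeric text (the target type `numeric` is an assumption; pick whatever type matches your data):

```sql
CAST(some_column AS numeric)
```

With this cast in place, a numeric filter like `some_column < 5` compares numbers instead of text, so the `operator does not exist` error goes away.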