SQL query context
Apache Druid supports two query languages: Druid SQL and native queries. This document describes the SQL language.
Druid supports query context parameters that affect SQL query planning. See Query context for general query context parameters that apply to all query types.
SQL query context parameters
The following table lists query context parameters you can use to configure Druid SQL planning.
You can override a parameter's default value by setting a runtime property in the format druid.query.default.context.{query_context_key}.
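For example, a hypothetical override in a service's runtime properties might change the default of useApproximateTopN (described in the table below) for all SQL queries; the choice of parameter and value here is purely illustrative:
# Illustrative default override: prefer exact GroupBy planning over approximate TopN.
druid.query.default.context.useApproximateTopN=false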
For more information, see Overriding default query context values.
| Parameter | Description | Default value | 
|---|---|---|
| sqlQueryId | SQL query ID. For the HTTP client, Druid returns it in the X-Druid-SQL-Query-Id header. To specify a SQL query ID, use sqlQueryId instead of queryId. Setting queryId for a SQL request has no effect; all native queries underlying SQL use an auto-generated queryId. | auto-generated | 
| sqlTimeZone | Time zone for a connection. For example, "America/Los_Angeles" or an offset like "-08:00". This parameter affects how time functions and timestamp literals behave. | UTC | 
| sqlStringifyArrays | If true, Druid serializes result columns with array values as JSON strings in the response instead of arrays. | true, except for JDBC connections, where it's always false | 
| useApproximateCountDistinct | Whether to use an approximate cardinality algorithm for COUNT(DISTINCT foo). | true | 
| useGroupingSetForExactDistinct | Whether to use grouping sets to execute queries with multiple exact distinct aggregations. | false | 
| useApproximateTopN | If true, Druid converts SQL queries to approximate TopN queries wherever possible. If false, Druid uses exact GroupBy queries instead. | true | 
| enableTimeBoundaryPlanning | If true, Druid converts SQL queries to time boundary queries wherever possible. Time boundary queries are very efficient for min-max calculations on the __time column in a datasource. See the example following this table. | false | 
| useNativeQueryExplain | If true, EXPLAIN PLAN FOR returns the explain plan as a JSON representation of the equivalent native query. If false, it returns the original explain plan generated by Calcite. This property is provided for backwards compatibility. We don't recommend setting this parameter unless your application depends on the older behavior. | true | 
| sqlFinalizeOuterSketches | If false (default behavior in Druid 25.0.0 and later), DS_HLL, DS_THETA, and DS_QUANTILES_SKETCH return sketches in query results. If true (default behavior in Druid 24.0.1 and earlier), Druid finalizes sketches from these functions when they appear in query results. This property is provided for backwards compatibility with behavior in Druid 24.0.1 and earlier. We don't recommend setting this parameter unless your application uses Druid 24.0.1 or earlier. Instead, use a function that doesn't return a sketch, such as APPROX_COUNT_DISTINCT_DS_HLL, APPROX_COUNT_DISTINCT_DS_THETA, APPROX_QUANTILE_DS, DS_THETA_ESTIMATE, or DS_GET_QUANTILE. | false | 
| sqlUseBoundAndSelectors | If false (default behavior in Druid 27.0.0 and later), the SQL planner uses equality, null, and range filters instead of selector and bound filters. For filtering ARRAY typed values, sqlUseBoundAndSelectors must be false. | false | 
| sqlReverseLookup | Whether to consider the reverse-lookup rewrite of the LOOKUP function during SQL planning. Druid reverses calls to LOOKUP only when the number of matching keys is lower than both inSubQueryThreshold and sqlReverseLookupThreshold. | true | 
| sqlReverseLookupThreshold | Maximum size of the IN filter to create when applying a reverse-lookup rewrite. If a LOOKUP call matches more keys than the specified threshold, it remains unchanged. If inSubQueryThreshold is lower than sqlReverseLookupThreshold, Druid uses the inSubQueryThreshold threshold instead. | 10000 | 
| sqlPullUpLookup | Whether to consider the pull-up rewrite of the LOOKUP function during SQL planning. | true | 
| enableJoinLeftTableScanDirect | This parameter applies to queries with joins. By default, when the left child is a simple scan with a filter, Druid runs the scan as a query, then joins it with the right child on the Broker. Setting this parameter to true overrides that behavior and pushes the join to the data servers instead. Even if a query doesn't explicitly include a join, this parameter may still apply since the SQL planner can translate the query into a join internally. | false | 
| maxNumericInFilters | Maximum number of numeric values that Druid can compare for a string-typed dimension when the entire SQL WHERE clause of a query translates only to an OR of bound filters. By default, Druid doesn't restrict the number of numeric bound filters on string columns, although this situation may block other queries from running. Set this parameter to a smaller value to prevent Druid from running queries that have prohibitively long segment processing times. The optimal limit requires some trial and error; we recommend starting with 100. If a query exceeds the maxNumericInFilters limit, rewrite it to use strings in the WHERE clause instead of numbers. For example, WHERE someString IN ('123', '456'). This value can't exceed the system configuration druid.sql.planner.maxNumericInFilters. If druid.sql.planner.maxNumericInFilters isn't set explicitly, Druid ignores this value. | -1 | 
| inFunctionThreshold | At or beyond this threshold number of values, Druid converts SQL IN to SCALAR_IN_ARRAY. A threshold of 0 forces this conversion in all cases. A threshold of Integer.MAX_VALUE disables this conversion. The converted function is eligible for fewer planning-time optimizations, which speeds up planning but may prevent some optimizations from being applied. | 100 | 
| inFunctionExprThreshold | At or beyond this threshold number of values, SQL IN is eligible for execution using the native function scalar_in_array rather than as an || of == expressions, even if the number of values is below inFunctionThreshold. This property only affects translation of SQL IN to a native expression. It doesn't affect translation of SQL IN to a native filter. This property is provided for backwards compatibility and may be removed in a future release. | 2 | 
| inSubQueryThreshold | At or beyond this threshold number of values, Druid converts SQL IN to a JOIN on an inline table. inFunctionThreshold takes priority over this setting. A threshold of 0 forces usage of an inline table in all cases where the size of the SQL IN is larger than inFunctionThreshold. A threshold of 2147483647 disables the rewrite of SQL IN to JOIN. | 2147483647 | 
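As an illustration of enableTimeBoundaryPlanning, a query like the following is a candidate for time boundary planning when the parameter is set to true in the query context (see the next section for how to set context parameters), because it only computes the minimum and maximum of __time. The datasource name data_source and the column aliases are placeholders:
SELECT MIN(__time) AS earliest_event, MAX(__time) AS latest_event
FROM data_source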
Set the query context
You can configure query context parameters in the context object of the JSON API or as a JDBC connection properties object.
The following example shows how to set a query context parameter using the JSON API:
{
  "query" : "SELECT COUNT(*) FROM data_source WHERE foo = 'bar' AND __time > TIMESTAMP '2000-01-01 00:00:00'",
  "context" : {
    "sqlTimeZone" : "America/Los_Angeles"
  }
}
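For instance, assuming the request above is saved in a file named query.json and a Broker is listening on localhost:8082 (the same address the JDBC example below uses), you could submit it to the SQL endpoint with a command along these lines:
curl -X POST -H 'Content-Type: application/json' -d @query.json http://localhost:8082/druid/v2/sql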
The following example shows how to set query context parameters using JDBC:
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

String url = "jdbc:avatica:remote:url=http://localhost:8082/druid/v2/sql/avatica/";
// Set any query context parameters you need here.
Properties connectionProperties = new Properties();
connectionProperties.setProperty("sqlTimeZone", "America/Los_Angeles");
connectionProperties.setProperty("useCache", "false");
try (Connection connection = DriverManager.getConnection(url, connectionProperties)) {
  // create and execute statements, process result sets, etc
}
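Inside the try block, you create and execute statements as with any JDBC connection. The following is a minimal sketch of what might replace the placeholder comment above; the data_source datasource is illustrative, and it additionally requires java.sql.Statement and java.sql.ResultSet imports:
// Run a simple aggregation over the Avatica connection and print the result.
try (Statement statement = connection.createStatement();
     ResultSet resultSet = statement.executeQuery("SELECT COUNT(*) AS cnt FROM data_source")) {
  while (resultSet.next()) {
    System.out.println("Row count: " + resultSet.getLong("cnt"));
  }
}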