This document goes over:
- What tools are available to understand and measure query performance
- Importance of page cache
- General tips and tricks for performant queries
Tooling
clickhouse-client
`clickhouse-client` is a command-line application for running queries against ClickHouse.
When executing queries, it details progress, execution time, how many rows and gigabytes of data were processed, and how much CPU was used.
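For example, a one-off query from the shell (the query itself is just an illustration):
```bash
# --progress shows a live progress bar; when the query finishes, the client
# reports elapsed time plus rows/bytes processed and throughput.
clickhouse-client --progress --query "SELECT count() FROM numbers(100000000)"
```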
system.query_log
ClickHouse saves all queries it runs into the `system.query_log` table.
It includes information on:
- What query was run and when
- How long it took to execute
- How many resources it consumed: memory, rows/bytes read
- In case of errors, exception information
Some tips for querying `query_log`:
- For distributed queries, filter by `is_initial_query` to disambiguate distributed queries, i.e. `is_initial_query = 1` denotes the query sent to the coordinator node, whereas `is_initial_query = 0` denotes internally-generated queries sent to gather data from other shards
- Use `clusterAllReplicas('<cluster_name>', system.query_log)` to query all nodes
- Filter on `type = 'QueryFinish'` if you only want data on queries that completed successfully. Forgetting to do so can skew averages, since the results will include `QueryStart` events, which have columns such as `query_duration_ms` set to 0.
At PostHog, we also often add metadata to the query as a comment to make it easier to analyze later.
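A hypothetical sketch of this pattern (the comment keys here are made up for illustration):
```sql
-- The comment travels with the query text into system.query_log.
SELECT count()
FROM events
WHERE team_id = 2
/* user_id:1,insight_id:42,kind:dashboard_refresh */;

-- Later, analyze all queries tagged with that metadata:
SELECT query_duration_ms, read_rows, query
FROM system.query_log
WHERE type = 'QueryFinish'
  AND query LIKE '%kind:dashboard_refresh%'
ORDER BY query_duration_ms DESC
```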
An example query to get recent slow queries:
```sql
SELECT
    query_duration_ms,
    query,
    event_time,
    read_rows,
    formatReadableSize(read_bytes) AS read_size,
    result_rows,
    formatReadableSize(result_bytes) AS result_size,
    formatReadableSize(memory_usage) AS memory,
    columns,
    query_id
FROM system.query_log
WHERE query NOT LIKE '%query_log%'
  AND type = 'QueryFinish'
  AND event_time > now() - INTERVAL 3 DAY
  AND query_duration_ms > 30000
ORDER BY query_duration_ms DESC
LIMIT 10
```
Note that this table is not distributed: in a cluster setting, you might need to run the query against each node separately or do ad-hoc distributed queries.
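For example, the same slow-query report can be run across all nodes at once (the cluster name here is illustrative):
```sql
-- clusterAllReplicas fans the query out to every replica in the cluster.
SELECT hostName() AS host, query_duration_ms, query_id
FROM clusterAllReplicas('posthog', system.query_log)
WHERE type = 'QueryFinish'
  AND event_time > now() - INTERVAL 3 DAY
  AND query_duration_ms > 30000
ORDER BY query_duration_ms DESC
LIMIT 10
```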
EXPLAIN
Previous pages in this manual showed various examples of using the ClickHouse `EXPLAIN` statement to your advantage.
Various forms of explain can detail:
- If and how much data ClickHouse was able to avoid processing thanks to schema setup
- If and how ClickHouse "optimizes" the query by moving columns to `PREWHERE`
- How the query is planned to be executed
Read more about `EXPLAIN` in ClickHouse's EXPLAIN Statement docs.
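For instance (the table and filter are illustrative), `EXPLAIN indexes = 1` shows how many parts and granules the primary key and partition pruning let ClickHouse skip, and `EXPLAIN SYNTAX` can show the rewritten query, including columns moved to `PREWHERE`:
```sql
-- How much data did schema setup let ClickHouse avoid reading?
EXPLAIN indexes = 1
SELECT count()
FROM events
WHERE team_id = 2 AND timestamp > now() - INTERVAL 7 DAY;

-- The query after rewriting, including automatic PREWHERE moves.
EXPLAIN SYNTAX
SELECT count()
FROM events
WHERE team_id = 2 AND timestamp > now() - INTERVAL 7 DAY;
```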
Flame graphs
For CPU-bound calculations, flamegraphs can help visualize what ClickHouse worked on during query execution.
We've built flamegraph support into PostHog. You can find tools to generate flamegraphs for queries under PostHog instance settings.
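Under the hood, the raw data comes from ClickHouse's `system.trace_log`. A minimal sketch, assuming query profiling is enabled (`query_profiler_cpu_time_period_ns > 0`) and introspection functions are allowed:
```sql
SET allow_introspection_functions = 1;

-- Collapse CPU samples for one query into folded-stack lines,
-- the input format expected by flamegraph.pl and similar tools.
SELECT
    arrayStringConcat(
        arrayReverse(arrayMap(x -> demangle(addressToSymbol(x)), trace)),
        ';'
    ) AS stack,
    count() AS samples
FROM system.trace_log
WHERE trace_type = 'CPU'
  AND query_id = '<query_id from system.query_log>'
GROUP BY trace
ORDER BY samples DESC
```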
Importance of the page cache
When running queries, you might encounter an odd artifact: the first time you run a query, it's really slow but it speeds up significantly when run again.
This behavior is due to the Linux page cache. In broad terms, the operating system caches recently read files into memory, speeding up subsequent reads of the same data.
Since most ClickHouse queries depend on fast I/O to execute quickly, this can have a significant effect on query performance. It is one reason why at PostHog our ClickHouse nodes have a lot of memory available.
Effect on benchmarking
This behavior can be a problem for profiling: users constructing new queries might not hit the page cache and so get a worse experience than your benchmarks suggest.
This means it's often important to wipe the page cache on ClickHouse nodes when benchmarking queries. This can be achieved with the following command on a ClickHouse node:
```bash
sudo sh -c "/usr/bin/echo 3 > /proc/sys/vm/drop_caches"
```
Note that the above only drops the cache on the given node; distributed queries might still be affected by the page cache on nodes in the other shards.
For completely clean benchmarking, you might also want to drop ClickHouse's internal mark cache.
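ClickHouse has `SYSTEM` statements for this; run them on each node involved in the query:
```sql
-- Clear ClickHouse's own in-memory caches, which sit in front of the page cache.
SYSTEM DROP MARK CACHE;
-- Only relevant if use_uncompressed_cache is enabled for your queries.
SYSTEM DROP UNCOMPRESSED CACHE;
```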
Tips for achieving well-performing queries
Previous pages in the ClickHouse manual have highlighted the importance of setting up the correct schema and detailed how queries work in a distributed setting.
This section highlights some general rules of thumb that can help speed up queries:
- Make sure the query is supported by the table's ORDER BY key. This is frequently the single biggest factor in query performance.
- Avoid JOINs and sub-queries whenever possible. Denormalizing data can help here.
- Partition data so old partitions can be skipped.
- Push down any and all WHERE conditions into sub-queries as deep as possible, as ClickHouse won't do it for you.
- Avoid reading columns that are not needed for the query.
- Test if moving small column filters to `PREWHERE` helps performance.
- Avoid reading large columns in `PREWHERE`.
- Compress columns with appropriate codecs.
- Make use of the extensive catalogue of aggregation functions available in ClickHouse.
- Looking up single rows is frequently slow.
- ORDER BY and LIMIT/OFFSET can be slow in sharded settings.
- Cheat: preaggregate data or leverage materialized columns to pull out data from large columns.
- Leverage query settings in special cases.
- Benchmark different join algorithms on queries that use JOINs (see the sketch after this list). From our testing, `parallel_hash` can help speed up most types of queries with JOINs at the expense of a bit more memory and CPU usage. `grace_hash` is also a promising new algorithm introduced in ClickHouse 22.12.
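A minimal sketch of such a benchmark; the tables and join key here are illustrative:
```sql
-- Run the same query under each algorithm, then compare duration and
-- memory_usage for the two query_ids in system.query_log.
SELECT count()
FROM events AS e
INNER JOIN person_distinct_id AS p ON e.distinct_id = p.distinct_id
SETTINGS join_algorithm = 'parallel_hash';

-- grace_hash spills to disk in buckets, trading speed for bounded memory.
SELECT count()
FROM events AS e
INNER JOIN person_distinct_id AS p ON e.distinct_id = p.distinct_id
SETTINGS join_algorithm = 'grace_hash';
```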
Also, always do your benchmarking in a realistic setting: on large datasets and on powerful machines.
Next in the ClickHouse manual: Operations