ClickHouse Core Settings
ClickHouse Core Settings are the core settings of a ClickHouse database and determine its behavior and performance. They cover how data is stored, query optimization strategies, network communication parameters, and more; for example, you can configure the data storage path, the data compression level, or the query cache size. Tuning these settings lets you optimize the database for your actual workload. ClickHouse also allows settings to be changed dynamically, so certain settings can be adjusted at runtime to follow a changing workload. In short, ClickHouse Core Settings are an essential tool for managing and optimizing a ClickHouse database.
network
enable_http_compression
http_zlib_compression_level
http_native_compression_disable_checksumming_on_decompress
http_max_uri_size
http_make_head_request
send_progress_in_http_headers
max_http_get_redirects
network_compression_method
network_zstd_compression_level
cancel_http_readonly_queries_on_client_close
max_network_bytes
max_network_bandwidth
max_network_bandwidth_for_user
max_network_bandwidth_for_all_users
http_connection_timeout
http_send_timeout
http_receive_timeout
http_max_single_read_retries
async_socket_for_remote
async_query_sending_for_remote
validate_tcp_client_information
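Most of the settings above can be supplied per request as URL parameters of the HTTP interface. A minimal sketch with Python's requests package, assuming a server on the default HTTP port 8123 (the sample query is arbitrary):

```python
import requests

resp = requests.get(
    "http://localhost:8123/",
    params={
        "query": "SELECT number FROM system.numbers LIMIT 10 FORMAT JSON",
        "enable_http_compression": 1,        # let the server compress the response body
        "http_zlib_compression_level": 6,    # compression level for that body
        "send_progress_in_http_headers": 1,  # stream X-ClickHouse-Progress headers
    },
    headers={"Accept-Encoding": "gzip"},     # compression only applies if the client asks for it
)
print(resp.headers.get("Content-Encoding"))  # 'gzip' once compression is in effect
print(resp.json()["rows"])                   # 10
```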
Schema Altering & Drop
alter_sync
alter_partition_verbose_result
max_partition_size_to_drop
max_table_size_to_drop
database_atomic_wait_for_drop_and_detach_synchronously
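alter_sync (formerly replication_alter_partitions_sync) controls how long an ALTER on a replicated table waits for the other replicas: 0 = don't wait, 1 = wait for the local replica, 2 = wait for all replicas. A hedged sketch using the clickhouse-driver package; the events table is hypothetical:

```python
from clickhouse_driver import Client

client = Client(host="localhost")

# Block until every replica has applied the schema change.
client.execute(
    "ALTER TABLE events ADD COLUMN IF NOT EXISTS tag String DEFAULT ''",
    settings={"alter_sync": 2},
)
```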
function/expression/command behaviors
JIT compilation
compile_expressions
min_count_to_compile_expression
compile_aggregate_expressions
min_count_to_compile_aggregate_expression
function_range_max_elements_in_block
table_function_remote_max_addresses
glob_expansion_max_elements
transform_null_in
cast_keep_nullable
aggregate_functions_null_for_empty
union_default_mode
check_query_single_value_result
regexp_max_matches_per_row
short_circuit_function_evaluation
max_hyperscan_regexp_length
max_hyperscan_regexp_total_length
splitby_max_substrings_includes_remaining_string
enable_extended_results_for_datetime_functions
date_time_overflow_behavior
json_value
function_json_value_return_type_allow_nullable
function_json_value_return_type_allow_complex
precise_float_parsing
print_pretty_type_names
validate_polygons
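Most of these are per-query behavior switches. For instance, cast_keep_nullable decides whether CAST preserves the Nullable wrapper of its argument; a small sketch assuming a local server and the clickhouse-driver package:

```python
from clickhouse_driver import Client

client = Client(host="localhost")

# With cast_keep_nullable = 1 the result type stays Nullable(Int32);
# with the default 0 it would be plain Int32.
print(client.execute(
    "SELECT toTypeName(CAST(toNullable(1) AS Int32))",
    settings={"cast_keep_nullable": 1},
))  # [('Nullable(Int32)',)]
```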
DML/DQL
SELECT query
LIMIT
limit
offset
prefer_column_name_to_alias
JOIN
join_default_strictness
join_algorithm
join_any_take_last_row
join_use_nulls
partial merge
partial_merge_join_optimizations
partial_merge_join_rows_in_right_blocks
join_on_disk_max_files_to_merge
any_join_distinct_right_table_keys
max_rows_in_set_to_optimize_join
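join_algorithm and join_use_nulls are the JOIN settings most commonly tuned per query. A hedged sketch; left_table and right_table are hypothetical:

```python
from clickhouse_driver import Client

client = Client(host="localhost")

rows = client.execute(
    """
    SELECT l.id, r.name
    FROM left_table AS l
    LEFT JOIN right_table AS r ON l.id = r.id
    """,
    settings={
        "join_algorithm": "partial_merge",  # sort-merge variant that can spill to disk
        "join_use_nulls": 1,                # unmatched right-side columns become NULL, not type defaults
    },
)
```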
GROUP BY
group_by_use_nulls
distributed_group_by_no_merge
Filter
additional_table_filters
additional_result_filter
Aggregation
extremes
totals_mode
totals_auto_threshold
count_distinct_implementation
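count_distinct_implementation chooses which uniq* aggregate function backs the generic countDistinct(). A sketch assuming a local server:

```python
from clickhouse_driver import Client

client = Client(host="localhost")

# countDistinct() is rewritten to the configured uniq* implementation.
rows = client.execute(
    "SELECT countDistinct(number % 10) FROM numbers(1000)",
    settings={"count_distinct_implementation": "uniqCombined"},
)
print(rows)  # [(10,)] - approximate implementations are exact at tiny cardinalities
```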
INSERT query
insert_null_as_default
max_insert_block_size
min_insert_block_size_rows
min_insert_block_size_bytes
max_insert_threads
insert_quorum
insert_quorum_timeout
insert_quorum_parallel
select_sequential_consistency
insert_deduplicate
asynchronous insert
async_insert
async_insert_threads
wait_for_async_insert
wait_for_async_insert_timeout
async_insert_max_data_size
async_insert_max_query_number
async_insert_busy_timeout_max_ms
async_insert_poll_timeout_ms
async_insert_use_adaptive_busy_timeout
async_insert_busy_timeout_min_ms
async_insert_busy_timeout_ms
async_insert_busy_timeout_increase_rate
async_insert_busy_timeout_decrease_rate
async_insert_stale_timeout_ms
async_insert_deduplicate
insert_deduplication_token
optimize_on_insert
allow_settings_after_format_in_insert
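Asynchronous inserts are easiest to demonstrate over the HTTP interface, where the settings ride along as URL parameters. A sketch; the events table and the row are placeholders:

```python
import requests

requests.post(
    "http://localhost:8123/",
    params={
        "query": "INSERT INTO events (ts, payload) FORMAT JSONEachRow",
        "async_insert": 1,                    # buffer small inserts server-side, flush them as one block
        "wait_for_async_insert": 1,           # return only after the buffered block has been written
        "async_insert_busy_timeout_ms": 200,  # flush at the latest 200 ms after the first queued insert
    },
    data='{"ts": "2024-02-19 00:00:00", "payload": "ping"}\n',
)
```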
UPDATE/MUTATION query
allow_nondeterministic_mutations
mutations_execute_nondeterministic_on_initiator
mutations_execute_subqueries_on_initiator
mutations_max_literal_size_to_replace
mutations_sync
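mutations_sync decides whether an ALTER ... UPDATE/DELETE returns immediately or waits for the mutation to finish: 0 = asynchronous, 1 = wait on the current server, 2 = wait on all replicas. A sketch; the table and predicate are hypothetical:

```python
from clickhouse_driver import Client

client = Client(host="localhost")

client.execute(
    "ALTER TABLE events UPDATE payload = upper(payload) WHERE ts < '2024-01-01'",
    settings={"mutations_sync": 2},  # block until the mutation has completed everywhere
)
```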
DELETE query
Parallelization
max_threads
max_insert_threads
max_concurrent_queries_for_user
max_concurrent_queries_for_all_users
allow_experimental_parallel_reading_from_replicas
parser behavior
input_format_parallel_parsing
output_format_parallel_formatting
min_chunk_bytes_for_parallel_parsing
lock_acquire_timeout
max_final_threads
parallel_view_processing
dictionary_use_async_executor
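max_threads is the main per-query parallelism knob. A sketch assuming a local server:

```python
from clickhouse_driver import Client

client = Client(host="localhost")

rows = client.execute(
    "SELECT count() FROM numbers_mt(100000000)",
    settings={"max_threads": 4},  # cap the reading/processing pipeline at 4 threads
)
```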
compression
max_compress_block_size
min_compress_block_size
zstd_window_log_max
enable_deflate_qpl_codec
enable_zstd_qat_codec
output_format_compression_level
output_format_compression_zstd_window_log
interoperation with other systems
Kafka
kafka_max_wait_ms
kafka_disable_num_consumers_limit
PostgreSQL
postgresql_connection_pool_size
postgresql_connection_pool_wait_timeout
postgresql_connection_pool_auto_close_connection
odbc
odbc_bridge_connection_pool_size
odbc_bridge_use_connection_pooling
MySQL
mysql_map_string_to_text_in_show_columns
mysql_map_fixed_string_to_text_in_show_columns
external_table_functions_use_nulls
file
rename_files_after_processing
iceberg_engine_ignore_schema_evolution
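These integration settings apply to the corresponding table functions and engines. A hedged sketch reading from PostgreSQL; all connection details are placeholders:

```python
from clickhouse_driver import Client

client = Client(host="localhost")

rows = client.execute(
    "SELECT * FROM postgresql('pg-host:5432', 'appdb', 'users', 'reader', 'secret') LIMIT 10",
    settings={
        "external_table_functions_use_nulls": 1,  # map external NULLs to Nullable columns, not defaults
        "postgresql_connection_pool_size": 4,     # size of the pool kept for PostgreSQL connections
    },
)
```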
materialized view / view
deduplicate_blocks_in_dependent_materialized_views
update_insert_deduplication_token_in_dependent_materialized_views
min_insert_block_size_rows_for_materialized_views
min_insert_block_size_bytes_for_materialized_views
live view
allow_experimental_live_view
live_view_heartbeat_interval
periodic_live_view_refresh
memory
memory_overcommit_ratio_denominator
memory_usage_overcommit_max_wait_microseconds
memory_overcommit_ratio_denominator_for_user
schema inference
schema_inference_use_cache_for_file
schema_inference_use_cache_for_s3
schema_inference_use_cache_for_url
schema_inference_use_cache_for_hdfs
schema_inference_cache_require_modification_time_for_url
use_structure_from_insertion_table_in_table_functions
schema_inference_mode
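Schema inference kicks in when reading files without an explicit structure, and its cache can be toggled per source. A sketch; the URL is a placeholder:

```python
from clickhouse_driver import Client

client = Client(host="localhost")

# Show the inferred column types; disable the cache so the schema is re-derived each time.
rows = client.execute(
    "DESCRIBE TABLE url('https://example.com/data.ndjson', 'JSONEachRow')",
    settings={"schema_inference_use_cache_for_url": 0},
)
```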
system behavior
shutdown_wait_unfinished_queries
shutdown_wait_unfinished
session_timezone
default_table_engine
os_thread_priority
default_temporary_table_engine
show_table_uuid_in_table_create_query_if_not_nil
Distribution
distributed_product_mode
prefer_global_in_and_join
fallback_to_stale_replicas_for_distributed_queries
max_replica_delay_for_distributed_queries
max_distributed_connections
distributed_connections_pool_size
max_distributed_depth
max_replicated_fetches_network_bandwidth_for_server
max_replicated_sends_network_bandwidth_for_server
connect_timeout_with_failover_ms
connect_timeout_with_failover_secure_ms
connection_pool_max_wait_ms
connections_with_failover_max_tries
prefer_localhost_replica
max_parallel_replicas
parallel_replicas_custom_key
parallel_replicas_custom_key_filter_type
hedged request
use_hedged_requests
hedged_connection_timeout
receive_data_timeout
allow_changing_replica_until_first_data_packet
Keeper
insert_keeper_max_retries
insert_keeper_retry_initial_backoff_ms
insert_keeper_retry_max_backoff_ms
sharding
skip_unavailable_shards
distributed_push_down_limit
optimize_skip_unused_shards_limit
optimize_skip_unused_shards
optimize_skip_unused_shards_rewrite_in
allow_nondeterministic_optimize_skip_unused_shards
optimize_skip_unused_shards_nesting
force_optimize_skip_unused_shards
force_optimize_skip_unused_shards_nesting
optimize_distributed_group_by_sharding_key
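optimize_skip_unused_shards prunes shards of a Distributed table when the WHERE clause constrains the sharding key. A sketch; events_distributed is a hypothetical Distributed table sharded by user_id:

```python
from clickhouse_driver import Client

client = Client(host="localhost")

rows = client.execute(
    "SELECT count() FROM events_distributed WHERE user_id = 42",
    settings={
        "optimize_skip_unused_shards": 1,        # prune shards using the sharding key in WHERE
        "force_optimize_skip_unused_shards": 1,  # error out instead of silently querying every shard
    },
)
```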
load-balancing
load_balancing
distributed_replica_error_half_life
distributed_replica_error_cap
distributed_replica_max_ignored_errors
data insertion
insert_shard_id
insert_distributed_sync
distributed_foreground_insert
parallel_distributed_insert_select
distributed_background_insert_split_batch_on_failure
distributed_background_insert_batch
distributed_background_insert_max_sleep_time_ms
distributed_background_insert_sleep_time_ms
use_compact_format_in_distributed_parts_names
Replicated*MergeTree
always_fetch_merged_part
execute_merges_on_single_replica_time_threshold
ReplicatedDatabase
allow_experimental_database_replicated
database_replicated_initial_query_timeout_sec
distributed_ddl_task_timeout
distributed_ddl_output_mode
replication_wait_for_inactive_replica_timeout
Query Processing
SQL preprocessing & optimization
max_query_size
max_parser_depth
optimize_functions_to_subcolumns
optimize_trivial_count_query
optimize_trivial_approximate_count_query
optimize_count_from_files
use_cache_for_count_from_files
optimize_use_projections
force_optimize_projection
force_optimize_projection_name
preferred_optimize_projection_name
query rewriting
optimize_syntax_fuse_functions
optimize_rewrite_aggregate_function_with_if
optimize_move_to_prewhere
optimize_move_to_prewhere_if_final
rewrite_count_distinct_if_with_count_distinct_implementation
where optimization
enable_optimize_predicate_expression
force_index_by_date
force_primary_key
use_skip_indexes
force_data_skipping_indices
ignore_data_skipping_indices
convert_query_to_cnf
enable_positional_arguments
enable_order_by_all
optimize_using_constraints
optimize_append_index
optimize_substitute_columns
describe_include_subcolumns
query plan optimization
query_plan_enable_optimizations
query_plan_max_optimizations_to_apply
query_plan_lift_up_array_join
query_plan_push_down_limit
query_plan_split_filter
query_plan_merge_expressions
query_plan_filter_push_down
query_plan_execute_functions_after_sorting
query_plan_reuse_storage_ordering_for_window_functions
query_plan_lift_up_union
query_plan_distinct_in_order
query_plan_read_in_order
query_plan_aggregation_in_order
query_plan_remove_redundant_sorting
query_plan_remove_redundant_distinct
execution pipeline
max_block_size
preferred_block_size_bytes
optimize_read_in_order
optimize_aggregation_in_order
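max_block_size and optimize_read_in_order shape the execution pipeline directly. A sketch; it assumes a hypothetical events table whose sorting key starts with ts:

```python
from clickhouse_driver import Client

client = Client(host="localhost")

rows = client.execute(
    "SELECT * FROM events ORDER BY ts LIMIT 100",
    settings={
        "max_block_size": 65536,      # target number of rows per block in the pipeline
        "optimize_read_in_order": 1,  # read parts in sorting-key order instead of re-sorting
    },
)
```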
query interaction
interactive_delay
idle_connection_timeout
connect_timeout, receive_timeout, send_timeout
handshake_timeout_ms
replace_running_query
replace_running_query_max_wait_ms
partial_result_on_first_cancel
CACHE
use_uncompressed_cache
use_query_cache
query_cache_nondeterministic_function_handling
query_cache_min_query_runs
query_cache_min_query_duration
query_cache_compress_entries
query_cache_squash_partial_results
query_cache_ttl
query_cache_share_between_users
query_cache_max_size_in_bytes
query_cache_max_entries
enable_reads_from_query_cache
enable_writes_to_query_cache
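The query cache is opt-in per query via use_query_cache; the other settings tune when and how long an entry is stored. A sketch with illustrative thresholds:

```python
from clickhouse_driver import Client

client = Client(host="localhost")

rows = client.execute(
    "SELECT count() FROM numbers(100000000)",
    settings={
        "use_query_cache": 1,             # read from and write to the query cache
        "query_cache_min_query_runs": 0,  # cache the result starting with the first run
        "query_cache_ttl": 300,           # keep the cached entry for five minutes
    },
)
```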
compatibility
final
allow_experimental_statistic
allow_statistic_optimize
table engines
Merge Tree
merge_tree_min_rows_for_concurrent_read
merge_tree_min_rows_for_concurrent_read_for_remote_filesystem
merge_tree_min_bytes_for_concurrent_read
merge_tree_min_bytes_for_concurrent_read_for_remote_filesystem
merge_tree_min_rows_for_seek
merge_tree_min_bytes_for_seek
merge_tree_coarse_index_granularity
merge_tree_max_rows_to_use_cache
merge_tree_max_bytes_to_use_cache
optimize
optimize_throw_if_noop
optimize_skip_merged_partitions
merge_selecting_sleep_ms
allow_nullable_key
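optimize_throw_if_noop makes a manual OPTIMIZE report when it had nothing to do instead of returning silently. A sketch; the events table is hypothetical:

```python
from clickhouse_driver import Client

client = Client(host="localhost")

client.execute(
    "OPTIMIZE TABLE events FINAL",
    settings={"optimize_throw_if_noop": 1},  # raise an error if no merge was scheduled
)
```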
s3
s3_truncate_on_insert
s3_create_new_file_on_insert
s3_skip_empty_files
s3_use_adaptive_timeouts
hdfs
hdfs_truncate_on_insert
hdfs_create_new_file_on_insert
hdfs_skip_empty_files
url
engine_url_skip_empty_files
enable_url_encoding
file
engine_file_empty_if_not_exists
engine_file_truncate_on_insert
engine_file_allow_create_multiple_files
engine_file_skip_empty_files
general I/O
fsync_metadata
temporary_files_codec
min_bytes_to_use_direct_io
storage_file_read_method
min_bytes_to_use_mmap_io
storage_metadata_write_full_object_key
logging
log_queries
log_queries_min_query_duration_ms
log_queries_min_type
log_query_threads
log_query_views
log_formatted_queries
log_comment
log_processors_profiles
profiling
query_profiler_real_time_period_ns
query_profiler_cpu_time_period_ns
memory_profiler_step
memory_profiler_sample_probability
trace_profile_events
allow_introspection_functions
system_events_show_zero_values
opentelemetry_start_trace_probability
log_queries_probability
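log_comment tags a query so it can be found again in system.query_log. A sketch; the comment text is arbitrary:

```python
from clickhouse_driver import Client

client = Client(host="localhost")

client.execute(
    "SELECT 1",
    settings={"log_queries": 1, "log_comment": "nightly-report"},
)

# Entries appear after the next query_log flush (7.5 s by default).
rows = client.execute(
    "SELECT query_duration_ms, query FROM system.query_log "
    "WHERE log_comment = 'nightly-report' AND type = 'QueryFinish'"
)
```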
Data Types
low-cardinality
low_cardinality_max_dictionary_size
low_cardinality_use_single_dictionary_for_part
low_cardinality_allow_in_native_format
allow_suspicious_low_cardinality_types
data_type_default_nullable
allow_experimental_variant_type
use_variant_as_common_type
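data_type_default_nullable makes columns Nullable unless they carry an explicit NULL / NOT NULL modifier. A sketch; the table is hypothetical, and id is pinned to NOT NULL so it can stay in ORDER BY:

```python
from clickhouse_driver import Client

client = Client(host="localhost")

client.execute(
    "CREATE TABLE t_demo (id UInt64 NOT NULL, name String) ENGINE = MergeTree ORDER BY id",
    settings={"data_type_default_nullable": 1},
)
print(client.execute(
    "SELECT name, type FROM system.columns WHERE table = 't_demo'"
))  # [('id', 'UInt64'), ('name', 'Nullable(String)')]
```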
others
poll_interval
stream_flush_interval_ms
stream_poll_timeout_ms
ttl_only_drop_parts
flatten_nested
asterisk_include_materialized_columns
asterisk_include_alias_columns
analyze_index_with_space_filling_curves