| Type | Restriction Item | Description |
| --- | --- | --- |
| Index | Number of Fields | A maximum of 300 fields (including metadata fields) can be added to the key-value index of a single log topic. |
| | Field Name | All English letters, digits, and symbols are allowed except *, \, ", and the comma. A field name cannot start with _, with the exception of the __CONTENT__ field. Parent and child JSON fields, such as a and a.b, cannot be indexed simultaneously. |
| | Field Hierarchy | When key-value indexing is configured for multi-level JSON, the key hierarchy cannot exceed 10 levels, for example, a.b.c.d.e.f.g.h.j.k. |
| | Delimiter | Only English symbols, the characters \n, \t, and \r, and the escape character \ are supported. |
| | Field Length | Search limitation: only the first 1 MB of each field's content is searchable; content beyond this limit cannot be searched. Statistical limitation: after statistics are enabled, only the first 32,766 bytes of each field are used in SQL operations; data beyond this limit cannot be processed by SQL. Fields that exceed these limits are still fully stored and can be viewed or downloaded, but the excess portion cannot be searched or aggregated. |
| | Token Length | After tokenization, only the first 10,000 characters of a single token participate in the search. The portion exceeding this limit cannot be searched, but the logs are still fully stored. |
| | Numeric Field Precision and Range | Fields of the long type support values from -1E15 to 1E15; values outside this range may lose precision or become unsearchable. Fields of the double type support values from -1.79E+308 to +1.79E+308; if the floating-point encoding exceeds 64 bits, precision is lost. Suggestions for indexing ultra-long numeric fields: if you do not need to search the field by comparing numeric ranges, store it as the text type; if you do need range comparisons, store it as the double type, which may cause some precision loss. |
| | Activation Mechanism | Index configuration applies only to newly written logs; after index rules are edited, existing data is not updated. To index existing data, you need to rebuild the index. |
| | Modifying Index Configuration | When index configurations are created, modified, or deleted, a single user can have a maximum of 10 concurrent tasks in progress. Beyond this limit, the user must wait for previous tasks to complete. A single task typically takes no more than 1 minute. |
| | Reindexing | Only one reindexing task can run per log topic at a time. A single log topic can keep up to 10 reindexing task records at the same time; delete records that are no longer needed before creating a new task. For logs within the same time range, indexes can be rebuilt only once; delete the previous task record before rebuilding again. The log write traffic in the selected time range must not exceed 5 TB. The reindexing time range is based on the log timestamp: if the deviation between the log upload time and the reindexing time range exceeds 1 hour, the log is not reindexed and becomes unsearchable. For example, a log with a timestamp of 02:00 that is uploaded at 16:00 is not processed if you reindex logs for the period from 00:00 to 12:00. Any newly reported log whose timestamp falls within an already reindexed time range is not indexed and remains unsearchable. |
| Query | Statement Length | A search and analysis statement supports a maximum of 12,000 characters. |
| | Query Concurrency | A single log topic supports 15 concurrent queries, including search and analysis. |
| | Fuzzy Search | Prefix fuzzy search is not supported. For example, you cannot search for error by using *rror. |
| | Phrase Search | In phrase search, a wildcard can match a maximum of 128 qualified tokens, and all logs containing those 128 tokens are returned. The more precise the specified token, the more precise the query result. |
| | Logical Group Nesting Depth | When parentheses are used to group search conditions under CQL syntax rules, a maximum of 10 levels of nesting is allowed; Lucene syntax rules have no such limitation. For example, (level:ERROR AND pid:1234) AND service:test is a 2-level nested statement and can be searched normally, whereas the following 11-level nested statement returns an error when searched: status:"499" AND ("0.000" AND (request_length:"528" AND ("https" AND (url:"/api" AND (version:"HTTP/1.1" AND ("2021" AND ("0" AND (upstream_addr:"169.254.128.14" AND (method:"GET" AND (remote_addr:"114.86.92.100")))))))))). |
| | Memory Usage (Analysis) | The server memory used by each statistical analysis operation must not exceed 3 GB. This limit is typically hit when you use GROUP BY, DISTINCT(), or count(DISTINCT()) and the aggregated field has too many distinct values after deduplication. To address this, group by a field with fewer distinct values, or replace count(DISTINCT()) with approx_distinct(). |
| | Query Result | When the query results are raw logs, a maximum of 1,000 raw logs are returned at a time. |
| | | When the query results are statistical analysis results, 100 results are returned at a time by default. With the SQL LIMIT syntax, a maximum of 1 million results can be returned at a time. |
| | | The maximum size of a returned query result packet is 49 MB. When using the API, you can enable gzip compression (header Accept-Encoding:gzip). |
| | Timeout Period | The timeout period for a single query is 55 seconds, including search and analysis. |
| | Query Latency | The latency from log submission to availability for search and analysis is less than 1 minute. |
| Download | Log Count | A maximum of 50 million log entries can be downloaded in a single operation. |
| | Task Quantity | A single log topic can have up to 2 tasks in the File Generating state; all other incomplete tasks remain in the Pending state and are queued for execution. A single log topic can have up to 1,000 tasks at the same time, including completed tasks in the File Generated state. |
| | File Retention Duration | Generated log files are retained for only 3 days. |
| Related External Data | Quantity Limit | A single log topic can be associated with a maximum of 20 external data sources. |
| | Query Timeout | When an external database is queried, the timeout period is 50 seconds. |
| | MySQL Version | Compatible with MySQL 5.7, 8.0, and later versions. MySQL 5.6 has not undergone full compatibility testing; verify in practice whether your SQL statements execute normally. |
| | CSV File Size | The file size cannot exceed 50 MB, and compression is not supported. |
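The double-type precision limit in the table comes down to 64-bit IEEE 754 encoding: a double represents integers exactly only up to 2^53, so storing an over-long numeric field as double rounds values beyond that point. A minimal Python illustration of the effect:

```python
# A 64-bit double represents every integer exactly up to 2**53;
# beyond that, adjacent integers collapse onto the same float value.
EXACT_LIMIT = 2 ** 53  # 9,007,199,254,740,992

a = float(EXACT_LIMIT)      # representable exactly
b = float(EXACT_LIMIT + 1)  # rounds to the nearest representable double

print(a == b)  # True: the two distinct integers became the same double

# Values within the long-type range (-1E15 to 1E15) are well inside
# the exact-integer range of a double, so no precision is lost there.
print(float(10 ** 15) == 10 ** 15)  # True
```

This is why the table suggests the text type for ultra-long numeric fields that never need range comparisons: text preserves every digit, while double trades exactness for comparability.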
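Because a single call returns at most 1,000 raw logs, retrieving a larger result set means paging on the client side. A sketch of that loop, where `fetch_page` stands in for whatever search call your SDK actually exposes (the name and `(offset, limit)` signature are assumptions for illustration):

```python
from typing import Callable, Iterator

PAGE_SIZE = 1_000  # per-call cap on returned raw logs, from the table above

def iter_all_logs(
    fetch_page: Callable[[int, int], list],
    max_logs: int = 50_000,
) -> Iterator[dict]:
    """Page through raw-log results.

    fetch_page(offset, limit) -> list of log dicts (hypothetical SDK call).
    Stops on an empty or short page, or once max_logs logs have been yielded.
    """
    offset = 0
    while offset < max_logs:
        page = fetch_page(offset, min(PAGE_SIZE, max_logs - offset))
        if not page:
            break
        yield from page
        if len(page) < PAGE_SIZE:
            break  # short page means no further results
        offset += len(page)
```

For example, with a backend holding 2,500 matching logs, `iter_all_logs` issues three calls (1,000 + 1,000 + 500) and stops after the short page.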
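Given the 49 MB cap on a result packet, enabling gzip on API queries is usually worthwhile. A minimal sketch of setting the header and decompressing the response, assuming a hypothetical endpoint, request body shape, and auth token (substitute your service's real API):

```python
import gzip
import json
import urllib.request

def build_search_request(endpoint: str, statement: str, token: str) -> urllib.request.Request:
    """Build a query request that asks the server for a gzip-compressed response."""
    body = json.dumps({"Query": statement, "Limit": 1000}).encode("utf-8")
    return urllib.request.Request(
        endpoint,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Accept-Encoding": "gzip",  # request a gzip-compressed response body
            "Authorization": token,     # hypothetical auth header
        },
        method="POST",
    )

def read_response(raw: bytes, content_encoding: str) -> dict:
    """Decompress the body if the server honored Accept-Encoding: gzip."""
    if content_encoding == "gzip":
        raw = gzip.decompress(raw)
    return json.loads(raw.decode("utf-8"))
```

Note that the other query limits still apply under compression: the statement must stay under 12,000 characters, and a single call returns at most 1,000 raw logs.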
| Restriction Item | Description |
| --- | --- |
| Query Concurrency | A single metric topic supports 15 concurrent queries. |
| Query Data Volume | A single query involves no more than 200,000 time series, and a single time series in the query results contains a maximum of 11,000 data points. |
| Timeout Period | The timeout period for a single query is 55 seconds, including search and analysis. |
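Since both log topics and metric topics cap concurrency at 15 queries, a client that fans out many queries should throttle itself rather than rely on server-side rejections. A minimal sketch using a bounded semaphore; only the limit value comes from the tables above, and `execute` is a stand-in for the real search call:

```python
import threading
from concurrent.futures import ThreadPoolExecutor

MAX_CONCURRENT_QUERIES = 15  # per-topic concurrency cap from the tables above
_slots = threading.BoundedSemaphore(MAX_CONCURRENT_QUERIES)

def run_query(statement: str, execute=lambda s: s):
    """Run one query while holding a concurrency slot.

    `execute` is a placeholder for the actual search/analysis call.
    """
    with _slots:
        return execute(statement)

def run_all(statements, execute=lambda s: s):
    """Fan out many statements without exceeding the per-topic cap."""
    with ThreadPoolExecutor(max_workers=MAX_CONCURRENT_QUERIES) as pool:
        return list(pool.map(lambda s: run_query(s, execute), statements))
```

Capping the thread pool at the same value as the semaphore keeps queued statements waiting client-side instead of piling up as concurrent in-flight requests against the topic.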