Querying with N1QL
Parallel data management for complex queries over many records, using a familiar SQL-like syntax.
The N1QL Query Language provides a familiar, SQL-like experience for querying documents stored in Couchbase. You can read up on the language in our reference guide, but you probably just want to dive into a practical example.
Below, we fill in some of the gaps between reference and rolling-up-your-sleeves practicality, with discussion of a few areas of the Query Service where more background knowledge will help you to better program your application.
Prepared Statements for Query Optimization
When a N1QL query string is sent to the server, the server will inspect the string and parse it, planning which indexes to query. Once this is done, it generates a query plan (see the N1QL reference, which gives more information on how to optimize queries using prepared statements). The computation for the plan adds some additional processing time and overhead for the query.
Often-used queries can be prepared so that their plan is generated only once.
Subsequent queries using the same query string will then reuse the pre-generated plan, saving the overhead of planning the query each time.
This is done for queries from the SDK by setting the adhoc query option to false.
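As a minimal sketch in the same SDK style used elsewhere on this page (the AdHoc option on QueryRequest is assumed here), a prepared, often-used query might be issued like this:

// Sketch: disable adhoc so the plan is prepared once and then reused.
var query = new QueryRequest("SELECT name, email FROM `users` WHERE email = $1")
    .AddPositionalParameter("monty@python.org")
    .AdHoc(false); // false = prepare the statement and reuse its cached plan

var result = bucket.Query<dynamic>(query);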
For Couchbase Server 6.0 and earlier, the plan is cached by the SDK (up to a limit of 5000), as well as the Query Service. On Couchbase Server 6.5 and newer, the plan is stored by the Query Service — up to an adjustable limit of 16 384 plans per Query node.
For Couchbase Server 6.0 and earlier, the generated plan is not influenced by placeholders.
Thus parameterized queries are considered the same query for caching and planning purposes, even if the supplied parameters are different.
With Couchbase Server 6.5 and newer, if a statement has placeholders, and a placeholder is supplied, the Query Service will generate specially optimized plans.
Therefore, if you are supplying the placeholder each time, adhoc = true will actually return a better-optimized plan (at the price of generating a fresh plan for each query).
If your queries are highly dynamic, we recommend using parameterized queries if possible (especially when prepared statements are not used). Parameterized queries are more cache efficient and will allow for better performance.
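As an illustration, here is a sketch of the same statement issued with positional and with named placeholders (AddNamedParameter is an assumption here; AddPositionalParameter appears in the examples further down):

// Positional placeholders ($1, $2, ...)
var byEmail = new QueryRequest("SELECT name FROM `users` WHERE email = $1")
    .AddPositionalParameter("monty@python.org");

// Named placeholders ($name)
var byName = new QueryRequest("SELECT email FROM `users` WHERE name = $name")
    .AddNamedParameter("name", "Monty Python");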
For LCB, adhoc is set to false by setting lcb_cmdquery_adhoc to zero (false).
Do not set adhoc to false for every query against Server 6.0 and earlier, since only a finite number of query plans (currently 5000) can be stored in the SDK.
lcb_cmdquery_adhoc(cmd, 0);
Indexes
The Couchbase query service makes use of indexes in order to do its work. Indexes replicate subsets of documents from data nodes over to index nodes, allowing specific data (for example, specific document properties) to be retrieved quickly, and to distribute load away from data nodes in MDS topologies.
In order to make a bucket queryable, it must have at least one index defined.
You can define a primary index on a bucket.
When a primary index is defined you can issue non-covered (see below) queries on the bucket as well.
This includes using the META() function in queries.
CREATE PRIMARY INDEX ON `users`
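With only the primary index in place you can already run arbitrary, non-covered queries; as a sketch, a query that also pulls in each document's key via META might look like this:

// Non-covered query: the primary index locates the documents, and their
// contents (plus the key, via META) are fetched from the Data Service.
var query = new QueryRequest("SELECT META(u).id, u.* FROM `users` u LIMIT 10");
var result = bucket.Query<dynamic>(query);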
You can also define indexes over given document fields and then use those fields in the query:
CREATE INDEX ix_name ON `users`(name);
CREATE INDEX ix_email ON `users`(email);
This would allow you to query the users bucket for a document's name or email properties, thus:
SELECT name, email FROM `users` WHERE name="Monty Python" OR email="monty@python.org";
Indexes help improve the performance of a query. When an index includes the actual values of all the fields specified in the query, the index covers the query, and eliminates the need to fetch the actual values from the Data Service. An index, in this case, is called a covering index, and the query is called a covered query. For more information, see Covering Indexes.
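One way to check whether a particular query is covered is to run it with EXPLAIN and look for covers entries in the resulting plan; a rough sketch follows (the exact plan layout varies by server version):

// Sketch: a covering IndexScan in the plan lists the fields it "covers".
var explain = new QueryRequest("EXPLAIN SELECT name FROM `users` WHERE name = $1")
    .AddPositionalParameter("Monty Python");

var plan = bucket.Query<dynamic>(explain);
// plan.Rows holds the plan JSON; look for a "covers" array on the index scan operator.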
You can also create these indexes from the SDK:
var bucketManager = bucket.CreateManager();

// Primary index, used for non-covered queries
await bucketManager.CreateN1qlPrimaryIndexAsync();

// Secondary indexes on the name and email fields
await bucketManager.CreateN1qlIndexAsync("ix_name", false, "name");
await bucketManager.CreateN1qlIndexAsync("ix_email", false, "email");
Index Building
Creating indexes on buckets with many existing documents can take a long time. You can build indexes in the background, creating deferred indexes. The deferred indexes can be built together, rather than having to re-scan the entire bucket for each index.
CREATE PRIMARY INDEX ON `users` WITH {"defer_build": true};
CREATE INDEX ix_name ON `users`(name) WITH {"defer_build": true};
CREATE INDEX ix_email ON `users`(email) WITH {"defer_build": true};
BUILD INDEX ON `users`(`#primary`, `ix_name`, `ix_email`);
The indexes are not built until the BUILD INDEX statement is executed. At this point, the server scans all of the documents in the users bucket and indexes them for all of the applicable indexes (in this case, those that have a name or email field).
Building deferred indexes can also be done via the SDK:
var bucketManager = bucket.CreateManager();

// Create the indexes as deferred, so they are not built immediately
await bucketManager.CreateN1qlPrimaryIndexAsync(true);
await bucketManager.CreateN1qlIndexAsync("ix_name", true, "name");
await bucketManager.CreateN1qlIndexAsync("ix_email", true, "email");

// Trigger the build of all deferred indexes, then wait for them to come online
await bucketManager.BuildN1qlDeferredIndexesAsync();
await bucketManager.WatchN1qlIndexesAsync(new List<string> { "ix_name", "ix_email", "#primary" }, TimeSpan.FromSeconds(2));
Index Consistency
Because indexes are by design outside the Data Service, they are eventually consistent with respect to changes to documents and, depending on how you issue the query, may at times not contain the most up-to-date information. This may especially be the case when deployed in a write-heavy environment: changes may take some time to propagate over to the index nodes.
The asynchronous updating nature of Global Secondary Indexes (GSIs) means that they can be very quick to query and do not require the additional overhead of index recalculations at the time documents are modified. N1QL queries are forwarded to the relevant indexes, and the queries are run against the indexed information, rather than the documents as they exist in the Data Service.
With default query options, the query service will rely on the current index state: the most up-to-date document versions are not retrieved, and only the indexed versions are queried. This provides the best performance; only updates occurring within a small time window may not yet have been indexed.
For cases where consistency is more important than performance, the scan_consistency property of a query may be set to REQUEST_PLUS, ensuring that indexes are synchronized with the Data Service before querying.
The main options are not_bounded (the default), which executes the query immediately against the index as it currently stands, and request_plus, which waits for all mutations received by the Data Service up to the time the query was issued to be indexed before the query is executed.
Consider the following snippet:
var randomNumber = new Random().Next(0, 10000000);
bucket.Upsert($"user:{randomNumber}", new {
name = "Brass Doorknob",
email = "brass.doorknob@juno.com",
random = randomNumber
});
var query = new QueryRequest("SELECT name, email, random, META(default).id FROM default WHERE $1 IN name")
    .AddPositionalParameter("Brass");
var result = bucket.Query<dynamic>(query);
The above query may not return the newly inserted document because it has not yet been indexed. The query is issued immediately after document creation, and in this case the query engine may process the query before the index has been updated.
If the above code is modified to use RequestPlus, query processing will wait until all updates have been processed and recalculated into the index from the point in time the query was received:
var query = new QueryRequest("SELECT name, email, random, META(default).id FROM default WHERE $1 IN name")
    .AddPositionalParameter("Brass")
// If this line is removed, the latest 'random' field might not be present
.ScanConsistency(ScanConsistency.RequestPlus);
var result = bucket.Query<dynamic>(query);
This gives the application developer more control over the balance between performance (latency) and consistency, and allows optimization on a case-by-case basis.